Message Passing Interface Flashcards

1
Q

Purpose of MPI_Init

A

Initializes the MPI execution environment; it must be called before any other MPI communication function.

2
Q

Purpose of MPI_Finalize

A

Shuts down the MPI environment; it must be the last MPI call in the program.

3
Q

What do MPI functions return?

A

An integer error code: MPI_SUCCESS if the call succeeded, an implementation-defined error code otherwise.

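A minimal C sketch (not part of the original cards) tying the first three cards together; note that MPI's default error handler aborts on failure, so the explicit check of the return code is mostly illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        /* Initialize the MPI execution environment. */
        if (MPI_Init(&argc, &argv) != MPI_SUCCESS) {
            fprintf(stderr, "MPI_Init failed\n");
            return 1;
        }
        printf("MPI is running\n");
        /* Shut the environment down; the last MPI call in the program. */
        MPI_Finalize();
        return 0;
    }
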
4
Q

What is a communicator?

A

An object describing a group of processes that can communicate with one another; every MPI communication operation takes place within a communicator.

5
Q

What is MPI_COMM_WORLD?

A

The predefined communicator containing all the processes involved in your parallel run.

6
Q

What is a processor rank?
How does one find the rank of a process within a communicator?

A

A unique integer identifier assigned to each process within a communicator.

MPI_Comm_rank()

7
Q

How are ranks numbered within a communicator?

A

From zero to the number of processes minus 1

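A minimal C sketch of cards 4 through 7 (assumes the usual mpicc/mpirun toolchain):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        /* This process's rank within MPI_COMM_WORLD: 0 .. size-1. */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* Total number of processes in the parallel run. */
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }
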
8
Q

What does MPI_Comm_split do?

A

Divides a communicator into disjoint subcommunicators.

9
Q

What role does the color argument to MPI_Comm_split play?

A

Controls subset assignment: processes that pass the same color are placed in the same new subcommunicator.

10
Q

What role does the key argument to MPI_Comm_split play?

A

Controls the rank ordering of processes within each new subcommunicator (ascending key, with ties broken by rank in the old communicator).

11
Q

What is global numbering?

A

Numbering the processes across the whole MPI world (MPI_COMM_WORLD), independent of any subcommunicator.

12
Q

What is local numbering?

A

Numbering the processes within a particular communicator

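A C sketch of cards 8 through 12; the even/odd split is an arbitrary example, not from the cards:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);    /* global numbering */

        /* color: processes passing the same value join the same
           subcommunicator (here: evens vs. odds).
           key: orders ranks inside each subcommunicator (here: old order). */
        MPI_Comm subcomm;
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);

        int local_rank;
        MPI_Comm_rank(subcomm, &local_rank);           /* local numbering */
        printf("global rank %d -> local rank %d\n", world_rank, local_rank);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }
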
13
Q

What does it mean when an MPI program is loosely synchronous?

A

Tasks synchronize only when they need to interact; otherwise they run asynchronously.

14
Q

What 3 pieces of information describe the send data passed to a collective communication operation?

A

Send buffer (starting address)
Count (number of elements)
Data type (e.g., MPI_INT)

15
Q

What function does MPI_Barrier perform?

A

Blocks each calling process until all processes in the communicator have reached the barrier, synchronizing them.

16
Q

What function does MPI_Bcast perform?

A

Broadcasts data from one root process to all other processes in the communicator.
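
A C sketch combining cards 14 through 16 (the toy values are not from the cards): the send data is described by the (buffer, count, datatype) triple, and MPI_Barrier synchronizes before printing:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int data[4] = {0, 0, 0, 0};
        if (rank == 0) { data[0] = 1; data[1] = 2; data[2] = 3; data[3] = 4; }

        /* Send data = (buffer, count, datatype) = (data, 4, MPI_INT);
           root 0 sends, every other process receives into the same buffer. */
        MPI_Bcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD);

        MPI_Barrier(MPI_COMM_WORLD);   /* no process proceeds until all arrive */
        printf("rank %d now has data[3] = %d\n", rank, data[3]);

        MPI_Finalize();
        return 0;
    }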

17
Q

What function does MPI_Reduce perform?

A

Combines data from all processes into a single result (e.g., a sum or maximum) on the root process.

18
Q

What is the difference between MPI_Reduce and MPI_Allreduce?

A

MPI_Allreduce returns the combined result to all processes rather than only to the root (and therefore takes no root argument).
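
A C sketch contrasting the two calls (summing rank+1 across processes is an arbitrary example):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int local = rank + 1, total = 0;

        /* MPI_Reduce: only root 0 receives the sum. */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("Reduce: total = %d\n", total);

        /* MPI_Allreduce: every process receives the sum (note: no root). */
        MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("Allreduce on rank %d: total = %d\n", rank, total);

        MPI_Finalize();
        return 0;
    }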

19
Q

What does MPI_Scatter do?

A

Distributes data from a root process so that each process receives a different subset of it.

20
Q

What does MPI_Gather do?

A

Collects data from all processes onto a single root process (the inverse of MPI_Scatter).
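
A C sketch of a scatter/gather round trip (toy data; each process handles one int for simplicity):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *all = NULL;
        if (rank == 0) {                  /* only the root owns the full array */
            all = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++) all[i] = 10 * i;
        }

        int mine;
        /* Scatter: process i receives element i of the root's array. */
        MPI_Scatter(all, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
        mine += 1;
        /* Gather: the modified elements are collected back on the root. */
        MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) { printf("all[0] = %d\n", all[0]); free(all); }
        MPI_Finalize();
        return 0;
    }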

21
Q

What does MPI_Alltoall do?

A

In effect, a collection of simultaneous scatters and gathers: every process sends a distinct block of data to every other process.
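
A C sketch of MPI_Alltoall (the send values are arbitrary, chosen so each block is identifiable):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *send = malloc(size * sizeof(int));
        int *recv = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            send[i] = 100 * rank + i;    /* block i is destined for process i */

        /* Every process sends block i of send[] to process i and receives
           one block from every process into recv[]. */
        MPI_Alltoall(send, 1, MPI_INT, recv, 1, MPI_INT, MPI_COMM_WORLD);

        printf("rank %d received %d from rank 0\n", rank, recv[0]);
        free(send); free(recv);
        MPI_Finalize();
        return 0;
    }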

22
Q

What does MPI_Scan do?

A

Performs a running (prefix) reduction, keeping the partial results: process i receives the reduction of the values from processes 0 through i.
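
A C sketch of an inclusive prefix sum with MPI_Scan (the operand rank+1 is arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int local = rank + 1, prefix = 0;
        /* Process i receives 1 + 2 + ... + (i + 1): its partial result. */
        MPI_Scan(&local, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("rank %d: prefix sum = %d\n", rank, prefix);

        MPI_Finalize();
        return 0;
    }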

23
Q

What does it mean if message-passing operations are buffered?

A

A send operation can complete whether or not the matching receive has been posted, because the message is stored in an intermediate buffer.

24
Q

What does it mean if message-passing operations are blocking?

A

The sending process is blocked until the receiving process has received the message (the operation does not return until it is safe to reuse the send buffer).

25
Q

How many calls does it take to receive a message using a non-blocking protocol?

A

2 calls: the first call initiates the receive operation and specifies the buffer where the message will be stored; the second checks whether the receive operation has completed.
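
A C sketch of the two-call non-blocking receive (assumes at least two processes; MPI_Test could replace MPI_Wait to poll without blocking):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value;
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Request req;
            /* Call 1: start the receive and describe the buffer. */
            MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
            /* ... useful computation could overlap the transfer here ... */
            /* Call 2: block until the receive has completed. */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }
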
26
Q

What impact does buffering have on program portability?

A

Different systems provide different amounts of buffer space, so a program that relies on buffering may run on one machine yet deadlock on another; buffering-dependent programs are therefore not portable.

27
Q

What techniques can be used to avoid deadlock in MPI communication?

A

Use non-blocking communication.
Use tie-breaking (e.g., odd ranks send first while even ranks receive) to coordinate communication.
Use MPI_Sendrecv to break the circular dependency between send and receive calls (see the sketch below).
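
A C sketch of the third technique, a ring exchange with MPI_Sendrecv (the neighbor arithmetic is the usual modular pattern, not from the card):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;          /* neighbor to send to */
        int left  = (rank + size - 1) % size;   /* neighbor to receive from */

        int out = rank, in = -1;
        /* If every process called MPI_Send before MPI_Recv, the ring could
           deadlock once messages exceed the available buffer space;
           MPI_Sendrecv pairs the two so the library schedules them safely. */
        MPI_Sendrecv(&out, 1, MPI_INT, right, 0,
                     &in,  1, MPI_INT, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %d from rank %d\n", rank, in, left);
        MPI_Finalize();
        return 0;
    }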