Introduction to MPI Flashcards

(36 cards)

1
Q

Which two functions need to be called to initialise and finalise the MPI environment?

A
  1. MPI_Init to initialise the environment
  2. MPI_Finalize to cleanly shut down the MPI environment
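
A minimal sketch of how the two calls bracket an MPI program (everything between the calls is illustrative):

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);   /* initialise the MPI environment */
      /* ... parallel work goes here ... */
      MPI_Finalize();           /* cleanly shut the environment down */
      return 0;
  }
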
2
Q

Which function is used to find the number of processes there are?

A

MPI_Comm_size returns the number of processes in the given communicator

3
Q

Which function is used to find the current process rank?

A

MPI_Comm_rank returns the rank of the calling process
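
A sketch covering this card and the previous one: both functions take a communicator and write their result through an int pointer.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int size, rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process, 0..size-1 */
      printf("Process %d of %d\n", rank, size);
      MPI_Finalize();
      return 0;
  }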

4
Q

What is a communicator?

A

A communicator is a grouping of processes. There is a predefined global communicator known as MPI_COMM_WORLD containing all processes.

5
Q

Which variable is accepted by MPI_Comm_size and MPI_Comm_rank?

A

comm - the communicator that is being queried.

6
Q

Which compile command is used for MPI?

A

mpicc
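
A typical invocation, assuming a source file named hello.c (an illustrative name); mpicc wraps the underlying C compiler and adds the MPI headers and libraries:

  mpicc -o hello hello.c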

7
Q

Which command is used to run an MPI program?

A

mpirun - starts multiple instances of the executable

8
Q

How is the number of MPI processes specified?

A

A flag is passed to mpirun which specifies the number of processes to start.
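
With common implementations such as Open MPI and MPICH the flag is -np (or -n); for example, to start four processes:

  mpirun -np 4 ./hello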

9
Q

How do processes communicate in MPI?

A

By passing messages

10
Q

What is point-to-point communication?

A

Message passing between a pair of processes in a communicator

11
Q

Which MPI functions are used for point-to-point messaging?

A
  1. MPI_Send
  2. MPI_Recv
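
A sketch of a point-to-point exchange, assuming the program is run with at least two processes: rank 0 sends one int to rank 1.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, value;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0) {
          value = 42;
          /* buffer, count, datatype, destination rank, tag, communicator */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          /* same shape, plus a source rank and a status argument */
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("Rank 1 received %d\n", value);
      }
      MPI_Finalize();
      return 0;
  }
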
12
Q

What does MPI_Send do?

A

It sends a message to another process. Its arguments describe the data to be sent (buffer, count, datatype) and the connection to be made (destination rank, tag, communicator).

13
Q

What does MPI_Recv do?

A

It receives a message from another process. Its arguments describe the data to be received (buffer, count, datatype) and the connection to be made (source rank, tag, communicator).

14
Q

How are message lengths specified?

A

As a count of elements of an MPI datatype (e.g. MPI_INT) rather than in bytes.

15
Q

What is collective communication?

A

An alternative to point-to-point messaging that allows communication involving all processes in a communicator in a single call.

16
Q

What are the 4 types of MPI collective functions?

A
  1. MPI_Bcast
  2. MPI_Scatter
  3. MPI_Gather / MPI_Allgather
  4. MPI_Reduce / MPI_Allreduce
17
Q

What does MPI_Bcast do?

A

Send data from one process to all the others in the specified communicator (one-to-all). All processes receive the same data.
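
A sketch with illustrative values: before the call only rank 0 holds the data; afterwards every rank does.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, params[3] = {0, 0, 0};
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0) {                   /* only the root has the data */
          params[0] = 1; params[1] = 2; params[2] = 3;
      }
      MPI_Bcast(params, 3, MPI_INT, 0, MPI_COMM_WORLD);
      printf("Rank %d has %d %d %d\n", rank, params[0], params[1], params[2]);
      MPI_Finalize();
      return 0;
  }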

18
Q

What does MPI_Scatter do?

A

Distributes data from one process to all the others in a communicator. Each process receives a different subset of the data.
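
A sketch in which rank 0 distributes one int to each process (the fixed buffer size is an illustrative assumption):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, mine;
      int data[64];                      /* assumes at most 64 processes */
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      if (rank == 0)
          for (int i = 0; i < size; i++) data[i] = 10 * i;
      /* rank r receives data[r]; the counts are per process */
      MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
      printf("Rank %d received %d\n", rank, mine);
      MPI_Finalize();
      return 0;
  }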

19
Q

What does MPI_Gather do?

A

Gathers data from all processes in a communicator to one process (all-to-one). Each process contributes a subset of the data received by the root.
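
The inverse of the scatter sketch above: each rank contributes one int and rank 0 collects them in rank order.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, mine;
      int all[64];                       /* only significant on the root */
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      mine = rank * rank;                /* this rank's contribution */
      MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);
      if (rank == 0)
          for (int i = 0; i < size; i++)
              printf("all[%d] = %d\n", i, all[i]);
      MPI_Finalize();
      return 0;
  }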

20
Q

What does MPI_Allgather do?

A

Gathers data from all the processes in a communicator to all processes (all-to-all). Each process contributes a subset of the data received. All the processes receive the result.

21
Q

What does MPI_Reduce do?

A

Carries out a reduction and returns the result to the specified process.
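
A sketch summing one value per process on rank 0 (the contributed values are illustrative); MPI_Allreduce takes the same arguments minus the root rank and delivers the result to every process.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, local, total;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      local = rank + 1;                  /* each process contributes one value */
      /* combine with MPI_SUM; only rank 0 receives the result */
      MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("Sum = %d\n", total);
      MPI_Finalize();
      return 0;
  }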

22
Q

What does MPI_Allreduce do?

A

Carries out a reduction and returns the result to all processes.

23
Q

What is a blocking communication?

A

MPI_Send blocks until it is safe to modify the send buffer (which may be before the message is actually delivered, if it has been buffered); MPI_Recv blocks until the message has been received.

24
Q

Why are blocking communications used?

A

The blocking behaviour helps in writing correct programs: a send buffer cannot be overwritten before it is safe to do so, and a receive buffer is not read before its data has arrived.

25
Q

What is the issue with blocking communications?

A

If send and receive operations are not correctly matched, there may be a deadlock.

26
Q

What are the non-blocking versions of MPI_Send and MPI_Recv?

A

MPI_Isend and MPI_Irecv

27
Q

Why are non-blocking communications used?

A

They are used to avoid deadlocks, as sends and receives do not have to be matched so carefully.

28
Q

Why is non-blocking communication more efficient?

A

Computation and communication can overlap: a process can do other work while waiting for a message to complete.

29
Q

How does a program wait for non-blocking communications to complete?

A

By calling MPI_Wait or MPI_Waitall, which block until the specified communications have completed.

30
Q

What does MPI_Wait take as input?

A

The handle of the request to wait for.

31
Q

What does MPI_Waitall take as input?

A

The number of requests to wait for and an array of their request handles.
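
A sketch of cards 26-31 together, assuming exactly two processes: each rank posts a receive and a send without blocking, could do other work, then waits for both requests.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, partner, send_val, recv_val;
      MPI_Request reqs[2];
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      send_val = rank;
      partner = 1 - rank;                /* valid for ranks 0 and 1 only */
      /* post both operations; neither call blocks */
      MPI_Irecv(&recv_val, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
      MPI_Isend(&send_val, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);
      /* ... computation could overlap the communication here ... */
      MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* buffers now safe to reuse */
      printf("Rank %d received %d\n", rank, recv_val);
      MPI_Finalize();
      return 0;
  }
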
32
Q

What type of communication is collective communication?

A

Blocking communication. It synchronises all the processes in the communicator.

33
Q

How is work distributed between MPI processes?

A

Domain decomposition.

34
Q

What does the rank-zero process do?

A

It typically reads input parameters or initial conditions from files, and collects the results from the other processes.

35
Q

How are differing time steps handled in PDEs?

A

If the time step is determined by a stability constraint and the velocity varies, different parts of the domain will have different maximum allowable time steps. The time step used must be the minimum value over the whole computational domain (a natural use of MPI_Allreduce with MPI_MIN).

36
Q

How are neighbouring domains handled?

A

Halos of the values from neighbouring domains are stored locally, updated, and exchanged between domains when necessary.
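
A sketch of a one-dimensional halo exchange, assuming each process owns N interior cells of an array u (N and the cell values are illustrative). MPI_Sendrecv, a combined send and receive that avoids the matching deadlocks of card 25, shifts data right and then left; MPI_PROC_NULL turns the exchanges at the domain boundaries into no-ops.

  #include <mpi.h>

  #define N 8                            /* interior cells per process */

  int main(int argc, char **argv)
  {
      int rank, size, left, right;
      double u[N + 2];                   /* u[0] and u[N+1] are halo cells */
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      for (int i = 1; i <= N; i++) u[i] = rank;
      left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
      right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
      /* send rightmost interior cell right, receive left halo from the left */
      MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 0,
                   &u[0], 1, MPI_DOUBLE, left,  0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      /* send leftmost interior cell left, receive right halo from the right */
      MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  1,
                   &u[N + 1], 1, MPI_DOUBLE, right, 1,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      MPI_Finalize();
      return 0;
  }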