MPI Flashcards

(32 cards)

1
Q

What is the function used to initialise the MPI execution environment?

A

int MPI_Init(int *argc, char ***argv)

2
Q

What is the function used to finalise the MPI execution environment?

A

int MPI_Finalize(void)

3
Q

What does int MPI_Comm_size(MPI_Comm comm, int *size) do?

A

Writes the number of MPI processes in the specified communicator into the size variable

4
Q

What does int MPI_Comm_rank(MPI_Comm comm, int *rank) do?

A

Writes the rank of the calling process in the specified communicator into the rank variable

5
Q

What is the range of ranks for MPI processes?

A

From 0 to size-1

6
Q

What is the predefined communicator that refers to all concurrent processes in an MPI program?

A

MPI_COMM_WORLD
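
A minimal sketch tying cards 1-6 together (the printf text is illustrative, not from the cards):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);               /* start the MPI environment */

      int size, rank;
      MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank, 0 to size-1 */

      printf("Hello from rank %d of %d\n", rank, size);

      MPI_Finalize();                       /* shut down the MPI environment */
      return 0;
  }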

7
Q

What are the two basic functions for point-to-point communication in MPI?

A

MPI_Send and MPI_Recv

8
Q

What is the purpose of the MPI_Send function? What is the full function definition?

A
  • Sends a message to another process
  • int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
9
Q

What is the purpose of the MPI_Recv function? What is the full function definition?

A
  • Receives a message from another process
  • int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
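
A minimal sketch of a blocking exchange between ranks 0 and 1, assuming the rank/size setup from the sketch under card 6 (the value 42 and tag 0 are illustrative):

  int value;
  if (rank == 0) {
      value = 42;
      /* send one int to rank 1 with tag 0 */
      MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (rank == 1) {
      /* receive one int from rank 0 with a matching tag */
      MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
  }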
10
Q

List some MPI C data types.

A
  • MPI_CHAR
  • MPI_INT
  • MPI_FLOAT
  • MPI_DOUBLE
  • etc
11
Q

What are collectives? Why use them?

A
  • Operations for communicating within a group of processes specified by a communicator
  • Collectives are typically more efficient, and simpler to write, than building the same communication pattern from many point-to-point calls
12
Q

What is the purpose of MPI_Bcast? What is the full function definition?

A
  • Broadcasts data from one process to all others in a communicator (one-to-all)
  • int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
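
A minimal sketch (the buffer name params and its contents are illustrative):

  double params[4];
  if (rank == 0) {
      /* only the root fills the buffer, e.g. from an input file */
      params[0] = 0.1; params[1] = 1.0; params[2] = 2.5; params[3] = 100.0;
  }
  /* after this call every rank holds the same four doubles */
  MPI_Bcast(params, 4, MPI_DOUBLE, 0, MPI_COMM_WORLD);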
13
Q

What is the purpose of MPI_Scatter? What is the full function definition?

A
  • Scatters data from one process to all others in a communicator
  • Each process receives a subset of the data
  • int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
14
Q

What is the purpose of MPI_Gather? What is the full function definition?

A
  • Gathers data from all processes in a communicator to one process
  • int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
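
A sketch combining cards 13 and 14: the root scatters one chunk to each rank, each rank processes its chunk, and the root gathers the results (CHUNK and the doubling step are illustrative; needs <stdlib.h> for malloc):

  enum { CHUNK = 8 };                 /* elements per rank */
  double *global = NULL;
  double local[CHUNK];

  if (rank == 0) {
      global = malloc(CHUNK * size * sizeof(double));
      for (int i = 0; i < CHUNK * size; i++)
          global[i] = i;              /* root initialises the full array */
  }

  /* hand CHUNK elements to each rank, in rank order */
  MPI_Scatter(global, CHUNK, MPI_DOUBLE, local, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

  for (int i = 0; i < CHUNK; i++)
      local[i] *= 2.0;                /* each rank works on its own subset */

  /* collect the chunks back on the root, again in rank order */
  MPI_Gather(local, CHUNK, MPI_DOUBLE, global, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);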
15
Q

What is the purpose of MPI_Allgather? What is the full function definition?

A
  • Gathers data from all processes in a communicator to all processes
  • All processes receive the result
  • int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
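
A minimal sketch: each rank contributes one int and every rank ends up with the full array (buffer names are illustrative; needs <stdlib.h> for malloc):

  int mine = rank * rank;
  int *all = malloc(size * sizeof(int));
  /* afterwards, all[i] holds rank i's contribution on every process */
  MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);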
16
Q

What is the purpose of MPI_Reduce? What is the full function definition?

A
  • Combines values from all processes with a specified operation and returns the result to the root process
  • int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype type, MPI_Op op, int root, MPI_Comm comm)
17
Q

What is the purpose of MPI_Allreduce? What is the full function definition?

A
  • Combines values from all processes with a specified operation and returns the result to all processes
  • int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype type, MPI_Op op, MPI_Comm comm)
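
A sketch covering cards 16 and 17, using the predefined operation MPI_SUM on one double per rank (local_sum and total are illustrative names):

  double local_sum = rank + 1.0;  /* stand-in for a partial result */
  double total;

  /* only rank 0 receives the combined sum */
  MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

  /* every rank receives the combined sum */
  MPI_Allreduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);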
18
Q

What command is used to compile an MPI program in C?

A

mpicc -o program program.c

19
Q

What command is used to run an MPI program?

A

mpirun -np 5 program

where 5 is the number of processes

20
Q

What is blocking communication in MPI? What is a benefit?

A
  • Communication where execution blocks until the message is received or it is safe to change the buffers
  • It helps avoid overwriting buffers before it is safe to do so
21
Q

What are non-blocking communications? Why can they be good? Why can they be bad?

A
  • Communication calls that return immediately, before the transfer has completed
  • Good: they allow simultaneous computation and communication, and help avoid deadlocks
  • Bad: we must ensure the program still works correctly and does not overwrite buffers before it is safe
22
Q

What is the purpose of MPI_Isend? What is the full function definition?

A
  • Sends a message without blocking
  • int MPI_Isend(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
23
Q

What is the purpose of MPI_Irecv? What is the full function definition?

A
  • Receives a message without blocking
  • int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
24
Q

What is the purpose of MPI_Wait? What is the full function definition?

A
  • Waits for a non-blocking request to complete
  • int MPI_Wait(MPI_Request *request, MPI_Status *status)
25
Q

What is the purpose of MPI_Waitall? What is the full function definition?

A
  • Waits for all non-blocking requests to complete
  • int MPI_Waitall(int count, MPI_Request array_of_requests[], MPI_Status array_of_statuses[])
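
A sketch tying cards 22-25 together: ring neighbours exchange one value without blocking, then wait for both transfers (the neighbour arithmetic is illustrative, and rank/size come from the card 6 setup):

  int left  = (rank - 1 + size) % size;
  int right = (rank + 1) % size;
  double sendval = rank, recvval;
  MPI_Request reqs[2];

  /* both calls return immediately; transfers proceed in the background */
  MPI_Irecv(&recvval, 1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
  MPI_Isend(&sendval, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

  /* ... computation that does not touch sendval or recvval ... */

  /* only after completion is it safe to reuse the buffers */
  MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);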
26
Q

How do collectives affect non-blocking communication?

A

A blocking collective synchronises all the processes in the communicator
27
Q

What is Domain Decomposition in MPI?

A

A method to distribute work by dividing the computational domain among different MPI processes
28
Q

How is a PDE solver typically set up using MPI collectives?

A
  1. Rank zero reads input parameters and broadcasts them
  2. Rank zero reads initial conditions and scatters them
  3. Rank zero gathers results and writes output
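
A skeleton of that pattern, built from the collectives in cards 12-14 (read_params, read_initial and write_output are hypothetical I/O helpers, and NPARAMS/n_local are illustrative sizes):

  if (rank == 0) read_params(params);                           /* hypothetical helper */
  MPI_Bcast(params, NPARAMS, MPI_DOUBLE, 0, MPI_COMM_WORLD);    /* step 1 */

  if (rank == 0) read_initial(global);                          /* hypothetical helper */
  MPI_Scatter(global, n_local, MPI_DOUBLE,
              local,  n_local, MPI_DOUBLE, 0, MPI_COMM_WORLD);  /* step 2 */

  /* ... timestepping loop on each rank's subdomain ... */

  MPI_Gather(local,  n_local, MPI_DOUBLE,
             global, n_local, MPI_DOUBLE, 0, MPI_COMM_WORLD);   /* step 3 */
  if (rank == 0) write_output(global);                          /* hypothetical helper */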
29
Q

What is the significance of using the same timestep in MPI PDE solvers?

A

All processes must use the same timestep: the minimum value across the whole domain
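
In code this is typically a single MPI_Allreduce with the predefined MPI_MIN operation (dt_local and dt are illustrative names):

  double dt_local = /* stable timestep for this rank's subdomain */ 0.01;
  double dt;
  MPI_Allreduce(&dt_local, &dt, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);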
30
Q

What are halos in the context of finite difference stencils?

A

Locally stored copies of boundary rows/columns owned by a neighbouring MPI process, so the stencil can be applied at subdomain boundaries
31
Q

How often do halos need to be updated?

A

Once per timestep
32
Q

What is a potential issue when using MPI_Send and MPI_Recv for halo communication?

A

Blocking calls can incur significant overhead, and risk deadlock if neighbouring processes both try to send before receiving
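
A sketch of the usual remedy: post the halo exchange with the non-blocking calls from cards 22-25 (a 1-D decomposition with neighbour ranks up and down; u, n_local and the one-value counts are illustrative):

  /* u[0] and u[n_local+1] are halo values; u[1..n_local] are owned */
  MPI_Request reqs[4];
  MPI_Irecv(&u[0],         1, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &reqs[0]);
  MPI_Irecv(&u[n_local+1], 1, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &reqs[1]);
  MPI_Isend(&u[1],         1, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &reqs[2]);
  MPI_Isend(&u[n_local],   1, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &reqs[3]);
  MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);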