Chapter 3 - MPI Flashcards
What are the two types of systems in the world of MIMD?
Distributed memory: The memory associated with a core is only accessible to that core.
Shared memory: Collection of cores connected to a global shared memory.
What is message passing?
A way of communicating between processes on distributed-memory systems.
A process running on one core sends a message with a send() call; a process on another core calls receive() to get the message.
What is MPI?
Message Passing Interface
A library implementation of message-passing communication.
What is collective communication?
Functions that allow communication among multiple (more than two) processes.
What is a rank?
A non-negative identifier of a process running in MPI.
n processes -> ranks 0, 1, …, (n-1)
What library must be included to use MPI?
#include <mpi.h>
How is MPI initialized?
MPI_Init(int* argc_p, char*** argv_p);
argc_p and argv_p: pointers to main's arguments, argc and argv
If the program does not use them, pass NULL for both
In main() using arguments:
MPI_Init(&argc, &argv);
How do you get the communicator size in MPI?
MPI_Comm_size(MPI_Comm comm_name, int* size)
What is the name of the global communicator?
MPI_COMM_WORLD
Set up by MPI_Init();
How do you get a process’s rank within a communicator?
MPI_Comm_rank(MPI_Comm comm_name, int* rank)
How is an MPI program finalized?
MPI_Finalize(void);
Any resources allocated for MPI are freed.
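Putting these calls together, a minimal sketch of a complete MPI program (the printed message is illustrative):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int comm_sz;   /* number of processes */
    int my_rank;   /* this process's rank */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    printf("Process %d of %d\n", my_rank, comm_sz);

    MPI_Finalize();   /* free all resources allocated for MPI */
    return 0;
}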
What does
mpiexec -n <n> ./program
do when running an MPI program?
mpiexec tells the system to start <n> instances of the program.
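For example, compiling with the mpicc wrapper and launching four instances (the program name is illustrative):

mpicc -o mpi_hello mpi_hello.c
mpiexec -n 4 ./mpi_hello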
What is a MPI communicator?
A collection of processes that can send messages to each other.
How can MPI produce SPMD programs?
Processes branch out doing different tasks based on their ranks.
if-else
rank = 0 can print, rank = 1 send, rank = 2 receive
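A minimal sketch of such rank-based branching, assuming MPI has been initialized as above (the message value is illustrative):

int my_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

if (my_rank == 0) {
    printf("Rank 0 reporting\n");   /* rank 0 prints */
} else if (my_rank == 1) {
    int msg = 42;
    /* rank 1 sends one int to rank 2 with tag 0 */
    MPI_Send(&msg, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
} else if (my_rank == 2) {
    int msg;
    /* rank 2 receives one int from rank 1 with tag 0 */
    MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}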
What is the syntax of MPI_Send()?
int MPI_Send(
void* buffer,
int message_size,
MPI_Datatype type,
int dest,
int tag,
MPI_Comm communicator
);
buffer: Holds the content of the message to be sent
message_size: Number of elements to send from the buffer
type: MPI_CHAR, MPI_DOUBLE, etc.
dest: Destination rank, who is receiving the message
tag: Non-negative int, can be used to distinguish messages that are otherwise identical
communicator: The communicator in which the message is sent
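A hedged example of a matching call: a non-zero rank sends a greeting string to rank 0 with tag 0, assuming my_rank is set and <stdio.h>/<string.h> are included (the message text is illustrative):

char greeting[100];
sprintf(greeting, "Greetings from process %d!", my_rank);
/* strlen(greeting) + 1 also sends the terminating '\0' */
MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);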
What is the syntax of MPI_Recv()?
int MPI_Recv(
void* msg_buf,
int size,
MPI_Datatype type,
int source,
int tag,
MPI_Comm communicator,
MPI_Status* status
)
msg_buf: Buffer to receive message in
size: Number of elements to receive
type: Types of elements in message
source: Source rank, rank that sent message
tag: Tag should match the tag from the send
communicator: Must match the communicator at the send
status: When not using the status, pass MPI_STATUS_IGNORE
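A matching receive for the send sketched above, run on rank 0 and assuming a 100-char buffer is large enough (source rank 1 is illustrative):

char greeting[100];
/* blocks until the message from rank 1 with tag 0 arrives */
MPI_Recv(greeting, 100, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("%s\n", greeting);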
What conditions must be met for a message to be successfully sent by process a and received by process b?
dest = b
src = a
comm_a = comm_b
tag_a = tag_b
buffer_a/size_a/type_a and buffer_b/size_b/type_b must be compatible
Most of the time, if type_a = type_b and size_b >= size_a, the message will be successfully received
What is a wildcard argument in MPI communication
If a receiver will be receiving multiple messages from multiple source ranks, and it does not know the order it will receive, it can loop through all the MPI_recv() calls and pass the wildcard argument: MPI_ANY_SOURCE to allow any order of ranks to send messages.
Similarly, if a process will receive multiple messages from another process, but with different tags, it can do the same but with the wildcard argument MPI_ANY_TAG
Only receivers can use wildcard arguments
There is no communicator wildard argument
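A sketch of rank 0 collecting one greeting from each of the other comm_sz - 1 processes in arrival order, assuming the setup from the earlier examples:

char greeting[100];
for (int q = 1; q < comm_sz; q++) {
    /* accept the next message, whichever rank sent it */
    MPI_Recv(greeting, 100, MPI_CHAR, MPI_ANY_SOURCE, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("%s\n", greeting);
}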
What is the MPI_Status used in MPI_Recv?
A struct with at least the three members:
MPI_SOURCE
MPI_TAG
MPI_ERROR
Before the receive call, declare a status struct and pass its address:
MPI_Status status;
MPI_Recv(…, &status);
These are useful if a process uses wildcards and then needs to figure out the source or tag of a message; the members can be examined after the receive.
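For example, after a double-wildcard receive the actual source and tag can be read from the struct (buffer as in the earlier sketches):

MPI_Status status;
MPI_Recv(greeting, 100, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
printf("source = %d, tag = %d\n", status.MPI_SOURCE, status.MPI_TAG);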
What is MPI_Get_count() used for?
Used to figure out how many elements of the provided type were received in a message.
MPI_Get_count(
MPI_Status* status, (in)
MPI_Datatype type, (in)
int* count (out)
)
status: Status struct passed to recv()
type: Type passed in recv
count: Number of elements received in the message
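Continuing the status example above, a sketch of recovering the element count:

int count;
MPI_Get_count(&status, MPI_CHAR, &count);   /* number of MPI_CHAR elements received */
printf("received %d chars\n", count);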
What happens if MPI buffers an MPI_Send message?
MPI copies the complete message into its own internal storage.
The MPI_Send() call will return immediately.
The message might not be in transmission yet, but since it is now stored internally, the send buffer can be reused for other purposes.
What happens if MPI_Send blocks?
The process will wait until it can begin transmitting the message.
The MPI_Send() call will not return until then.
When will MPI_Send block?
It depends on the MPI implementation, but many implementations use a "cutoff" message size: messages at or below the cutoff are buffered, while messages that exceed it cause MPI_Send to block.
Does MPI_Recv block?
Yes, unlike MPI_Send, when MPI_Recv returns we know the message has been fully received.