MPI Flashcards
(16 cards)
What is the end goal of MPI?
Promote process-level parallelism through explicit communication (message passing) in distributed-memory environments
What is the basic definition of a shared-memory architecture?
All processes/threads on a node share a common address space
What is the basic definition of a distributed-memory architecture?
Each process has its own private memory; processes communicate through message passing
What does the mpirun command do?
It launches the requested number of processes and executes your program in each of them (e.g., mpirun -np 4 ./program starts four processes)
It also manages I/O forwarding, environment variables, and inter-process communication setup.
How do we initialize MPI? Why is it important?
We initialize MPI by calling MPI_Init, or MPI_Init_thread when multithreading is needed
It is important because this call sets up communication buffers and establishes the MPI environment for all ranks, and it must precede (almost) every other MPI call
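A minimal sketch of the usual initialization/finalization pattern (the body of main is just a placeholder):

    #include <mpi.h>

    int main(int argc, char **argv) {
        /* Set up the MPI environment; must precede (almost) all other MPI calls */
        MPI_Init(&argc, &argv);

        /* ... parallel work goes here ... */

        /* Tear the environment down before exiting */
        MPI_Finalize();
        return 0;
    }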
How do we query the communicator? Why is it important?
Using the commands: MPI_Comm_size and MPI_Comm_rank
This way each process can learn how many processes are participating (MPI_Comm_size) and obtain its unique identifier, its rank (MPI_Comm_rank), which lets you partition and coordinate the work
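A small sketch of querying size and rank and using them to partition work (the problem size and slicing scheme are only illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int size, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes participate */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id in [0, size) */

        /* Illustrative partitioning: each rank handles its own slice of 1000 items */
        int n = 1000;
        int begin = rank * n / size;
        int end   = (rank + 1) * n / size;
        printf("Rank %d of %d handles items [%d, %d)\n", rank, size, begin, end);

        MPI_Finalize();
        return 0;
    }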
How do we perform communication (point to point and collective)? Why is it important?
Using the commands:
- MPI_Send and MPI_Recv for point to point blocking communication
- MPI_Isend and MPI_Irecv for point to point non-blocking communication
- MPI_Bcast, MPI_Reduce, MPI_Alltoall, MPI_Barrier for collective communication
These calls matter because message passing is the only way ranks with private memories can exchange data and coordinate
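A minimal sketch of a blocking point-to-point exchange between rank 0 and rank 1 (assumes at least two ranks; the tag and payload are illustrative):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = 0;
        if (rank == 0) {
            value = 42;
            /* Blocking send: returns once the send buffer may be reused */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Blocking receive: returns once the data has arrived */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }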
What are the threads support levels?
Thread support levels can be specified when initializing MPI with threading (MPI_Init_thread), and they are:
- MPI_THREAD_SINGLE (single thread, equivalent to MPI_Init)
- MPI_THREAD_FUNNELED (only main thread calls MPI)
- MPI_THREAD_SERIALIZED (multiple threads may call MPI, but not concurrently)
- MPI_THREAD_MULTIPLE (fully concurrent MPI calls allowed)
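A sketch of requesting a thread support level and checking what the implementation actually provides (MPI_THREAD_FUNNELED here is just an example request):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided;
        /* Request FUNNELED: only the main thread will make MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        /* The thread levels are ordered, so a simple comparison works */
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "Requested thread level not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Finalize();
        return 0;
    }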
Why are the MPI communicators important?
An MPI communicator (MPI_Comm) defines both a set of processes that can exchange messages and a unique communication context that keeps those messages isolated from other communicators.
What are the possible problems in blocking operations?
Mainly deadlocks and poor communication/computation overlap (the program waits for the call to complete before doing any other work)
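A sketch of the classic blocking deadlock: both ranks post a blocking receive first, so neither ever reaches its send (assumes exactly two ranks; buffer contents are illustrative):

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, sendbuf = 1, recvbuf = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int other = 1 - rank;   /* assumes exactly 2 ranks */

        /* DEADLOCK: both ranks block in MPI_Recv, so neither reaches MPI_Send.
           Possible fixes: reorder send/recv per rank, use MPI_Sendrecv,
           or use non-blocking MPI_Isend/MPI_Irecv plus MPI_Wait. */
        MPI_Recv(&recvbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&sendbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }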
What are collective operations? Why are they useful?
Collective operations involve all processes in a communicator.
They are useful because they typically yield better performance, exploiting optimized algorithms (e.g., tree-based broadcasts) and avoiding explicit pairwise code
When can a process return from a collective operation call? Why?
A process may return as soon as its own participation is complete, i.e., once it is safe for it to read or write its buffers again
Returning does not imply that the other ranks have completed (or even started) the call; only MPI_Barrier guarantees that all ranks have entered it
What is the result of duplicating a communicator? What about splitting a communicator?
The MPI_Comm_dup command creates a new communicator with the same group of processes but a separate communication context
The MPI_Comm_split command partitions the processes of the original communicator into disjoint subcommunicators based on a color value (ranks that supply the same color end up in the same subcommunicator)
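A sketch of splitting MPI_COMM_WORLD into two subcommunicators by even/odd rank (the color choice is only an example):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Ranks with the same color end up in the same subcommunicator;
           the key (here: the world rank) decides their ordering inside it */
        int color = rank % 2;
        MPI_Comm subcomm;
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);

        int subrank, subsize;
        MPI_Comm_rank(subcomm, &subrank);
        MPI_Comm_size(subcomm, &subsize);
        printf("World rank %d -> rank %d of %d in subcommunicator %d\n",
               rank, subrank, subsize, color);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }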
How can we create a barrier with MPI commands? What are the results and goals of its usage?
Using the command MPI_Barrier
It introduces a synchronization point: each process blocks until all ranks in the communicator have called the barrier, guaranteeing that every rank has reached that point (and finished its preceding work) before any of them moves on
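A sketch of a common barrier use: synchronizing all ranks before and after a timed region (the timed work is a placeholder):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        MPI_Barrier(MPI_COMM_WORLD);   /* wait until every rank has arrived */
        double t0 = MPI_Wtime();

        /* ... work to be timed goes here ... */

        MPI_Barrier(MPI_COMM_WORLD);   /* make sure every rank has finished */
        double t1 = MPI_Wtime();

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) printf("Elapsed: %f s\n", t1 - t0);

        MPI_Finalize();
        return 0;
    }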
What are the data transfer patterns in MPI?
They are:
- MPI_Bcast: the root sends the same data to every process
- MPI_Scatter: the root distributes distinct chunks of data to the processes
- MPI_Gather: each rank sends its elements to the root (MPI_Gatherv allows variable receive counts; MPI_Allgather delivers the gathered buffer to all ranks)
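A sketch combining MPI_Scatter and MPI_Gather: the root spreads one integer per rank, each rank modifies its value, and the root collects the results (the values and the +1 update are illustrative):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *data = NULL;
        if (rank == 0) {                       /* root prepares one value per rank */
            data = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++) data[i] = i * 10;
        }

        int mine;
        MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
        mine += 1;                             /* each rank works on its element */
        MPI_Gather(&mine, 1, MPI_INT, data, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++) printf("data[%d] = %d\n", i, data[i]);
            free(data);
        }
        MPI_Finalize();
        return 0;
    }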
How can we use the reduce pattern in MPI? What does it do?
We can use the reduce pattern in MPI by calling MPI_Reduce
This function combines data from all ranks with an associative operation (e.g., MPI_SUM, MPI_MAX) and leaves the result on the root rank
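A sketch of a sum reduction: each rank contributes a local value (here simply its rank number) and rank 0 receives the total:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int contribution = rank;   /* illustrative local value */
        int total = 0;

        /* Combine all contributions with MPI_SUM; the result lands on rank 0 */
        MPI_Reduce(&contribution, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("Sum of ranks: %d\n", total);

        MPI_Finalize();
        return 0;
    }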