MPI Flashcards

(16 cards)

1
Q

What is the end goal of MPI?

A

Promote process parallelism through explicit communication in distributed memory environments

2
Q

What is the basic definition of shared-memory architecture?

A

All processes/threads on a node share a common address space

3
Q

What is the basic definition of distributed-memory architecture?

A

Each process has its own private memory; processes communicate through message passing

4
Q

What does the mpirun command do?

A

It launches (forks) the required number of processes and executes your command in each of them

It also manages I/O forwarding, environment variables, and inter-process communication setup.
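
For example (illustrative program name), mpirun -np 4 ./my_program starts four copies of my_program as MPI ranks.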

5
Q

How do we initialize MPI? Why is it important?

A

We initialize MPI by calling MPI_Init, or MPI_Init_thread for multithreaded programs.

It is important because this call sets up communication buffers and establishes the MPI environment for all ranks.
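
A minimal C sketch of the initialize/finalize pair (program contents are illustrative; typically launched with something like mpirun -np 4 ./hello):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);   /* set up the MPI environment for all ranks */
    printf("Hello from an MPI process\n");
    MPI_Finalize();           /* every MPI program must finalize before exiting */
    return 0;
}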

6
Q

How do we query the communicator? Why is it important?

A

Using the commands: MPI_Comm_size and MPI_Comm_rank

This way each process can learn how many processes are participating and obtain its unique identifier (rank), which lets you partition and coordinate the work.
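
A short C sketch of the usual query pattern (assumes MPI is already initialized; the slicing comment is only an illustration):

int world_size, my_rank;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);  /* how many ranks are participating */
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);     /* this rank's unique id, 0 <= my_rank < world_size */

/* e.g., each rank takes its own slice of an N-element problem:
   int chunk = N / world_size;
   int start = my_rank * chunk;               */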

7
Q

How do we perform communication (point-to-point and collective)? Why is it important?

A

Using the commands:
- MPI_Send and MPI_Recv for point-to-point blocking communication
- MPI_Isend and MPI_Irecv for point-to-point non-blocking communication
- MPI_Bcast, MPI_Reduce, MPI_Alltoall, MPI_Barrier for collective communication

This matters because message passing is the only way processes with separate address spaces can exchange data and coordinate their work.
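
A C sketch of both styles (tag 0 and the int payload are arbitrary; my_rank is assumed to come from MPI_Comm_rank):

int value = 42;
MPI_Request req;

/* blocking exchange between ranks 0 and 1 */
if (my_rank == 0)
    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
else if (my_rank == 1)
    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* non-blocking variant: post the operation, overlap other work, then wait */
if (my_rank == 0)
    MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
else if (my_rank == 1)
    MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
/* ... computation that overlaps with the transfer ... */
if (my_rank == 0 || my_rank == 1)
    MPI_Wait(&req, MPI_STATUS_IGNORE);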

8
Q

What are the thread support levels?

A

Thread support levels are specified when initializing MPI with threading (MPI_Init_thread). They are:
- MPI_THREAD_SINGLE (single thread, equivalent to MPI_Init)
- MPI_THREAD_FUNNELED (only main thread calls MPI)
- MPI_THREAD_SERIALIZED (multiple threads may call MPI, but not concurrently)
- MPI_THREAD_MULTIPLE (fully concurrent MPI calls allowed)
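
A brief sketch of requesting a level and checking what the library actually grants (C, inside main, assuming the usual includes; error handling simplified):

int provided;
MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
if (provided < MPI_THREAD_MULTIPLE) {
    /* the library granted a weaker level; fall back or abort */
    fprintf(stderr, "requested MPI_THREAD_MULTIPLE, got level %d\n", provided);
}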

9
Q

Why are MPI communicators important?

A

An MPI communicator defines both a set of processes that can exchange messages and a unique communication context that keeps those messages isolated from messages in other communicators.

10
Q

What are the possible problems in blocking operations?

A

Mainly deadlocks and poor communication/computation overlap (the program waits for the call to complete before doing anything else)
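
For illustration, a classic head-to-head exchange that can deadlock, and one standard fix (sendbuf, recvbuf, N and other are assumed to be defined; both ranks run the same code):

/* risky: both ranks send first; if MPI_Send blocks until the matching
   receive is posted, neither rank ever reaches its MPI_Recv            */
MPI_Send(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
MPI_Recv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* safe alternative: let MPI pair the send and receive internally */
MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, other, 0,
             recvbuf, N, MPI_DOUBLE, other, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);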

11
Q

What are collective operations? Why are they useful?

A

Collective operations involve all processes in a communicator.

They are important because they typically yield better performance by exploiting optimized algorithms and avoiding explicit pairwise code
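
As a small illustration (the params array is hypothetical), a single collective replaces an explicit loop of sends from the root:

int params[4];
/* every rank calls this; rank 0's contents end up in all ranks' params */
MPI_Bcast(params, 4, MPI_INT, 0, MPI_COMM_WORLD);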

12
Q

When can a process return from a collective operation call? Why?

A

Not until all ranks in the communicator have invoked the matching operation

Because at that point it is safe for the process to read or write any of the buffers involved

13
Q

What is the result of duplicating a communicator? What about splitting a communicator?

A

The MPI_Comm_dup command creates a new communicator with the same group of processes but a distinct communication context, so its traffic never mixes with the original's

The MPI_Comm_split command partitions the processes of the original communicator into disjoint subcommunicators based on a color value (ranks passing the same color end up together)
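
A C sketch of both calls (my_rank is assumed; the even/odd split is just an example):

MPI_Comm lib_comm, sub_comm;

/* duplicate: same ranks, separate context (e.g., to isolate a library's traffic) */
MPI_Comm_dup(MPI_COMM_WORLD, &lib_comm);

/* split: even ranks form one subcommunicator, odd ranks another */
int color = my_rank % 2;
MPI_Comm_split(MPI_COMM_WORLD, color, my_rank, &sub_comm);

/* ... use the new communicators ... */
MPI_Comm_free(&lib_comm);
MPI_Comm_free(&sub_comm);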

14
Q

How can we create a barrier with MPI commands? What are the results and goals of its usage?

A

Using the command MPI_Barrier

It introduces a synchronization point: each process blocks until all ranks in the communicator have called the barrier, guaranteeing that the work issued before the barrier is logically complete before any rank moves on
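
A typical (illustrative) use is synchronizing all ranks before and after timing a phase:

MPI_Barrier(MPI_COMM_WORLD);          /* no rank proceeds until all have arrived */
double t0 = MPI_Wtime();
/* ... the work being timed ... */
MPI_Barrier(MPI_COMM_WORLD);
double elapsed = MPI_Wtime() - t0;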

15
Q

What are the data transfer patterns in MPI?

A

They are:
- MPI_Bcast: the root sends the same data to every process
- MPI_Scatter: the root distributes distinct chunks of a buffer, one to each process
- MPI_Gather: each rank sends its elements to the root (MPI_Gatherv allows variable receive counts; MPI_Allgather delivers the gathered buffer to all ranks)
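
A C sketch of scatter followed by gather (sizes are illustrative; world_size is assumed to divide the total evenly, and only the root's full array needs valid input data):

double full[1024], chunk[1024];
int per_rank = 1024 / world_size;

/* root hands each rank its own slice, in rank order */
MPI_Scatter(full, per_rank, MPI_DOUBLE, chunk, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

/* ... each rank processes chunk[0 .. per_rank) ... */

/* root collects the processed slices back into full */
MPI_Gather(chunk, per_rank, MPI_DOUBLE, full, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);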

16
Q

How can we use the reduce pattern in MPI? What does it do?

A

We can use the reduce pattern in MPI by calling MPI_Reduce

This function combines data across ranks via an associative operation (such as MPI_SUM or MPI_MAX), delivering the result to the root rank
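
A minimal C sketch, summing one value per rank onto rank 0 (the local value is illustrative):

double local = 1.0;   /* this rank's partial result */
double total = 0.0;
MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
/* after the call, only rank 0's total holds the combined sum */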