Chapter 3 - MPI Flashcards

1
Q

What are the two types of systems in the world of MIMD?

A

Distributed memory: The memory associated with a core is only accessible to that core.

Shared memory: Collection of cores connected to a global shared memory.

2
Q

What is message passing?

A

A way for processes on distributed-memory systems to communicate.

One process running on one core communicates through a send() call; another process on another core calls receive() to get the message.

3
Q

What is MPI?

A

Message Passing Interface

Library implementation of message passing communication

4
Q

What are collective communications?

A

Functions that allow for communication between multiple (more than 2) processes.

5
Q

What is a rank?

A

A non-negative identifier of a process running in MPI.

n processes -> ranks 0, 1, …, n-1

6
Q

What library must be included to use MPI?

A

#include <mpi.h>

7
Q

How is MPI initialized?

A

MPI_Init(int* argc_p, char*** argv_p);

argc_p and argv_p: pointers to arguments to main, argc and argv

If the program does not use them, NULL can be passed for both.

In main() using arguments:
MPI_Init(&argc, &argv);
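
A minimal sketch of how MPI_Init fits into a complete program (the other calls here are covered by the following cards):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);                     /* start up MPI */

    int comm_size, my_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);  /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);    /* this process's rank */

    printf("Hello from rank %d of %d\n", my_rank, comm_size);

    MPI_Finalize();                             /* free MPI resources */
    return 0;
}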

8
Q

How do you get the communicator size in MPI?

A

MPI_Comm_size(MPI_Comm comm_name, int* size)

9
Q

What is the name of the global communicator?

A

MPI_COMM_WORLD

Set up by MPI_Init();

10
Q

How do you get a process’s rank within a communicator?

A

MPI_Comm_rank(MPI_Comm comm_name, int* rank)

11
Q

How is an MPI program finalized?

A

MPI_Finalize(void);

Any resources allocated by MPI are freed.

12
Q

What does

mpiexec -n <n> ./program

do when running an MPI program?

A

mpiexec tells the system to launch <n> instances of the program.

13
Q

What is an MPI communicator?

A

A collection of processes that can send messages to each other.

14
Q

How can MPI produce SPMD programs?

A

Processes branch out doing different tasks based on their ranks.

if-else

e.g. rank 0 can print, rank 1 send, rank 2 receive (see the sketch below)
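
A minimal sketch of rank-based branching, assuming the program is run with at least 3 processes:

/* assumes MPI is initialized and at least 3 ranks exist */
int my_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

if (my_rank == 0) {
    printf("I am rank 0, I do the printing\n");
} else if (my_rank == 1) {
    int value = 42;
    MPI_Send(&value, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);   /* send to rank 2 */
} else if (my_rank == 2) {
    int value;
    MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}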

15
Q

What is the syntax of MPI_Send()

A

MPI_Send(
void* buffer,
int message_size,
MPI_Datatype type,
int dest,
int tag,
MPI_Comm communicator
)

buffer: holds the content of the message to be sent

size: Number of elements to send from the buffer

type: MPI_CHAR, MPI_DOUBLE, etc.

dest: Destination rank, who is receiving the message

tag: non-negative int, can be used to distinguish messages that are otherwise identical

16
Q

What is the syntax of MPI_Recv

A

int MPI_Recv(
void* msg_buf,
int size,
MPI_Datatype type,
int source,
int tag,
MPI_Comm communicator,
MPI_Status* status
)

msg_buf: Buffer to receive message in

size: Maximum number of elements the receive buffer can hold

type: Types of elements in message

source: Source rank, rank that sent message

tag: Tag should match the tag from the send

communicator: Must match the communicator at the send

status: When the status is not needed, MPI_STATUS_IGNORE is passed
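
A minimal send/receive sketch, assuming the usual headers (stdio.h, string.h), that my_rank has already been obtained, and at least 2 processes:

char msg[100];
if (my_rank == 0) {
    sprintf(msg, "Greetings from rank 0");
    MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
} else if (my_rank == 1) {
    MPI_Recv(msg, 100, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Rank 1 received: %s\n", msg);
}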

17
Q

What conditions must be met for an MPI message to be successfully sent by process a and received by process b?

A

dest = b
src = a
comm_a = comm_b
tag_a = tag_b

The buffer, size, and type arguments (buffer_a/size_a/type_a and buffer_b/size_b/type_b) must be compatible.

Most of the time, if type_a=type_b and size_b >= size_a, the message will be successfully received

18
Q

What is a wildcard argument in MPI communication

A

If a receiver will be receiving multiple messages from multiple source ranks, and it does not know the order in which they will arrive, it can loop over its MPI_Recv() calls and pass the wildcard argument MPI_ANY_SOURCE, allowing the ranks to send in any order.

Similarly, if a process will receive multiple messages from another process, but with different tags, it can do the same but with the wildcard argument MPI_ANY_TAG

Only receivers can use wildcard arguments

There is no communicator wildcard argument.

19
Q

How is MPI_Status used in MPI_Recv?

A

A struct with at least the three members:
MPI_SOURCE
MPI_TAG
MPI_ERROR

Before the recv call, declare a status object and pass its address:
MPI_Status status;
MPI_Recv(…, &status);

These members are useful if a process used wildcards and later needs to figure out the source or tag of a message it received; the attributes can then be examined.

20
Q

What is MPI_Get_count() used for?

A

Used to figure out how many elements of the provided type were received in the message

MPI_Get_count(
MPI_Status* status, (in)
MPI_Datatype type, (in)
int* count (out)
)

status: Status struct passed to recv()
type: Type passed in recv
count: Number of elements received in the message
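
A sketch combining wildcards, MPI_Status and MPI_Get_count (assumes MPI is initialized and some rank sends ints to this one):

int buf[100];
MPI_Status status;

MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);

int count;
MPI_Get_count(&status, MPI_INT, &count);   /* how many ints actually arrived */
printf("Got %d ints from rank %d with tag %d\n",
       count, status.MPI_SOURCE, status.MPI_TAG);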

21
Q

What happens if MPI buffers an MPI_Send message?

A

MPI puts the complete message into its internal storage.

The MPI_Send() call will return.

The message might not be in transmission yet, but as it is now stored internally, we can use the send_buffer for other purposes if we want to.

22
Q

What happens if MPI blocks the MPI_Send message?

A

The process will wait until it can begin transmitting the message.
The MPI_Send() call might not return immediately.

23
Q

When will MPI_Send block?

A

It depends on the MPI implementation.

But many implementations have a "cutoff" message size. If the message is within this size, it will be buffered; if it exceeds the cutoff, MPI_Send will block.

24
Q

Does MPI_Recv block?

A

Yes, unlike MPI_Send, when MPI_Recv returns we know the message has been fully received.

25
Q

What does it mean that MPI messages are non-overtaking?

A

If a process a sends 2 messages to process b, the first message must be available to b before the second one is.

Messages from different processes have no such ordering guarantee, regardless of which was sent first.

26
Q

What is a pitfall with MPI_Recv / MPI_Send in the context of blocking?

A

If an MPI_Recv does not have a matching MPI_Send it will block forever and the program will hang.

The same can happen for a blocking send if it has no matching receive.

If an MPI_Send is buffered and there is no matching receive, the message will be lost.

27
Q

What is non-determinism in parallel programs?

A

When the output of a program varies depending on the order in which processes do their computations.

28
Q

How can MPI programs implement I/O to avoid non-determinism?

A

Make processes branch on process rank.
E.g. rank 0 can read input and send it to the remaining ranks.

All ranks can send their output to rank 0 who can print it in rank order.
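
A sketch of this pattern, assuming my_rank and comm_size have already been obtained and the usual headers are included:

char line[100];
sprintf(line, "Output from rank %d", my_rank);

if (my_rank != 0) {
    MPI_Send(line, strlen(line) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
} else {
    printf("%s\n", line);                       /* rank 0's own line first */
    for (int src = 1; src < comm_size; src++) {
        MPI_Recv(line, 100, MPI_CHAR, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("%s\n", line);                   /* then ranks 1, 2, … in order */
    }
}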

29
Q

In MPI, what are collective communications?

A

Communication functions that include all processes in a communicator.

30
Q

What is point-to-point communication?

A

One sender and one receiver

(MPI_Send / MPI_Recv)

31
Q

What is MPI_Reduce?

A

A collective communication function.
A generalized function that performs an operation on data held by all processes in a communicator.

Syntax:

MPI_Reduce(
void* input_data_buf,
void* output_data_buf,
int count,
MPI_Datatype type,
MPI_Op operator,
int dest_process,
MPI_Comm comm
)

input_data_buf: local data for the process, this is used in the operation

output_data_buf: buffer to hold the output computation done by the operator

count: Number of elements to do operation on. This allows for e.g. operations on arrays

type

operator: Specifies what operation is to be done on the data

dest_process: rank that receives the computed output
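
A minimal sketch of a global sum with MPI_Reduce (local_val is a made-up local contribution; my_rank assumed known):

double local_val = my_rank * 1.0;   /* each rank's local contribution */
double global_sum = 0.0;

MPI_Reduce(&local_val, &global_sum, 1, MPI_DOUBLE,
           MPI_SUM, 0, MPI_COMM_WORLD);

if (my_rank == 0)
    printf("Global sum = %f\n", global_sum);   /* only rank 0 has the result */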

32
Q

What are some operators available for MPI_Reduce?

A

MPI_SUM: optimized global sum of the local values

MPI_MAX: finds the largest of the local values

33
Q

What is important to remember when using collective communication

A

All processes in the communicator must call the collective (e.g. MPI_Reduce).

Arguments passed to a collective must be compatible (e.g. the dest rank).

The output buffer is only used by the dest rank. The other ranks still need to pass the output argument, but it can be NULL for them.

Where point-to-point communication matches on tags and communicators, collectives match on the communicator and the order in which they are called.

It is illegal to use the same buffer for both input and output.

34
Q

What does it mean to alias arguments?

A

Two arguments are aliased if they refer to the same block of memory.

This is illegal in MPI if one of them is an output or input/output argument.

35
Q

What does MPI_Allreduce do?

A

Optimized collective function that stores the output of reduce in all processes

MPI_Allreduce(
void* input_data_buf,
void* output_data_buf,
int count,
MPI_Datatype type,
MPI_Op operator,
MPI_Comm comm
)

Identical argument list to reduce() without dest rank

36
Q

What is MPI_Bcast (broadcast)?

A

Collective function that allows a process to send a message to all other processes in a communicator

MPI_Bcast(
void* data_buf,
int count,
MPI_Datatype type,
int source_process,
MPI_Comm comm
)

source_process: The process with rank source_process sends the content of its data_buf.

data_buf: Buffer to send from or, for processes that aren't the source, to receive the data in. Acts as both input and output.
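
A minimal sketch, assuming rank 0 reads an int from stdin and my_rank is known:

int n = 0;
if (my_rank == 0)
    scanf("%d", &n);          /* only rank 0 reads input */

MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
/* after the call, every rank's n holds the value read by rank 0 */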

37
Q

What does MPI_Scatter do?

A

If a communicator is operating on a large vector, and each process only works on certain parts of it, it is wasteful for one process to communicate the whole vector to all processes: every process would have to allocate memory for the entire vector even though it only computes on a part of it.

MPI_Scatter reads in the complete data from one rank and sends only the needed components to the rest of the processes.

MPI_Scatter(
void* send_buf,
int send_count,
MPI_Datatype sent_type,
void* recv_buf,
int recv_count,
MPI_Datatype recv_type,
int src_process,
MPI_Comm comm
)

The function divides the data referenced in send_buf into comm_size pieces. The first piece goes to rank 0, then rank 1, and so on.

send_count needs to be the amount of data going to each process, so not the complete amount of data in send_buf.

Note that the total amount of data must be divisible by the number of ranks in the communicator. A sketch follows below.
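
A sketch of scattering a vector, assuming n, comm_size and my_rank are known, n is divisible by comm_size, and stdlib.h is included (MPI_Gather mirrors this call in the opposite direction):

int local_n = n / comm_size;
double *local_x = malloc(local_n * sizeof(double));
double *x = NULL;

if (my_rank == 0)
    x = malloc(n * sizeof(double));   /* only the source needs the full vector */
/* … rank 0 fills x … */

MPI_Scatter(x, local_n, MPI_DOUBLE,        /* send_buf, count per rank, type  */
            local_x, local_n, MPI_DOUBLE,  /* each rank's receive buffer      */
            0, MPI_COMM_WORLD);            /* source rank, communicator       */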

38
Q

What does MPI_Gather do?

A

Function to collect all data-components from all processors into one process to get the complete data, e.g. the complete vector.

MPI_Gather(
void* send_buf,
int send_count,
MPI_Datatype sent_type,
void* recv_buf,
int recv_count,
MPI_Datatype recv_type,
int dest_process,
MPI_Comm comm
)

Same argument list as scatter, but with a destination rank that receives all the data.

The send_buf of rank 0 is stored in the first block of recv_buf, the send_buf of rank 1 in the second block, and so on.

39
Q

What is MPI_Allgather?

A

MPI_Allgather(
void* send_buf,
int send_count,
MPI_Datatype sent_type,
void* recv_buf,
int recv_count,
MPI_Datatype recv_type,
MPI_Comm comm
)

The function concatenates each process's send_buf and stores the result in every process's recv_buf.

40
Q

What are derived datatypes in MPI?

A

Used to represent any collection of data items in memory.

Stores the type of the items, and their relative locations in memory.

Derived datatypes consist of a sequence of basic MPI_Datatypes and a displacement for each type (from the beginning of the type).

41
Q

What is the syntax of MPI_Type_create_struct?

A

MPI_Type_create_struct(
int count,
int array_of_blocklengths[],
MPI_Aint array_of_displacements[],
MPI_Datatype array_of_types[],
MPI_Datatype *new_type
)

count: Number of elements (blocks) in the type

blocklengths: allows for the possibility that individual elements are arrays; if one element is an array of 5 items, its blocklength is 5

displacements: each element's displacement from the start of the message

42
Q

What does the function MPI_Get_address do?

A

MPI_Get_address(
void* location,
MPI_Aint* address
)

Returns the address of the memory pointed to by location.

MPI_Aint is used because this is the datatype that is big enough to store an address.

43
Q

How can we get displacements of datatype elements using MPI_Get_address?

A

int a, b, c;

MPI_Aint addr_a, addr_b, addr_c;

MPI_Get_address(&a, &addr_a);
MPI_Get_address(&b, &addr_b);
MPI_Get_address(&c, &addr_c);

array_of_displacements[0] = 0;

array_of_displacements[1] = addr_b - addr_a;

array_of_displacements[2] = addr_c - addr_a;

44
Q

How are datatypes created in MPI?

A

MPI_Datatype new_type;

MPI_Type_create_struct(
3,
block_lengths,
displacements,
types,
&new_type
)

Then the type must be committed:

MPI_Type_commit(&new_type)

Finished using the type:

MPI_Type_free(&new_type)
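
A fuller sketch combining the previous cards: build a struct type for three scalars a, b, c (made-up variables) and broadcast them in one message:

int a;  double b;  char c;

int          block_lengths[3] = {1, 1, 1};
MPI_Datatype types[3]         = {MPI_INT, MPI_DOUBLE, MPI_CHAR};
MPI_Aint     displacements[3];

MPI_Aint addr_a, addr_b, addr_c;
MPI_Get_address(&a, &addr_a);
MPI_Get_address(&b, &addr_b);
MPI_Get_address(&c, &addr_c);
displacements[0] = 0;
displacements[1] = addr_b - addr_a;
displacements[2] = addr_c - addr_a;

MPI_Datatype new_type;
MPI_Type_create_struct(3, block_lengths, displacements, types, &new_type);
MPI_Type_commit(&new_type);

/* &a is the start of the type, so one element of new_type covers a, b and c */
MPI_Bcast(&a, 1, new_type, 0, MPI_COMM_WORLD);

MPI_Type_free(&new_type);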

45
Q

What does MPI_Type_commit do?

A

Commits an MPI datatype that was created using MPI_Type_create_struct

MPI_Type_commit(
MPI_Datatype* new_type
)

46
Q

What does MPI_Type_free do?

A

When we are finished using an MPI type that we have created, we can free the storage it uses with:

MPI_Type_free(
MPI_Datatype *used_custom_type
)

47
Q

What does MPI_Barrier do?

A

A collective communication function that is used to synchronize processes.

No process will return from calling it until every process in the communicator has started calling it.

Does not guarantee communication has finished, messages can still be in transit.

Written data that is waiting in a buffer will not be flushed by the barrier; it stays where it is.

Pending requests will not be completed by a barrier; you must wait on them explicitly to finalize them.

MPI_Barrier(MPI_Comm comm)

48
Q

What is speedup?

A

Ratio of serial runtime to parallel runtime:

S = T_serial / T_parallel

where T_parallel is measured with p processes.

49
Q

What is linear speedup?

A

When a parallel program running p processes runs p times faster than the serial program.

50
Q

What is efficiency?

A

Speedup per process

S = T_serial / T_parallel

E = S / p = T_serial / (p * T_parallel)

51
Q

How does linear speedup correspond to parallel efficiency?

A

Linear speedup (S = p) gives an efficiency of E = p/p = 1

52
Q

What is MPI_PROC_NULL

A

An MPI constant used in point-to-point communication as the src/dest rank. When the constant is used, no communication takes place.

53
Q

What is an unsafe program?

A

A program that relies on MPI buffering to avoid deadlock when sends and receives are waiting for each other.

Unsafe programs may hang, crash or deadlock

54
Q

What is MPI_Ssend?

A

A synchronous MPI_Send call that is guaranteed to block until the matching receive starts

Same arguments as MPI_Send

55
Q

How can you check if a program is safe or unsafe?

A

Replace the MPI_Send calls with MPI_Ssend and see if the program hangs. If it does not, the program is safe.

56
Q

What can cause a deadlock in MPI programs, and how can this be resolved?

A

Processes first sending a message and then waiting to receive. This can cause them to wait for each other in a circle.

A way to solve this is to vary the order in which ranks send and receive.
If half of the ranks send first and then receive, and the other half receive first and then send, there will be no deadlock (see the sketch below).
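
A sketch of this parity trick, assuming each rank exchanges n doubles with a made-up partner rank:

if (my_rank % 2 == 0) {
    /* even ranks: send first, then receive */
    MPI_Send(send_buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
    MPI_Recv(recv_buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
} else {
    /* odd ranks: receive first, then send */
    MPI_Recv(recv_buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(send_buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
}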

57
Q

What is MPI_Sendrecv

A

MPI's function for performing a send and a receive safely in one call.

MPI_Sendrecv(
void* send_buf,
int send_size,
MPI_Datatype send_type,
int dest,
int send_tag,
void* recv_buf,
int recv_size,
MPI_Datatype recv_type,
int src,
int recv_tag,
MPI_Comm comm,
MPI_Status* status
)

Guarantees no deadlock
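
A sketch of a deadlock-free ring shift with MPI_Sendrecv (send_buf, recv_buf, n are made-up names; my_rank and comm_size assumed known):

int next = (my_rank + 1) % comm_size;
int prev = (my_rank - 1 + comm_size) % comm_size;

MPI_Sendrecv(send_buf, n, MPI_DOUBLE, next, 0,   /* outgoing message */
             recv_buf, n, MPI_DOUBLE, prev, 0,   /* incoming message */
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);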

58
Q

What are local and global variables in MPI programs?

A

Local: Values specific to a process

Global: Values available to all processes

59
Q

What is parallel overhead and what causes it in MPI programs?

A

Overhead due to additional work that is not done in serial programs.

In MPI, this would be the work done in communicating between processes

60
Q

When is a parallel program scalable?

A

If you can increase the problem size n so that efficiency doesn't decrease as p is increased.

61
Q

In Flynn's taxonomy, what type of programs are MPI programs?

A

SPMD

Can branch by ranks to do different things

62
Q

What are the 4 types of communication modes?

A

Standard: default for MPI_Send

Synchronous: the send blocks until reception has started / is acknowledged

Buffered: explicitly manage the memory that's used for the send

Ready: assume that the receiver has already posted the receive when the send() call is made

63
Q

What category of parallel program is MPI?

A

SPMD

P copies of the same program can do different things because of their identity number

64
Q

What is a cartesian communicator?

A

Each rank has a set of coordinates.

Grid structure with neighbours in each direction (up/down/left/right), possibly in 3D

65
Q

What is non-blocking sending and receive?

A

Send() and recv() call immediately returns with a request, so execution can continue.

To make sure the communication completed successfully, you must issue a wait-for-completion call on the request

66
Q

When an MPI program is launched with multiple processes, what is every process allocated?

A

A full memory space

Stack, heap, data (includes rank), text

67
Q

How are MPI programs run?

A

mpirun -np 4 ./my_program

68
Q

What is a pre-requisite when using collective operations?

A

All ranks in the communicator MUST participate in the collective operation

69
Q

What is a memory fence?

A

An operation that forces all committed work to be completed before continuing

70
Q

What does MPI_Alltoall do?

A

Total exchange - from everyone to everyone

71
Q

Can collective functionality be implemented using point-to-point communication?

A

Yes, all collective functions can be implemented using normal send/recv

72
Q

When is internal buffering not faster (when buffering sends)

A

When the message exceeds a size where copying it into the internal buffer takes longer than starting to send it right away.

Above this message size, MPI_Send will switch to blocking mode.

73
Q

What does MPI_Ssend() do?

A

Synchronous mode of send

Does not return until receiver starts receiving

Synchronizes progress between communicating processes

74
Q

What is MPI_Bsend?

A

Buffered Send mode

Lets you allocate the buffer yourself, so that you can make it large and contiguous in memory.

Useful when you're sending a lot of tiny messages at a time, which usually causes many tiny buffer allocations and deallocations; these take time and fragment heap memory.

Buffer must be registered before the send call

MPI_Buffer_attach(buffer, buffer_size)

MPI_Buffer_detach(&buffer, &buffer_size)

75
Q

What is MPI_Rsend?

A

Has the liberty to bypass the protocols that establish whether the recipient is ready.

Can be used when the programmer is 100% sure that the receiver has already made the receive call

76
Q

What is MPI_Isend?

A

MPI_Isend(
void* buffer,
int count,
MPI_Datatype type,
int dest,
int tag,
MPI_Comm communicator,
MPI_Request *request
)

Returns immediately. The message transfer is put in the background and carried out later at MPI's own convenience.

Program can do something else in the meantime

MPI_Wait(MPI_Request *req, MPI_Status *stat)

is called when you need to make sure the transfer was successful.

MPI_Waitall(n_reqs, array_of_reqs, MPI_STATUSES_IGNORE)

Can be used if multiple messages were sent and you want to wait for all of them at the same time.
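
A sketch of overlapping communication and computation (partner, n, recv_buf are made-up names):

MPI_Request req;
MPI_Irecv(recv_buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &req);

/* … do useful computation that does not touch recv_buf … */

MPI_Wait(&req, MPI_STATUS_IGNORE);   /* recv_buf is only safe to read after this */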

77
Q

Why is MPI_Isend useful for performance?

A

You can overlap computation and communication.

Communication is expensive, but this allows you to do useful work in the meantime.

78
Q

What are the modes of non-blocking send?

A

MPI_Isend
MPI_Issend
MPI_Ibsend
MPI_Irsend
MPI_Irecv

79
Q

What is persistent communication, and how can it be implemented?

A

If the same communication pattern is going to be used over and over, MPI can prepare it in advance and you can activate it later.

As with Isend, the calls return a request that is completed later.

int MPI_Send_init(<usual send arguments>, MPI_Request *req)

int MPI_Recv_init(<usual receive arguments>, MPI_Request *req)

Triggered with:
MPI_Start(MPI_Request *req)
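
A sketch of a persistent send reused across iterations (partner, n, send_buf, n_iters are made-up names):

MPI_Request req;
MPI_Send_init(send_buf, n, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &req);

for (int iter = 0; iter < n_iters; iter++) {
    /* … fill send_buf … */
    MPI_Start(&req);                    /* activate the prepared send */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete it before reusing send_buf */
}

MPI_Request_free(&req);                 /* release the persistent request */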

80
Q

What does double MPI_Wtime(void) do?

A

Returns a number of seconds, represented as a double-precision floating-point value.

81
Q

How can MPI_Wtime() be used to measure execution time?

A

MPI_Barrier(MPI_COMM_WORLD);
double start = MPI_Wtime();
/* … work … */
double end = MPI_Wtime();

Elapsed time = end - start (on this rank)

82
Q

What is bandwidth?

A

Bytes / second

83
Q

What is inverse bandwidth?

A

seconds / byte

How much transfer time is added for sending additional bytes

84
Q

What are vector types?

A

Types with regular layout

Vector types consist of:
- count
- a block length
- a common stride between the blocks

Stride: distance between the starts of consecutive blocks

85
Q

How can vector types be created?

A

MPI_Type_vector(n_elements, blocklength, stride, type, &new_type)
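
A sketch: a vector type describing one column of an n x n row-major matrix A (A, n, dest are assumed to exist):

MPI_Datatype col_type;
MPI_Type_vector(n, 1, n, MPI_DOUBLE, &col_type);  /* n blocks of 1 element, stride n */
MPI_Type_commit(&col_type);

/* send column 2 of A (declared as double A[n][n]) as one element of col_type */
MPI_Send(&A[0][2], 1, col_type, dest, 0, MPI_COMM_WORLD);

MPI_Type_free(&col_type);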

86
Q

How can types of internal regions of arrays be constructed?

A

MPI_Type_create_subarray(
int ndims,
const int array_of_sizes[],
const int array_of_subsizes[],
const int array_of_starts[],
int order,
MPI_Datatype old,
MPI_Datatype *new
)

ndims: number of dimensions in the array

array_of_sizes: how big the entire array is

array_of_subsizes: how big our slice of the array is

array_of_starts: where the origin of the slice is

order: MPI_ORDER_C or MPI_ORDER_FORTRAN

87
Q

Example: create a 4x4 subarray from the middle of a 6x6 array

A

int sizes[2]    = {6, 6};   /* full array size     */
int subsizes[2] = {4, 4};   /* size of the slice   */
int starts[2]   = {1, 1};   /* origin of the slice */
MPI_Datatype sub_type;

MPI_Type_create_subarray(2, sizes, subsizes, starts,
                         MPI_ORDER_C,
                         MPI_DOUBLE,   /* element type of the array (assumed) */
                         &sub_type);

88
Q

How can you create a type from a contiguous part of memory?

A

MPI_Type_contiguous(
count, oldtype, newtype
)

89
Q

What does MPI_Type_indexed do?

A

Like MPI_Type_create_struct, except that all blocks have the same type

90
Q

What is MPI_Group?

A

An arbitrary set of ranks

91
Q

How can we create a group from all ranks in a communicator?

A

MPI_Comm_group(MPI_Comm comm, MPI_Group *group)

92
Q

What does MPI_Group_incl do?

A

Create a subgroup from a group.
Include n_members of the ranks in the rank_list

MPI_Group_incl(
MPI_Group old,
int n_members,
const int rank_list[],
MPI_Group *new_group
)

93
Q

What does MPI_Group_excl do?

A

MPI_Group_excl(
MPI_Group old,
int n_to_remove,
const int rank_list[],
MPI_Group *new_group
)

removes ranks from a group

94
Q

What set operations can be done on groups?

A

MPI_Group_union
MPI_Group_intersection

95
Q

Why are groups useful?

A

They can be made into communicators

MPI_Comm_create(
MPI_Comm old_comm,
MPI_Group g,
MPI_Comm *new_comm
)

When a rank within the group calls this, the communicator handle is returned

A rank outside the group gets returned MPI_COMM_NULL
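
A sketch that builds a communicator containing only the even ranks of MPI_COMM_WORLD (comm_size assumed to be known):

MPI_Group world_group, even_group;
MPI_Comm  even_comm;

MPI_Comm_group(MPI_COMM_WORLD, &world_group);

int n_even = (comm_size + 1) / 2;
int even_ranks[n_even];
for (int i = 0; i < n_even; i++)
    even_ranks[i] = 2 * i;               /* ranks 0, 2, 4, … */

MPI_Group_incl(world_group, n_even, even_ranks, &even_group);
MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

/* odd ranks get even_comm == MPI_COMM_NULL and must not use it */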

96
Q

What does MPI_Graph_create do?

A

Create a graph communicator out of another comm

MPI_Graph_create(
old_comm,
int n_node,
int indexes[],
int edges[],
int reorder,
&new_comm
)

reorder: whether MPI may assign new ranks in the new comm

indexes: cumulative neighbour counts; entry i marks where rank i's neighbour list ends in the edges array

edges: the neighbour lists of all ranks, concatenated in rank order

97
Q

How are cartesian communicators created?

A

MPI_Cart_create(
old_comm,
n_dims,
dims[], (size of each dimension)
period[], (wrap the edges?)
reorder, (may MPI assign new ranks in the comm?)
&new_comm
)

98
Q

How is the dim-array for cartesian communicators created?

A

In cartesian comms we want the dimensions to be as close to a square (2D), a cube (3D), and so on, as possible.

MPI_Dims_create(
n_nodes, (rank count)
n_dims,
int dims[] (result array)
)

99
Q

How does a rank find its position within a cartesian grid?

A

MPI_Cart_coords(
cart_comm,
rank, (current rank)
n_dims, (number of dimensions in the comm)
coords[] (result array)
)

100
Q

How are coords structured within cartesian grid?

A

{y, x}

y: elements starting at top, and going down

x: starting left, going right

101
Q

How does a rank in a cartesian grid find its neighbours?

A

MPI_Cart_shift(
comm,
dir, (axis to shift along)
displacement, (how far to shift)
*rank_src,
*rank_dest
)

rank_src: the rank that ends up in my place when everything is shifted (my neighbour in the negative direction)

rank_dest: the rank whose place I end up in (my neighbour in the positive direction)
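
A sketch tying the cartesian cards together: build a 2D periodic grid and find this rank's neighbours along dimension 0 (comm_size assumed to be known):

int dims[2]    = {0, 0};              /* let MPI choose a near-square shape */
int periods[2] = {1, 1};              /* wrap around at the edges           */
MPI_Comm cart_comm;

MPI_Dims_create(comm_size, 2, dims);
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart_comm);

int coords[2];
int cart_rank;
MPI_Comm_rank(cart_comm, &cart_rank);
MPI_Cart_coords(cart_comm, cart_rank, 2, coords);

int up, down;
MPI_Cart_shift(cart_comm, 0, 1, &up, &down);   /* shift by 1 along dimension 0 */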

102
Q

What is MPI_PROC_NULL?

A

On non-periodic comms, if a rank's neighbour falls off the grid, it is returned as MPI_PROC_NULL.

When this is passed to a communication call where a rank is expected, the matching operation is simply not carried out.

103
Q

How does MPI_IO work?

A

All ranks can open file at the same time

Each rank sets a view of the file; this is the window in which it can write

All ranks can write within their own views at the same time

104
Q

How are files opened and closed using MPI_IO?

A

MPI_File_open(
comm,
*filename, (string)
int access_mode, (MPI flags)
MPI_Info info, (can be MPI_INFO_NULL)
MPI_File *fh (the open file handle)
)

MPI_File_close(MPI_File *fh)

105
Q

What are the MPI_IO access flags

A

MPI_MODE_CREATE: create if not exist

MPI_MODE_WRONLY
MPI_MODE_RDONLY
MPI_MODE_RDWR
MPI_MODE_APPEND: signals that data will be added at the end

106
Q

What does MPI_File_write_at do?

A

Allows you to specify the position of each data chunk to write.

MPI_File_write_at(
MPI_File fh,
MPI_Offset offset, (where in the file to write, different on each rank)
*buf, (data to write)
count,
type,
*status
)

107
Q

What does MPI_File_set_view do?

A

Restricts the area a rank will read/write to the shape described by an MPI_Datatype.

MPI_File_set_view(
MPI_File fh,
MPI_Offset displacement,
MPI_Datatype etype, (type to read/write)
MPI_Datatype file_layout, (what region of the file to access)
*representation, (e.g. "native")
MPI_Info info
)
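
A sketch of the whole MPI-IO pattern: each rank writes its own block of local_n doubles at a rank-based offset (file name and variable names are made up):

MPI_File fh;
MPI_File_open(MPI_COMM_WORLD, "out.dat",
              MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

/* each rank writes local_n doubles at an offset determined by its rank */
MPI_Offset offset = (MPI_Offset)my_rank * local_n * sizeof(double);
MPI_File_write_at(fh, offset, local_data, local_n, MPI_DOUBLE,
                  MPI_STATUS_IGNORE);

MPI_File_close(&fh);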