Task parallelism and manager-worker parallelism Flashcards
(9 cards)
How is manager-worker parallelism implemented?
Additional MPI features are used: wildcards, and groups and communicators.
What is an MPI Wildcard?
A wildcard (e.g. MPI_ANY_SOURCE or MPI_ANY_TAG) allows a message to be received from an unspecified source or with an unspecified tag. It is used so the manager can receive a message from any worker, whichever finishes first.
How are results from workers collected?
Rather than using MPI_COMM_WORLD, a new communicator is defined that includes only the worker processes. This communicator can then be used in MPI function calls, e.g. collective operations among the workers.
How do processes in manager-worker parallelism interact?
One MPI process is a “manager” and hands out work to worker processes. All other processes are workers which request work from the manager process. Workers request more work when they have finished their current task.
Why does manager-worker scale better than domain decomposition?
Manager-worker can load-balance at run time: workers that finish early simply request more work, so no process sits idle waiting for slower ones.
What are the three OpenMP work sharing constructs?
- parallel for
- sections
- single
When is the section construct used?
When there are two or more sections of code that can be executed concurrently.
When is the single construct used?
Usually, work that is not parallelised will be repeated by all threads. Single ensures that only one thread executes a block of code inside a parallel region.
How do threads interact when running OpenMP tasks?
One thread generates tasks, and the other threads execute them. After the generating thread has finished generating tasks, it may also execute some of them itself.