Parallel and Distributed Computing Flashcards

1
Q

Distributed Computing System example

A

the CS lab, supercomputers, email, file servers, printer access

2
Q

Parallel Computing system example

A

Stampede here on campus

3
Q

Distributed System

A

A set of physically separate processors connected by one or more communication links

4
Q

Parallel Computing

A

-tightly-coupled systems
-processors share a clock and memory, and run one OS
-frequent communication
-processors connected via a network (typically a bus)
-the network connects the processors to a single shared memory

5
Q

Distributed Computing

A

-loosely-coupled systems
-each processor has its own memory
-each processor runs an independent OS
-communication is very expensive
-a collection of computers (nodes) connected by a network

6
Q

Parallel computing communicates through:

A

-shared memory (typically)
-read and write accesses to shared memory locations

7
Q

Two forms of parallel computing architecture:

A

-SMP (Symmetric Multiprocessor)
-multicore
-can also combine the two (e.g., Stampede)

8
Q

Distributed computing communicates through:

A

message passing

9
Q

The architecture of distributed computing

A

-nodes are connected by a network
-Massively Parallel Machines

10
Q

SMP (Symmetric Multiprocessor)

A

-Multiprocessor: two or more processors share a common RAM
-Symmetric refers to the OS (one OS for all processors, and any processor can run it)

11
Q

Multicore:

A

multiple processors (cores) on the same chip

12
Q

Massively Parallel Machines

A

-nodes are greatly simplified and include only memory, processor(s), and a network card
-augmented with a fast network interface

13
Q

Clusters

A

-networked workstations with a fast network
-built of common off-the-shelf (COTS) parts
-less specialized

14
Q

Very Distributed Computing

A

-grid computing
-cloud computing

15
Q

Parallel programming involves:

A

-decomposing an algorithm into parts
-distributing the parts as tasks that are worked on by multiple processors simultaneously
-coordinating the work and communication of those processors (synchronization)
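
A minimal sketch of these three steps in Go (the array-sum problem, chunk size, and variable names are illustrative assumptions, not from the cards): the work is decomposed into chunks, each chunk is distributed to a goroutine, and a WaitGroup coordinates completion.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	data := make([]int, 1000)
	for i := range data {
		data[i] = i
	}

	const numTasks = 4               // decompose: split the work into parts
	partial := make([]int, numTasks) // one result slot per task, so tasks don't share state
	var wg sync.WaitGroup

	chunk := (len(data) + numTasks - 1) / numTasks
	for t := 0; t < numTasks; t++ {
		wg.Add(1)
		go func(t int) { // distribute: each part becomes a task run concurrently
			defer wg.Done()
			lo, hi := t*chunk, (t+1)*chunk
			if hi > len(data) {
				hi = len(data)
			}
			for _, v := range data[lo:hi] {
				partial[t] += v
			}
		}(t)
	}

	wg.Wait() // coordinate: wait for all tasks to finish (synchronization)

	total := 0
	for _, p := range partial {
		total += p
	}
	fmt.Println("sum =", total) // 499500
}
```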

16
Q

Parallel programming considerations:

A

-type of parallel architecture being used
-type of processor communication used

17
Q

Shared memory Programming Model

A

-interprocess communication is implicit; synchronization is explicit
-assumes processes/threads can read & write a set of shared memory locations
-difficult to provide across machine boundaries
-programs/threads communicate/cooperate via loads/stores to memory locations they share
-communication is therefore at memory access speed (very fast) and is implicit
-cooperating pieces must all execute on the same system (computer)
-OS services and/or libraries are used for creating tasks (processes/threads) and for coordination (semaphores/barriers/locks)
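
A minimal shared-memory sketch in Go (goroutines stand in for the threads described above; the counter and names are illustrative): communication is implicit, through plain reads and writes of a shared variable, while synchronization is explicit, through a lock.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		counter int        // shared memory location: all goroutines load/store it directly
		mu      sync.Mutex // explicit synchronization (the role played by a lock/semaphore)
		wg      sync.WaitGroup
	)

	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock()
				counter++ // implicit communication: an ordinary write to shared memory
				mu.Unlock()
			}
		}()
	}

	wg.Wait()
	fmt.Println(counter) // 4000; without the lock this would be a data race
}
```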

18
Q

message passing Programming Model

A

-interprocess communication is explicit; synchronization is implicit
-extensible to communication in distributed systems
-"shared" data is communicated using send/receive services (across an external network)
-shared data must be formatted into message chunks for distribution
-coordination is also via sending/receiving messages
-program components can be run on the same or different systems, so can use many processes
-standard libraries exist to encapsulate messages
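
A tiny message-passing sketch in Go. Channels stand in here for the send/receive services of a message-passing library such as MPI; the message struct and field names are illustrative assumptions.

```go
package main

import "fmt"

// msg is the chunk that "shared" data gets packed into before it is sent.
type msg struct {
	from    int
	payload int
}

func worker(id int, out chan<- msg) {
	result := id * id                     // data is computed locally...
	out <- msg{from: id, payload: result} // ...then formatted into a message and explicitly sent
}

func main() {
	out := make(chan msg)
	for id := 1; id <= 3; id++ {
		go worker(id, out) // components could equally run on different machines
	}
	for i := 0; i < 3; i++ {
		m := <-out // explicit receive; coordination happens via the messages themselves
		fmt.Printf("got %d from worker %d\n", m.payload, m.from)
	}
}
```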

19
Q

when do send/receive operations terminate?

A

-blocking (synchronous): sender waits until its message is received; receiver waits if no message is available
-non-blocking (asynchronous): send operation "immediately" returns; receive operation returns whether or not a message is available (polling)
-partially blocking/non-blocking: send/receive with a timeout
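
An illustrative sketch of the three termination styles, using a Go channel in place of a message-passing library's send/receive calls (the channel, messages, and timings are assumptions for the example).

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan string, 1)

	// Non-blocking receive (polling): returns immediately whether or not a message is available.
	select {
	case m := <-ch:
		fmt.Println("got:", m)
	default:
		fmt.Println("no message yet")
	}

	go func() {
		time.Sleep(50 * time.Millisecond)
		ch <- "hello" // send; with a buffered channel this does not block
	}()

	// Partially blocking/non-blocking: receive with a timeout.
	select {
	case m := <-ch:
		fmt.Println("got:", m)
	case <-time.After(time.Second):
		fmt.Println("timed out")
	}

	go func() { ch <- "again" }()
	fmt.Println("got:", <-ch) // blocking receive: waits until a message arrives
}
```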

20
Q

Limitations of message passing

A

-easy for the OS, hard for the programmer
-programmer must code the communication
-programmer may have to code format conversions, flow control, error control
-no dynamic resource discovery

21
Q

Event Ordering

A

-coordination of requests (especially in a fair way) requires events (requests) to be ordered
-stand-alone systems: shared clock/memory
-distributed systems: no global clock; each clock runs at a different speed (clock drift)

22
Q

Event ordering for distributed systems

A

-through message passing
-a message must be sent before it can be received
-send/receive events can thus "synchronize" the clocks

23
Q

Happened-Before Relation

A
1. If A and B are events in the same process, and A executed before B, then A->B
2. If A is a message send and B is when that message is received, then A->B
3. If A->B and B->C, then A->C (transitivity)
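
The happened-before relation is what Lamport's logical clocks capture; below is a minimal Go sketch of that standard construction (the clock type and the tick/merge rules are an illustration, not something the cards themselves define).

```go
package main

import "fmt"

// clock is a Lamport logical clock: timestamps increase along every -> chain.
type clock struct{ t int }

func (c *clock) local() int { c.t++; return c.t } // rule 1: successive events in one process

func (c *clock) send() int { return c.local() } // a send is a local event whose timestamp travels with the message

func (c *clock) recv(msgTime int) int { // rule 2: the receive happens after the send
	if msgTime > c.t {
		c.t = msgTime
	}
	return c.local()
}

func main() {
	var p, q clock
	a := p.local()       // A in process P
	b := p.send()        // B = P sends a message (A->B)
	c := q.recv(b)       // C = Q receives it (B->C)
	fmt.Println(a, b, c) // 1 2 3
}
```
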
24
Q

Atomicity

A

either something happens in its entirety or it doesn't; we don't want partial operations to happen

25
Q

The Generals’ Paradox

A

Problem:
-two generals are on two separate mountains
-they can communicate only via messengers, but messengers can get lost or captured by the enemy
-the goal is to coordinate their attack

26
Q

Distributed Consensus with Link Failures

A

Argument (by contradiction) that no protocol solves it:
-assume some exchange of messages solves the generals' problem
-take the last message m_n; since m_n might be lost but the algorithm still succeeds, m_n must not be necessary
-repeat the argument until no messages are exchanged
-exchanging no messages can't be a solution, so the assumption that we have an algorithm to solve the problem must be wrong

27
Q

Distributed Transactions

A

-two machines agree to do something, or not do it, atomically
-the two-phase commit protocol allows coordination under reasonable operating conditions
-uses a log on each machine to track whether the commit happened

28
Q

Two-phase Commit Protocol: phase 1

A

Phase 1: coordinator requests the transaction
-coordinator sends a request to all participants for that transaction
-on receiving the request, participants perform these actions: execute the transaction; write VOTE_COMMIT or VOTE_ABORT to their local logs; send VOTE_COMMIT or VOTE_ABORT to the coordinator

29
Q

Two-Phase Commit Protocol: Phase 2

A

Phase 2: coordinator commits or aborts the transaction
-coordinator decides:
-Case 1: coordinator receives VOTE_ABORT from any participant, or times out -> logs and sends GLOBAL_ABORT
-Case 2: coordinator receives VOTE_COMMIT from all participants -> logs and sends GLOBAL_COMMIT
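
A compact sketch of the coordinator's decision in Go (channels stand in for the network, the constants mirror the cards' VOTE_*/GLOBAL_* names, and logging is reduced to the returned string; participant-side logging and recovery are omitted).

```go
package main

import (
	"fmt"
	"time"
)

type vote string

const (
	voteCommit vote = "VOTE_COMMIT"
	voteAbort  vote = "VOTE_ABORT"
)

// coordinator collects one vote per participant and decides the global outcome.
func coordinator(votes <-chan vote, n int, timeout time.Duration) string {
	for i := 0; i < n; i++ {
		select {
		case v := <-votes:
			if v == voteAbort { // case 1: any abort vote
				return "GLOBAL_ABORT"
			}
		case <-time.After(timeout): // case 1: a participant timed out
			return "GLOBAL_ABORT"
		}
	}
	return "GLOBAL_COMMIT" // case 2: VOTE_COMMIT from all participants
}

func main() {
	votes := make(chan vote, 3)
	// Phase 1 (simulated): each participant executes the transaction,
	// logs its vote, and sends it to the coordinator.
	for i := 0; i < 3; i++ {
		votes <- voteCommit
	}
	// Phase 2: coordinator decides and would broadcast the decision.
	fmt.Println(coordinator(votes, 3, time.Second)) // GLOBAL_COMMIT
}
```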

30
Q

Bully Algorithm

A

“• Assumptions:-Processes are numbered (otherwise impossible).-Using process numbers does not cause unfairness.•Algorithm concept:-If leader is not heard from in a while, assume s/he crashed.-Leader will be remaining process with highest rank.-Processes who think they are leader-worthy will broadcast that information.-During this ““election campaign”” processes who are near the top see if the process trying to grab power crashes (as evidenced by lack of message in timeout interval).-At end of time interval, if alpha-process has not heard from rivals, assumes s/he has won.-If former alpha-process arises from dead, s/he bullies their way to the top. (Invariant: highest # process rules)”