Threads, Synchronizations, Locks Flashcards

1
Q

Mutual exclusion

A

When one thread is accessing a shared resource, no other thread may access it at the same time. It is also important that any thread trying to access the shared resource eventually gets access.
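A minimal Python sketch of mutual exclusion using `threading.Lock` (the names `counter` and `worker` are just illustrative): only one thread at a time may run the locked block, so no increments are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with lock:          # only one thread at a time may enter
            counter += 1    # the read-modify-write is now safe

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 40000: the lock enforces mutual exclusion
```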

2
Q

Synchronization

A

Mechanisms that coordinate concurrent threads so that operations on shared state do not violate each other's dependencies. Examples include locks, semaphores, condition variables, and many more!

3
Q

Critical sections

A

Parts of the program that access shared resources. It is important that no more than one thread executes a critical section at a time.

4
Q

race condition

A

When the correctness of a program depends on the order in which threads execute a piece of code, and an unlucky interleaving leads to wrong program behaviour.
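The classic lost-update race can be shown deterministically by playing out one unlucky schedule by hand: `shared += 1` is really three steps (load, add, store), and if two threads interleave them badly, one update vanishes. This sketch simulates the bad schedule explicitly rather than relying on real thread timing.

```python
# Simulate two threads interleaving the three steps of "shared += 1"
# (load, add, store) in an unlucky order.
shared = 0

t1_local = shared      # thread 1 loads 0
t2_local = shared      # thread 2 loads 0 before thread 1 stores
t1_local += 1
t2_local += 1
shared = t1_local      # thread 1 stores 1
shared = t2_local      # thread 2 stores 1, overwriting thread 1's update

print(shared)  # 1, not 2: one increment was lost
```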

5
Q

Progress

A

A requirement on solutions to the critical section problem: a thread executing outside its critical section must not block other threads from entering theirs.

6
Q

Semaphores

A

Uses wait and signal operations to manage which threads can access a shared resource. A thread either waits for an available slot or, when it is done, signals that a slot is free.

Semaphores can be binary (0/1) or counting (up to n slots).
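A short sketch with Python's `threading.Semaphore` (the bookkeeping names `active` and `max_active` are illustrative): a counting semaphore initialised to 2 never lets more than two threads inside at once.

```python
import threading

sem = threading.Semaphore(2)       # counting semaphore: 2 slots
active = 0
max_active = 0
state_lock = threading.Lock()      # protects the two counters above

def worker():
    global active, max_active
    with sem:                      # wait() on entry, signal() on exit
        with state_lock:
            active += 1
            max_active = max(max_active, active)
        with state_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_active)  # never exceeds 2
```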

7
Q

Thread_join

A

The calling thread blocks at the join call until the joined thread has finished. Joining every worker thread in turn therefore makes the main thread wait until all of them have completed before continuing.
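A small Python example of joining (the `worker` function and `results` list are illustrative): after every `join()` returns, all workers are guaranteed to have finished, so their results are complete.

```python
import threading
import time

results = []

def worker(i):
    time.sleep(0.01)           # pretend to do some work
    results.append(i * i)      # list.append is thread-safe in CPython

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # block until this thread has finished

# After every join has returned, all workers are guaranteed done.
print(sorted(results))  # [0, 1, 4, 9]
```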

8
Q

Condition variables

A

Similar to a binary semaphore, but instead of 0 and 1 representing whether a slot is occupied, the value represents a boolean condition: whether some event has happened yet. Threads wait on the condition and are signalled when it becomes true.
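A sketch with `threading.Condition` (the `ready` flag and `received` list are illustrative): the consumer waits for the boolean condition to become true, and the producer signals it. Note the `while` loop around `wait()`, which re-checks the condition after every wakeup.

```python
import threading

cond = threading.Condition()
ready = False          # the boolean condition the waiter cares about
received = []

def consumer():
    with cond:
        while not ready:       # re-check: wakeups can be spurious
            cond.wait()
        received.append("event happened")

def producer():
    global ready
    with cond:
        ready = True
        cond.notify()          # wake one waiting thread

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()

print(received)  # ['event happened']
```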

9
Q

Spinlocks

A

If the lock is already held, the thread waits in a loop, repeatedly checking whether the lock is free, until it acquires it (busy-waiting).
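A Python sketch of the busy-wait loop (the `SpinLock` class is illustrative): `Lock.acquire(blocking=False)` plays the role of an atomic "try to grab it" step, and the thread keeps retrying until it succeeds. Real spinlocks spin on a hardware atomic instruction; here `time.sleep(0)` just yields the interpreter while spinning.

```python
import threading
import time

class SpinLock:
    """Busy-waiting lock sketch: keep retrying until the lock is free."""
    def __init__(self):
        self._lock = threading.Lock()   # models the lock word in memory

    def acquire(self):
        # non-blocking acquire = one atomic "try to grab it" attempt
        while not self._lock.acquire(blocking=False):
            time.sleep(0)               # yield while spinning

    def release(self):
        self._lock.release()

spin = SpinLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        spin.acquire()
        counter += 1
        spin.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000
```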

10
Q

Queuing locks

A

A synchronisation method that is a more advanced version of the spin lock. It uses a linked list and serves threads in FIFO order (as ticket locks do). Each thread spins on its own node in the list, so waiting threads spin on separate memory locations. Once a thread frees the lock, the next node in the queue stops spinning.

The lock points to the node that currently holds it.
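A simplified FIFO lock in Python (the `QueueLock` class is illustrative, not a faithful queuing-lock implementation): each waiter gets its own private `Event` object standing in for its "spin location", and a release hands the lock to exactly the next thread in line. Here the waiter blocks on the Event rather than spinning, but the per-thread wait location and FIFO hand-off are the same idea.

```python
import threading
from collections import deque

class QueueLock:
    """FIFO lock sketch: each waiter waits on its own Event, so a
    release wakes exactly the next thread in line."""
    def __init__(self):
        self._guard = threading.Lock()   # protects the queue itself
        self._held = False
        self._waiters = deque()

    def acquire(self):
        with self._guard:
            if not self._held:
                self._held = True
                return
            me = threading.Event()       # my private wait location
            self._waiters.append(me)
        me.wait()                        # woken only when it is my turn

    def release(self):
        with self._guard:
            if self._waiters:
                # hand the lock directly to the next waiter in FIFO order
                self._waiters.popleft().set()
            else:
                self._held = False

qlock = QueueLock()
counter = 0

def worker():
    global counter
    for _ in range(500):
        qlock.acquire()
        counter += 1
        qlock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 2000
```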

11
Q

Ticket locks

A

A synchronisation method that works exactly like a ticketing counter at a clinic: every thread is served in the order it requested the resource. Each arriving thread atomically takes the next ticket number. When the current holder finishes, the 'Now Serving' variable is incremented atomically; it holds the ticket number of the thread that will be served next, which then leaves the wait queue.
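A ticket lock sketch in Python (the `TicketLock` class is illustrative). Python has no atomic fetch-and-increment instruction, so a tiny guard lock stands in for it; each thread then spins until `now_serving` reaches its ticket.

```python
import threading
import time

class TicketLock:
    """Ticket lock sketch. A small guard lock simulates the atomic
    fetch-and-increment the hardware would provide."""
    def __init__(self):
        self._guard = threading.Lock()
        self.next_ticket = 0
        self.now_serving = 0

    def acquire(self):
        with self._guard:               # atomic fetch-and-increment
            my_ticket = self.next_ticket
            self.next_ticket += 1
        while self.now_serving != my_ticket:
            time.sleep(0)               # spin until our number comes up

    def release(self):
        self.now_serving += 1           # serve the next ticket holder

tlock = TicketLock()
counter = 0

def worker():
    global counter
    for _ in range(200):
        tlock.acquire()
        counter += 1
        tlock.release()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 600
```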

12
Q

MCS locks

A

Similar to a queuing lock, but with a tail pointer pointing to the last node in the list. Every new thread is appended at the tail using CAS (compare-and-swap): the new node is swapped in as the tail, then the previous last node is linked to the new node.

When a thread releases the lock, it passes it on via its node's next field, and its own node drops out of the list.

The lock is free when the tail pointer points to NULL.
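The steps above can be sketched in Python (the class and method names are illustrative; the guard lock inside `MCSLock` only simulates the atomic swap and CAS that real MCS locks get from hardware). Each thread enqueues its own node at the tail, spins only on that node's `locked` flag, and the releaser hands the lock to its `next` node.

```python
import threading
import time

class Node:
    def __init__(self):
        self.locked = False
        self.next = None

class MCSLock:
    """MCS lock sketch with simulated atomic tail operations."""
    def __init__(self):
        self.tail = None
        self._guard = threading.Lock()   # simulates atomic swap / CAS

    def _swap_tail(self, node):
        with self._guard:                # atomic swap of the tail pointer
            prev, self.tail = self.tail, node
            return prev

    def _cas_tail(self, expected, new):
        with self._guard:                # atomic compare-and-swap
            if self.tail is expected:
                self.tail = new
                return True
            return False

    def acquire(self, node):
        node.next = None
        prev = self._swap_tail(node)     # join the back of the queue
        if prev is not None:
            node.locked = True           # mark busy BEFORE linking in
            prev.next = node             # link behind the previous waiter
            while node.locked:           # spin on our own node only
                time.sleep(0)

    def release(self, node):
        if node.next is None:
            if self._cas_tail(node, None):
                return                   # no waiters: lock is now free
            while node.next is None:     # a waiter is mid-enqueue
                time.sleep(0)
        node.next.locked = False         # hand the lock to the next node

mcs = MCSLock()
counter = 0

def worker():
    global counter
    for _ in range(100):
        node = Node()                    # each acquire uses its own node
        mcs.acquire(node)
        counter += 1
        mcs.release(node)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 300
```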

13
Q

Atomic instructions

A

A hardware feature in which an instruction executes completely or not at all; no other thread can observe it half-done.

14
Q

CAS

A

Compare-and-swap. Atomically compares a memory location against an expected old value; if they match, the location is swapped to the new value, otherwise nothing happens.
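A Python model of CAS and the retry loop built on it (the `AtomicInt` class is illustrative; Python exposes no real CAS, so a lock simulates the atomicity the hardware would provide). The increment keeps retrying until no other thread changed the value between the load and the swap.

```python
import threading

class AtomicInt:
    """Models a memory word with a CAS instruction, simulated by a lock."""
    def __init__(self, value=0):
        self._value = value
        self._guard = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._value == expected:
                self._value = new
                return True     # swap happened
            return False        # someone changed the value first

counter = AtomicInt(0)

def increment():
    # lock-free style: retry until no other thread interfered
    while True:
        old = counter.load()
        if counter.compare_and_swap(old, old + 1):
            return

def worker():
    for _ in range(1000):
        increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.load())  # 4000
```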

15
Q

TAS

A

Test-and-set. Atomically sets a flag and returns its old value. When the returned old value is false, the flag was free and the thread has acquired it, so it can leave the spin loop.
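A TAS-based spinlock sketch in Python (the `Flag` class is illustrative; the guard lock simulates the one-step atomicity of the hardware instruction). Spinning continues while test-and-set returns true; a false return means the flag was free and this thread now holds it.

```python
import threading
import time

class Flag:
    """Models a word with a test-and-set instruction, simulated by a lock."""
    def __init__(self):
        self._set = False
        self._guard = threading.Lock()

    def test_and_set(self):
        with self._guard:
            old = self._set     # remember the old value...
            self._set = True    # ...and set the flag, in one atomic step
            return old

    def clear(self):
        self._set = False

flag = Flag()
counter = 0

def worker():
    global counter
    for _ in range(500):
        while flag.test_and_set():   # old value True -> keep spinning
            time.sleep(0)
        counter += 1                 # old value False -> lock acquired
        flag.clear()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1500
```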

16
Q

How are queuing locks better than regular spin locks?

A

They reduce cache coherence traffic: each thread spins on its own memory location, so a release invalidates only one waiter's cache line instead of making every waiting thread re-fetch a shared lock variable.

17
Q

Cache coherence

A

When caches across processors must be kept consistent so that all of them read and write the same data. When one processor writes to its cache, the copies of that data in other processors' caches are invalidated, triggering a cache miss when they are next accessed.