Concurrency: Multi-Tasking & Synchronization Flashcards

1
Q

What is a race condition?

A
  • Race Condition (data race)
    ○ Arises from unprotected timing of execution: multiple threads access shared data concurrently, and at least one access is a write.
    ○ The result of such a program is indeterminate: sometimes it produces the correct result, but sometimes the results differ and are likely wrong.
2
Q

What is a critical section?

A

○ Critical Section
A code routine that accesses a shared resource or shared data; if unprotected, it might yield incorrect results and can lead to a race condition.

3
Q

What is mutual exclusion? Why does it avoid race conditions?

A
  • Mutual Exclusion (mutex)
    ○ Atomicity of the critical section is guaranteed under a mutex: only one thread executes it at a time, so no race can occur.
    ○ Lock (synchronization primitive)
    § Acquire:
    □ When a thread needs to use a shared resource (critical section), it must first get control (a lock) to enter the protected code area, ensuring no other thread can use it at the same time. If multiple threads try to get the lock, only one can get it; the rest must wait.
    § Release:
    □ After the thread has finished using the shared resource, it must unlock (release) the lock so that other threads can use the resource.
4
Q

How can we get mutual exclusion?

A

Atomicity of the critical section

5
Q

How is atomicity achieved?

A

Locks, condition variables, channels, etc.

6
Q

What are some of the problems with atomicity/mutual exclusion?

A

Beyond atomicity, threads also need ordering: one thread must wait for another to complete some action before it continues. This interaction arises, for example, when a process performs a disk I/O and is put to sleep; when the I/O completes, the process needs to be roused from its slumber so it can continue.

7
Q

How does the lock mechanism work?

A

We can use a special lock variable to protect data. All threads accessing a critical section share a lock. One thread succeeds in locking and becomes the owner of the lock; other threads that try to lock cannot proceed further until the lock is released by the owner.

8
Q

What are the goals of locks?

A

Fairness: Every thread gets a chance to acquire the lock, preventing indefinite waiting.

Low Overhead: Acquiring, releasing, and waiting for the lock uses minimal resources for efficiency.
9
Q

What are some of the different types of locks?

A

Spin lock: the thread waits in a loop, spinning until it gets access to the lock. Sleep lock: the thread sleeps until the lock is free (sleeping results in more idle time rather than wasted CPU).

10
Q

What is a spin lock? What is a sleep lock?

A

Spin lock: spins in a while loop until the lock is acquired.

Sleep lock: instead of spinning for a lock, a contending thread simply gives up the CPU and checks back later. yield() moves the thread from the running to the ready state.
11
Q

When should we use locks?

A

A lock should be acquired before accessing any variable or data structure that is shared between multiple threads of a process.

12
Q

What is the difference between a fine-grained and a coarse-grained lock?

A

One big lock for all shared data (coarse-grained locking) vs. a separate lock for each piece of shared data (fine-grained locking).

13
Q

What are the positives and negatives of fine-grained locks?

A

Fine-grained locks allow more parallelism, but multiple fine-grained locks may be harder to manage.

14
Q

What are the advantages and disadvantages of coarse-grained locks?

A

Coarse-grained locks are slower (less parallelism) but easier to manage.

15
Q

Why do we want to use sleeping locks?

A

CPU Waste: When threads continuously check for a spinlock’s availability (known as spinning), they use up CPU resources without doing any productive work.
Extended Blocking: The problem worsens if a thread holding a spinlock is blocked for a long time, leading to even more CPU waste by the threads waiting for the spinlock.

16
Q

What are the advantages and disadvantages of spin locks?

A

Spin locks provide mutual exclusion, but they don't provide any fairness guarantees, and in the single-CPU case their performance overheads can be quite painful.

17
Q

How does Compare-and-swap work for locking?

A

Compare-and-swap checks if the lock flag is 0 and, if so, atomically swaps in a 1, thus acquiring the lock. If the flag is already 1, the swap fails and the thread retries.

18
Q

How does Load-Linked and Store-Conditional for locking work?

A

Load-Linked: This works like a regular load instruction. It just takes a value from memory and puts it in a register.
Store-Conditional: This is where it gets special. It will update the memory only if no other updates to the same memory address have happened since the load-linked. If there’s been an update in between, the store-conditional won’t go through.

19
Q

How does Fetch-And-Add for locking work?

A

The fetch-and-add instruction atomically increments a value while returning the old value at a particular address. Each thread fetches-and-adds to grab a "ticket" and waits for its turn. Unlock is accomplished simply by incrementing the turn so that the next waiting thread (if there is one) can enter the critical section.

20
Q

Why are condition variables used?

A

Condition variables are used to make threads wait for certain conditions to become true before they continue execution, ensuring proper sequencing and coordination in concurrent operations.

21
Q

How do condition variables work?

A

Condition variables allow threads to wait in a queue for a certain condition. When the condition is met, another thread signals the condition variable to wake up one or all waiting threads.

22
Q

How do we check a condition using condition variables?

A

In a while loop. We do this to avoid corner cases (such as spurious wakeups) where a thread is woken up even though the condition is not true.

23
Q

What is a semaphore?

A

A semaphore is a variable with an underlying counter. It is a synchronization primitive like a condition variable, can act like a lock, and is shared between threads.

24
Q

How does a semaphore work?

A

A semaphore with a value of 1 acts like a mutex lock. Here’s how it works:

Thread 1 decreases the counter to enter the critical section, locking it.

After finishing its task, Thread 1 increases the counter, unlocking it.
Now, Thread 2 can enter and do the same.
25
Q

Why are semaphores special?

A

Can be used to set order of execution between threads like Condition Variables

26
Q

What are some of the concurrency bugs?

A

We have two kinds: deadlock bugs (threads sharing access to the same resources stop making progress) and non-deadlock bugs.

27
Q

What is a deadlock bug? What is a non-deadlock bug?

A

A deadlock bug occurs when threads cannot execute any further because each is waiting for another.

A non-deadlock bug occurs when threads execute without proper synchronization; atomicity bugs are one example: tasks complete in an overlapping manner, producing incorrect or unexpected results, even though no deadlock occurs.

28
Q

How can we fix a non-deadlock bug?

A

To fix a non-deadlock bug, ensure proper synchronization of threads, use locks or atomic operations to manage shared resources, and correctly sequence thread execution.

29
Q

How can a deadlock occur?

A

Mutual Exclusion: Only one thread can use a resource at a time.
Hold-and-Wait: A thread holds a resource and waits for another.
No Preemption: Resources can’t be taken back forcibly from a thread.
Circular Wait: Threads form a loop, each waiting for a resource held by the next in line.

30
Q

How can we prevent circular wait (deadlock bug)?

A

Acquire locks in a certain fixed order; a total ordering must be followed by every thread.

31
Q

How can we prevent hold-and-wait (deadlock bug)?

A

To avoid deadlocks:

Processes must request all resources at once.
Processes with resources must release them before asking for new ones, then re-request all resources together.

However, this can lower concurrency and performance.

32
Q

How can we generally avoid deadlock bugs and what to do if one occurs?

A

Deadlock avoidance: if the OS knows which process needs which lock, it can schedule the processes so that deadlock does not occur.

Detect and recover: reboot the system or kill the deadlocked process.

33
Q

How do we calculate average response time?

A

Steps: calculate each process's response time, add them all together, and divide by the number of processes.

34
Q

Explain the term temporal locality?

A

Temporal locality: Programs often re-access the same instructions or data shortly after their initial use, like in loops or frequently used data structures. Caching these helps improve performance.

35
Q

Explain the term spatial locality?

A

Spatial locality: Programs typically access data close to recently used data. For example, sequential instructions or nearby data fields are often used together. Caches leverage this by loading blocks of data, not just single locations, for efficiency.

36
Q

Belady’s anomaly occurs when?

A

The number of page faults increases despite adding more page frames. It will ONLY happen with the FIFO page replacement algorithm; it will NOT happen with the LRU, LFU, or Optimal algorithms.

37
Q

What is a Thread Control Block (TCB)?

A

A TCB is a data structure in operating systems that contains thread-specific information like scheduling properties and time slice lengths.

38
Q

How does a thread context switch differ from a process context switch?

A

A thread context switch is more lightweight than a process context switch because the address space remains the same, eliminating the need to switch to a different Page Table.

39
Q

What is the key difference between a Process Control Block (PCB) and a Thread Control Block (TCB)?

A

While a PCB contains overall process information, a TCB contains specific details for each individual thread within a multi-threaded process.

40
Q

Why are thread context switches considered more efficient?

A

Thread context switches are more efficient because they don't require changing the address space, unlike process context switches.

41
Q

What kind of information is stored in a TCB?

A

A TCB stores thread-specific information such as the thread's Program Counter (PC), register values, scheduling properties, and time slice lengths.

42
Q

Does a multi-threaded process have both a PCB and TCBs?

A

Yes, a multi-threaded process has a PCB for overall process information and separate TCBs for each thread's specific details.

43
Q

In POSIX thread APIs, what are the functions and purposes of pthread_create(), pthread_join(), pthread_exit(), and synchronization primitives like the pthread_mutex_*() and pthread_cond_*() families?

A

pthread_create(): Initiates a new thread with designated attributes, beginning its execution at a specified routine. The thread that calls this function becomes the parent of the newly created child thread.
pthread_join(): Waits for a specified thread to complete its execution, effectively ‘joining’ it back to the main flow.
pthread_exit(): Allows a thread to end its own execution. Returning from a thread function implicitly triggers pthread_exit().
Synchronization primitives (pthread_mutex_*(), pthread_cond_*()): Tools in pthreads for managing safe access to shared resources. Mutexes (pthread_mutex_*()) ensure one-thread-at-a-time access to a resource, while condition variables (pthread_cond_*()) facilitate waiting for and signaling specific conditions among threads.