Threads Flashcards

1
Q

What is a thread/process represented by? And what is the structure of this representation?

A

A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically part of the operating system.

It is represented by a special data structure called the thread control block (TCB), which contains the relevant information about the thread (a minimal struct sketch follows the list):
- Thread characteristics: thread ID, program name
- State information: instruction counter, stack pointer, register contents
- Management data: priority, rights, statistics
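
As a reading aid only, here is a minimal sketch of how such a TCB might be declared in C; the field names are illustrative, not taken from the course material:

// Hypothetical TCB layout; field names are illustrative only.
typedef struct tcb {
    int            id;          // thread characteristics: thread ID
    char           name[32];    //   program name
    void          *ni;          // state information: continuation address ("next instruction")
    void          *sp;          //   stack pointer
    unsigned long  regs[16];    //   saved register contents
    int            priority;    // management data: priority, rights, statistics, ...
    int            state;       //   e.g. ready / running / waiting
    struct tcb    *next;        // link used to chain TCBs into lists
} tcb_t;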

2
Q

What are the data structures used to store and manage the TCBs?

A

Single scalar variables, an array of static length, a linked list of variable length, a tree, an inverted table, etc.

3
Q

How to efficiently manage multiple TCBs?

A

Form subsets of TCBs with regard to important attribute values (e.g., group threads that are in the same state together).

4
Q

What is the difference between static and dynamic OSs handling threads?

A

In static OSs, all threads are known in advance and statically defined. The TCBs are declared as program variables for a specific application and are all generated once by a configuration program.

In dynamic OSs, threads are created and deleted at run time by kernel operations: create_thread(id, initial values), which creates a TCB and initializes the thread, and delete_thread(id, final values), which returns the final values and deletes the TCB (a sketch follows).
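
A rough sketch of these two kernel operations in C, assuming a heap-allocated TCB (a real kernel would manage its own TCB table; the tcb_t layout here is only illustrative):

#include <stdlib.h>

typedef struct tcb { int id; int state; int priority; /* ... */ } tcb_t;

// create_thread: create a TCB and initialize the thread from the given values.
tcb_t *create_thread(int id, const tcb_t *initial_values) {
    tcb_t *tcb = malloc(sizeof *tcb);
    if (tcb == NULL) return NULL;
    *tcb = *initial_values;    // take over the initial values
    tcb->id = id;
    return tcb;
}

// delete_thread: return the final values and delete the TCB.
void delete_thread(tcb_t *tcb, tcb_t *final_values) {
    if (final_values != NULL)
        *final_values = *tcb;  // hand the final values back to the caller
    free(tcb);
}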

5
Q

What is a thread’s address space?

A

The logical address space of a thread is the space of its valid addresses, i.e., the addresses it can access. Relative addressing and address translation (using the MMU) make it possible to have an arbitrary number of logical address spaces that are mapped onto the physical address space, which also provides mutual protection between address spaces.

6
Q

What is a Unix process?

A

It is an address space that contains at least one thread. So we use the term thread directly.

7
Q

What is thread switching?

A

It means that the processor stops executing the current thread (its instruction sequence) and continues with the execution of another thread.

8
Q

What is switching by jumping?

A

Here a jump instruction is inserted directly into the thread, which jumps into another thread. This is, however, very inflexible and applicable only in very special cases (see the sketch below).
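
A contrived C sketch of the idea: labels stand in for two instruction sequences and a goto is the inserted jump. Note that no context is saved, which is exactly why this only works in special cases.

#include <stdio.h>

int main(void) {
    puts("thread A: part 1");
    goto thread_b;            // jump instruction inserted into thread A
resume_a:
    puts("thread A: part 2");
    return 0;
thread_b:
    puts("thread B runs");
    goto resume_a;            // jump inserted into thread B, back into A
}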

9
Q

Why is thread switching costly?

A

Because the continuation address of the interrupted thread must be memorized, the next thread must be selected, and the processor must hold on to essential parts of the interrupted thread's description that must not get lost (during a register reload).

10
Q

Memorizing continuation address

A

Store the address of the next instruction to be executed in the interrupted thread in a dedicated variable of its TCB, called ni (next instruction).

11
Q

What are the criteria for selecting the next thread?

A

Number of threads, order of arrival, priority of each thread (urgency).

12
Q

Describe simple next thread selection with priority.

A

Threads are ordered with regard to their execution order or priority. Newly arriving threads are inserted into the priority sequence according to the chosen order. The threads can be organized in a two-dimensional priority queue, where threads of equal priority form groups (see the sketch below).
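
One common realization of such a two-dimensional priority queue is an array of FIFO lists, one list per priority level. A minimal C sketch (names are illustrative; here lower numbers mean higher priority):

#include <stddef.h>

#define NUM_PRIORITIES 8

typedef struct tcb { int id; int priority; struct tcb *next; } tcb_t;

// One FIFO list per priority level: the second dimension holds the
// group of threads with equal priority, in order of arrival.
static tcb_t *head[NUM_PRIORITIES], *tail[NUM_PRIORITIES];

// Insert a newly arriving thread behind all threads of equal priority.
void enqueue(tcb_t *t) {
    int p = t->priority;
    t->next = NULL;
    if (tail[p]) tail[p]->next = t; else head[p] = t;
    tail[p] = t;
}

// Select the next thread: first thread of the highest non-empty priority level.
tcb_t *select_next(void) {
    for (int p = 0; p < NUM_PRIORITIES; p++) {
        if (head[p]) {
            tcb_t *t = head[p];
            head[p] = t->next;
            if (head[p] == NULL) tail[p] = NULL;
            return t;
        }
    }
    return NULL;   // no thread is ready
}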

13
Q

What is a processor register in relation to threads?

A

Threads use arithmetic registers of the processor to store intermediate results.

14
Q

What happens to the register when switching thread by jump?

A

Its contents will be lost (overwritten), so switching by jump can only be used when the register contents are no longer needed and the new thread does not expect valid register contents.

15
Q

What is a thread context?

A

It is the complete thread-specific information that is stored in the processor registers (contents of arithmetic registers, index registers, processor state, contents of address registers, segment tables, access control information, etc.).

16
Q

What happens to the thread context when switching a thread?

A

It must be saved as part of the switching and restored when the thread is resumed. Constant data that is available in the TCB does not need to be saved.

17
Q

What is thread switching by saving the thread context referred to as? What are its characteristics, and how can it be made efficient?

A

It is referred to as a context switch, and it is the most time-consuming part of the thread switch. To speed it up, the processor can provide several sets of registers, so that a thread switch does not require storing the thread context back to memory, or provide special instructions that allow storing the complete thread context in one instruction.

18
Q

Describe the thread switching sequence?

A

SWITCH -> save context of t_run -> select t_next -> save next instruction of t_run in t_run.ni -> jump to t_next.ni -> t_run = t_next -> load context of t_run

19
Q

What is thread control in the context of the “Procedure” switch?

A

It means that each thread gets and gives up control within the switch procedure code at exactly the same point.

20
Q

Describe the implementation of the “Procedure” switch and highlight which thread has control.

A

CT: current thread, NT: next thread

procedure switch(NT: thread)

    control: CT
        save context of CT
        CT.sp = SP; SP = NT.sp    (change stack pointer: from here on NT's stack is used)

    control: NT
        load context of NT
        return to NT (next instruction of NT)
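
The stack-pointer swap itself cannot be expressed in portable C. Purely as an illustration of the same idea (give up control inside switch, regain it at the same point when switched back to), here is a sketch using the glibc ucontext API:

#include <ucontext.h>

typedef struct { ucontext_t ctx; /* rest of the TCB */ } thread_t;

// "Procedure" switch: the current thread gives up control here and, when it
// is switched back to later, resumes right after this call.
void switch_to(thread_t *current, thread_t *next) {
    // save CT's context, load NT's context
    // (roughly: save context, swap stack pointer, load context)
    swapcontext(&current->ctx, &next->ctx);
}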

21
Q

What is automatic switching? And what does it require?

A

Realistically, it is neither possible nor reasonable to explicitly insert switching points into the threads. Therefore, automatic thread switching is more desirable. To that end, we require a clock (timer): a hardware device that allows specifying a deadline (timer set) and raises an interrupt on timeout. This allows programs to remain unchanged, as the thread switch is triggered from outside the program and can happen at an arbitrary point in time (a user-space sketch of the pattern follows).
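
Inside a kernel, the timer is a hardware device; as a rough user-space analogy only, the same pattern (set a deadline, get called back on timeout while the program itself stays unchanged) can be sketched with POSIX setitimer and SIGALRM:

#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

// "Interrupt handler": in a kernel, this is where the thread switch is triggered.
static void on_timeout(int sig) {
    (void)sig;
    write(STDOUT_FILENO, "time slice over -> switch\n", 26);
}

int main(void) {
    struct itimerval tv = {
        .it_value    = { .tv_sec = 0, .tv_usec = 10000 },  // "timer set": first deadline in 10 ms
        .it_interval = { .tv_sec = 0, .tv_usec = 10000 },  // re-arm every 10 ms (time slice)
    };
    signal(SIGALRM, on_timeout);        // register the timeout "interrupt"
    setitimer(ITIMER_REAL, &tv, NULL);  // start the timer
    for (;;)                            // the running program itself stays unchanged
        ;
}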

22
Q

What is conditioned switching?

A

A situation may arise where continuation of the thread's processing is not possible (e.g., the thread is waiting for input data); instead of wasting time, the processor can switch to another thread. This is called conditioned switching.

23
Q

Describe thread switching due to end of time slice where there is no detailed clock interrupt handling and no other thread switching events.

A

03_threads slides: 53, 60, 66, 75

24
Q

What is a kernel stack?

A

The kernel stack is part of the kernel space. Hence, it is not directly accessible from a user process. Whenever a user process uses a syscall, the CPU mode switches to kernel mode. During the syscall, the kernel stack of the running process is used.

25
Q

Why do we need switching prevention? Describe a problem that may arise due to interleaved execution.

A

Automatic switching may occur at exactly the moment when a voluntary or other switch is taking place, which can lead to unwanted behavior and errors. Therefore, during switching, we have to make sure that no additional switching is triggered. The errors may occur because kernel OPs often work on the same data structures, and when a kernel OP is interrupted by another kernel OP, this may lead to errors.

Interleaved execution problem: diagram at 03_threads slide 78

26
Q

What is a solution to avoid problems that arise due to interleaved execution? What is this called?

A

Define sections of the kernel as critical sections. A critical section is safe because interleaving is excluded: kernel operations cannot be executed in an interleaved fashion but run to completion. Therefore, when a thread is within a critical section, no other thread is allowed to enter a conflicting critical section. All possible places of conflict in OS kernels must be identified and protected as critical sections. This is called mutual exclusion. Because the kernel has a large number of critical sections, the whole kernel can be put under mutual exclusion.

27
Q

How to realize kernel exclusion? Describe solutions in each of the four cases?

A

Case 1: Single-processor system without interrupts -> there is no reason to leave a kernel OP before completion -> no measure required.

Case 2: Single-processor system with interrupts -> a critical kernel OP is bracketed by a disable-interrupt and an enable-interrupt instruction.

Case 3: Multiprocessor system without interrupts -> introduce a kernel lock based on an atomic test-and-set instruction. When a thread enters its critical section, it sets the kernel lock; other threads that attempt to enter a critical section repeatedly check the lock value (busy waiting) and do not enter until the lock is reset by the thread leaving its critical section. Kernel OPs are usually short, so the time wasted waiting is tolerated (a lock sketch follows the four cases).

Case 4: Multiprocessor system with interrupts -> both techniques from Case 2 and Case 3 are combined, but interrupts must be disabled first and only then is the kernel lock acquired. Other solutions are discussed in 03_threads slides 84 and 85.
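
A sketch of the Case 3 kernel lock in C using the C11 atomic test-and-set (atomic_flag); the interrupt enable/disable routines needed for Case 4 are hardware specific, so they appear only as hypothetical placeholders:

#include <stdatomic.h>

static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

void disable_interrupts(void);   // hypothetical placeholder, hardware specific
void enable_interrupts(void);    // hypothetical placeholder, hardware specific

// Case 3: enter a critical section by atomically testing and setting the lock;
// busy-wait (spin) while another processor holds it.
void kernel_enter(void) {
    while (atomic_flag_test_and_set(&kernel_lock))
        ;   // busy waiting: kernel OPs are short, so the wasted time is tolerated
}

// Leaving the critical section resets the lock so waiting processors may enter.
void kernel_leave(void) {
    atomic_flag_clear(&kernel_lock);
}

// Case 4: first disable interrupts, then acquire the kernel lock.
void kernel_enter_mp_irq(void) {
    disable_interrupts();
    kernel_enter();
}

void kernel_leave_mp_irq(void) {
    kernel_leave();
    enable_interrupts();
}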

28
Q

Why do we need thread states?

A

When switching from one thread to another, we do not know whether the thread we are switching to is itself blocked (e.g., waiting for an I/O operation); we would have to keep switching until a resumable thread is found, which is a waste of time. Therefore we group threads according to their states.

29
Q

What are the thread states?

A
  • State running: threads that are currently being executed on the processor.
  • State ready: threads that are ready to be executed but have to wait for the processor to become free.
  • State waiting: threads that are blocked because they wait for some external event.

We can define the states as attributes in the TCB of each thread or as membership in a list/set of threads (a small sketch of both variants follows).
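
A small C sketch of both variants (the names are illustrative): the state as a TCB attribute and, alternatively, as membership in a per-state list:

typedef enum { RUNNING, READY, WAITING } thread_state_t;

typedef struct tcb {
    int             id;
    thread_state_t  state;   // variant 1: state stored as an attribute in the TCB
    struct tcb     *next;
} tcb_t;

// Variant 2: state expressed as membership in a list/set of threads.
static tcb_t *running_thread;  // the thread currently executing on the processor
static tcb_t *ready_list;      // all threads in state ready
static tcb_t *waiting_list;    // all threads in state waiting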

30
Q

What are the kernel OPs for thread state change operations?

A
  • Relinquish: voluntary switch to another thread; the current thread changes its state from running to ready.
  • Assign: take the next thread from the ready set and resume its execution on the processor; ready -> running.
  • Block: state change from running to waiting (blocked); this happens because the current thread must not resume on the processor until a certain condition is met.
  • Deblock: the event a blocked/waiting thread was waiting for occurs; its state changes from waiting to ready. (All four operations are sketched below.)
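
A sketch of the four operations as C functions moving TCBs between per-state lists (simple singly linked lists; the helpers are included only so the sketch is self-contained):

typedef enum { RUNNING, READY, WAITING } thread_state_t;
typedef struct tcb { int id; thread_state_t state; struct tcb *next; } tcb_t;

static tcb_t *running_thread, *ready_list, *waiting_list;

// Minimal list helpers for the sketch (a real kernel would use FIFO/priority queues).
static void push(tcb_t **list, tcb_t *t) { t->next = *list; *list = t; }
static tcb_t *pop(tcb_t **list) { tcb_t *t = *list; if (t) *list = t->next; return t; }
static void remove_from(tcb_t **list, tcb_t *t) {
    while (*list && *list != t) list = &(*list)->next;
    if (*list) *list = t->next;
}

// relinquish: the running thread voluntarily gives up the processor (running -> ready).
void relinquish(void) { running_thread->state = READY; push(&ready_list, running_thread); }

// assign: take the next thread from the ready set onto the processor (ready -> running).
void assign(void) {
    running_thread = pop(&ready_list);
    if (running_thread) running_thread->state = RUNNING;
}

// block: the running thread must wait for a condition (running -> waiting).
void block(void) { running_thread->state = WAITING; push(&waiting_list, running_thread); }

// deblock: the awaited event has occurred for thread t (waiting -> ready).
void deblock(tcb_t *t) { remove_from(&waiting_list, t); t->state = READY; push(&ready_list, t); }
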
31
Q

How to handle thread states in a dynamic setting where the set of threads is variable?

A

We distinguish between active and inactive threads: when a thread is created, it is resting (inactive) until it gets activated and becomes ready. Ready threads can also be deactivated, and they must become inactive before they can be deleted.

32
Q

Describe preemption in the context of thread switching. What are the 4 cases where a thread gives up execution?

A

Not all threads are of equal importance/urgency. When a thread with a higher priority than the running thread enters the ready queue, the running thread is preempted by the higher-priority thread. This is the fourth case in which a thread gives up execution; the other three are when a running thread relinquishes voluntarily, is forced to give up execution by a clock interrupt, or is blocked on some condition it is waiting for.

33
Q

How to check preemption?

A

Preemption requires that no ready thread has a higher priority than a running one. A more urgent thread can only enter the ready queue through the operations relinquish, deblock, and activate, so after these operations we check whether a newly ready thread has a higher priority than a running thread; if that is the case, we execute a thread switch to the more urgent thread (see the sketch below).
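
A sketch of such a preemption check in C, to be called right after relinquish, deblock, or activate have put a thread into the ready queue; the helper names and the "lower number = more urgent" convention are assumptions:

typedef struct tcb { int priority; /* ... */ } tcb_t;

extern tcb_t *running_thread;           // the currently running thread
tcb_t *highest_priority_ready(void);    // assumed: peek at the most urgent ready thread
void   thread_switch(tcb_t *next);      // assumed: the thread switch described earlier

// If a thread in the ready queue is now more urgent than the running one,
// preempt the running thread and switch to the more urgent thread.
void check_preemption(void) {
    tcb_t *candidate = highest_priority_ready();
    if (candidate && candidate->priority < running_thread->priority)  // lower = more urgent
        thread_switch(candidate);
}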

34
Q

What is an idle thread, why do we need it, what are its properties, and how can we represent it?

A

Idle thread: a thread that performs no useful work.

Why do we need it? If all threads are blocked, the processor has nothing to do, so the idle thread becomes running.

Its properties: it must not stop, it has the lowest priority, and it must be preemptable at any time.

Representation: endless loop (wastes energy) or dynamic stop (a special instruction that performs no memory access but can react to external signals, allowing parts of the processor to be disabled and energy to be saved); see the sketch below.
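
The two representations could look roughly as follows in C; the hlt instruction is x86-specific and privileged, shown only to illustrate the idea of a dynamic stop:

// Variant 1: endless loop -- the processor stays busy and wastes energy.
void idle_loop(void) {
    for (;;)
        ;   // does nothing; lowest priority, preemptable at any time
}

// Variant 2: dynamic stop -- halt until the next external signal (interrupt),
// letting the processor disable parts of itself and save energy (x86 only).
void idle_halt(void) {
    for (;;)
        __asm__ volatile ("hlt");
}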