Definitions Flashcards

1
Q

What is stored in the Process Control Block?

A

Process scheduling state, process structuring information, privileges, process state, PID, program counter, CPU registers, and page table information.

2
Q

What is the PCB's role?

A

Used and modified by OS utilities (scheduling, memory, I/O resource access). The set of PCBs defines the current state of the operating system.

3
Q

What is the process control block?

A

A data structure in the operating system kernel containing the information needed to manage the scheduling of a particular process.

4
Q

Where is the PCB stored?

A

In some OSes it is stored at the beginning of the process's kernel stack (within the OS kernel).

5
Q

What is a context switch?

A

The process of storing the state of a process so that it can be restored and execution resumed from the same point later. It is essential to virtualizing the CPU.

6
Q

What information is stored and retrieved when performing a context switch between threads?

A

The thread's registers and program counter are saved and restored; no TLB flush is necessary, since the threads share the same address space.

7
Q

What happens during a context switch?

A
The process image (the registers of the process, PC) is stored in the PCB, and a handle to the PCB is added to a queue of processes that are ready to run.
The PCB of the next process scheduled to run is then restored; the PC from that PCB is loaded and execution continues.

8
Q

What makes a context switch costly?

A

TLB flushes.

9
Q

Thread control blocks

A

The per-thread analogue of the PCB; when switching between threads there is no need to switch page tables (same address space).

10
Q

Thread-local storage

A

The stack of the relevant thread; each thread gets its own stack for its locally allocated variables.

11
Q

Parallelism

A

Doing work on multiple CPUs using threading, e.g., one thread per CPU.

12
Q

Race condition

A

Arises when multiple threads of execution enter a critical section at roughly the same time; both attempt to update the shared data structure, and the result is nondeterministic.

13
Q

Critical section

A

Code that can result in a race condition: a piece of code that accesses a shared resource, usually a variable or data structure.

14
Q

Mutual exclusion

A

Guarantees that only one thread at a time executes within the critical section.
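
A minimal pthread sketch (my illustration, not from the cards) of mutual exclusion: the lock guarantees that only one thread at a time increments the shared counter. The names `counter` and `worker` are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

static int counter = 0;                         // shared data
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);              // enter critical section
        counter++;                              // only one thread at a time here
        pthread_mutex_unlock(&lock);            // leave critical section
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);          // always 2000000 with the lock
    return 0;
}
```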

15
Q

Atomically

A

As a unit, all or none.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

Condition variable

A

Used to wait for a condition to become true: an explicit queue that threads can put themselves on when the condition is not met; another thread can wake a waiting thread by signaling on that condition.
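
A minimal sketch of this with POSIX condition variables; the names `done`, `m`, `c`, `thr_wait`, and `thr_signal` are illustrative.

```c
#include <pthread.h>

static int done = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

void thr_wait(void) {                 // caller sleeps until done becomes true
    pthread_mutex_lock(&m);
    while (done == 0)                 // re-check the condition after each wakeup
        pthread_cond_wait(&c, &m);    // releases m while asleep, reacquires on wakeup
    pthread_mutex_unlock(&m);
}

void thr_signal(void) {               // another thread sets the condition and wakes the waiter
    pthread_mutex_lock(&m);
    done = 1;
    pthread_cond_signal(&c);
    pthread_mutex_unlock(&m);
}
```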

17
Q

Semaphore

A

A powerful and flexible primitive for writing concurrent programs, prized for its simplicity and utility. A semaphore used as a lock is called a binary semaphore.
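
A sketch of a binary semaphore used as a lock, using POSIX unnamed semaphores; `acquire`/`release` are illustrative names.

```c
#include <semaphore.h>

sem_t m;

void init(void)    { sem_init(&m, 0, 1); }  // initial value 1 => behaves as a lock
void acquire(void) { sem_wait(&m); }        // decrement; blocks if the value is 0
void release(void) { sem_post(&m); }        // increment; wakes a waiter if any
```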

18
Q

Spin-waiting

A

Endlessly checks the value of a flag, burning CPU time until the flag changes.
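
A sketch of spin-waiting using a C11 `atomic_flag`: the while loop endlessly re-checks (and re-sets) the flag until the holder clears it.

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   // spin: burn CPU cycles until the holder clears the flag
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);
}
```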

19
Q

Preemptive scheduler

A

One that will interrupt a thread via a timer.

20
Q

Yield:

A

A system call that moves the caller from the running state to the ready state (the process deschedules itself).

21
Q

Priority inversion:

A

A higher-priority thread waits (e.g., spins) for a lock held by a lower-priority thread; because the lower-priority holder may never get scheduled, the higher-priority thread can wait indefinitely.

22
Q

Two-phase lock:

A

Uses a spin lock: if the lock is not acquired during the first spin phase, a second phase is entered in which the caller is put to sleep and only woken up when the lock becomes free later.

23
Q

Compare-and-swap

A

Tests whether the value at the address equals the expected value; if so, it updates the memory location with the new value, otherwise it does nothing.
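
A sketch of the semantics in C, plus an illustrative lock-free increment built on the GCC/Clang `__sync_bool_compare_and_swap` builtin (the function names are mine).

```c
// C description of the instruction's semantics; the hardware executes this
// atomically (this plain C function itself is NOT atomic).
int compare_and_swap(int *ptr, int expected, int newval) {
    int actual = *ptr;
    if (actual == expected)
        *ptr = newval;
    return actual;
}

// Illustrative use: a lock-free increment via the compiler builtin.
void atomic_increment(int *value) {
    int old;
    do {
        old = *value;
    } while (!__sync_bool_compare_and_swap(value, old, old + 1));
}
```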

24
Q

Overlap

A

Avoid blocking the program due to slow I/O by overlapping I/O with other work.

25
Q

Advantages with threads

A

Threads are better than processes when sharing data with each other, since they share the same address space.

26
Q

Perfect scaling

A

Even though more work is done, it is done in parallel, and hence the time taken to complete the task does not increase.

27
Q

Sloppy counter

A

One local counter per CPU core and one global counter; local values are periodically transferred to the global counter. If the threshold is low, scalability is low but the global value is more accurate.
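
A sketch of the idea: per-CPU counters and locks, with local counts flushed to the global count once they reach a threshold. `NUMCPUS` and the threshold value are assumed parameters.

```c
#include <pthread.h>

#define NUMCPUS 4

typedef struct {
    int             global;            // global count
    pthread_mutex_t glock;             // global lock
    int             local[NUMCPUS];    // per-CPU count
    pthread_mutex_t llock[NUMCPUS];    // per-CPU lock
    int             threshold;         // flush threshold S
} counter_t;

void counter_init(counter_t *c, int threshold) {
    c->threshold = threshold;
    c->global = 0;
    pthread_mutex_init(&c->glock, NULL);
    for (int i = 0; i < NUMCPUS; i++) {
        c->local[i] = 0;
        pthread_mutex_init(&c->llock[i], NULL);
    }
}

void counter_update(counter_t *c, int cpu, int amt) {
    pthread_mutex_lock(&c->llock[cpu]);
    c->local[cpu] += amt;
    if (c->local[cpu] >= c->threshold) {        // transfer to the global counter
        pthread_mutex_lock(&c->glock);
        c->global += c->local[cpu];
        pthread_mutex_unlock(&c->glock);
        c->local[cpu] = 0;
    }
    pthread_mutex_unlock(&c->llock[cpu]);
}
```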

28
Q

Big kernel lock

A

One big lock protecting the whole kernel; does not perform well on multi-CPU systems.

29
Q

True or false

A

True: concurrent counters protected by a single lock are not scalable.

30
Q

Reader-Writer lock

A

When acquiring a read lock, the reader first acquires the internal lock and then increments a readers variable to track how many readers are currently inside the data structure. The first reader to enter also acquires the write lock, so writers are excluded while any reader is inside.
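
A sketch of such a reader-writer lock built from POSIX semaphores, matching the description above (the first reader takes the write lock, the last reader releases it).

```c
#include <semaphore.h>

typedef struct {
    sem_t lock;       // protects the readers count
    sem_t writelock;  // allows ONE writer or MANY readers
    int   readers;    // number of readers currently inside
} rwlock_t;

void rwlock_init(rwlock_t *rw) {
    rw->readers = 0;
    sem_init(&rw->lock, 0, 1);
    sem_init(&rw->writelock, 0, 1);
}

void rwlock_acquire_readlock(rwlock_t *rw) {
    sem_wait(&rw->lock);
    rw->readers++;
    if (rw->readers == 1)            // first reader grabs the write lock
        sem_wait(&rw->writelock);
    sem_post(&rw->lock);
}

void rwlock_release_readlock(rwlock_t *rw) {
    sem_wait(&rw->lock);
    rw->readers--;
    if (rw->readers == 0)            // last reader releases the write lock
        sem_post(&rw->writelock);
    sem_post(&rw->lock);
}

void rwlock_acquire_writelock(rwlock_t *rw) { sem_wait(&rw->writelock); }
void rwlock_release_writelock(rwlock_t *rw) { sem_post(&rw->writelock); }
```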

31
Q

How to combat deadlock

A

Break the symmetry: let one of the philosophers grab the right fork before the left (Dijkstra's solution to the dining philosophers problem).

32
Q

Atomicity violation

A

The desired serializability among multiple memory accesses is violated. (locks around shared-variable references can solve it)

33
Q

Order violation

A

The desired order between two memory accesses is flipped. (enforce ordering using condition variables)

34
Q

Encapsulation:

A

Hiding code behind an abstraction, so implementation details are not visible to the caller.

35
Q

Conditions for deadlock

A

Mutual exclusion, hold-and-wait, no preemption, circular wait.

36
Q

Hold-and-wait:

A

Threads hold resources allocated to them while waiting for additional resources.

37
Q

No preemption

A

Resources (locks) cannot be forcibly removed from threads that are holding them.

38
Q

Circular wait

A

There exists a circular chain of threads such that each thread holds one or more resources that are being requested by the next thread in the chain.

39
Q

Total ordering:

A

Strict ordering of which locks to acquire prevents circular wait.
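
A common sketch of such an ordering: always acquire two locks in address order, so no two threads can wait on each other in a cycle (assumes `m1` and `m2` are distinct locks).

```c
#include <pthread.h>

void do_both(pthread_mutex_t *m1, pthread_mutex_t *m2) {
    if (m1 < m2) {              // always take the lower-addressed lock first
        pthread_mutex_lock(m1);
        pthread_mutex_lock(m2);
    } else {
        pthread_mutex_lock(m2);
        pthread_mutex_lock(m1);
    }
    /* ... critical section that needs both locks ... */
    pthread_mutex_unlock(m2);
    pthread_mutex_unlock(m1);
}
```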

40
Q

How to prevent hold and wait

A

Hold-and-wait can be prevented by acquiring a lock that guards lock acquisition itself, so that all needed locks are grabbed at once. This is likely to decrease concurrency, because locks are acquired up front rather than when they are truly needed.
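
A sketch of the idea: a global "prevention" lock guards the acquisition phase, so both locks are grabbed atomically. The lock names `L1` and `L2` are illustrative.

```c
#include <pthread.h>

pthread_mutex_t prevention = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;   // illustrative locks
pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

void grab_both(void) {
    pthread_mutex_lock(&prevention);   // begin lock acquisition
    pthread_mutex_lock(&L1);
    pthread_mutex_lock(&L2);
    pthread_mutex_unlock(&prevention); // end lock acquisition
    /* ... work while holding L1 and L2 ... */
    pthread_mutex_unlock(&L2);
    pthread_mutex_unlock(&L1);
}
```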

41
Q

Livelock:

A

Two threads could both be repeatedly attempting the lock/trylock/unlock sequence and repeatedly failing to acquire both locks: both keep running, but neither makes progress.

42
Q

How to avoid deadlocks

A

Avoid deadlock by avoiding the need for mutual exclusion (e.g., use powerful hardware instructions to build data structures that require no explicit locking).
Deadlock avoidance: schedule threads so as to guarantee that no deadlock can occur. This can reduce performance, since total completion time may increase.
Detect and recover: if deadlocks are rare, this strategy is acceptable; a detector finds the deadlock and the system restarts.

43
Q

Event-based concurrency

A

Wait for an event to occur, check its type, and do the small amount of work it requires (issuing I/O requests, scheduling other events, etc.).
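
A minimal sketch of an event loop built on `select()`; `process_event()` is an illustrative handler that just reads whatever arrived, and the descriptors 0..maxfd are assumed to be open.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

// Illustrative handler: do the small amount of work the event requires.
static void process_event(int fd) {
    char buf[512];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        printf("handled %zd bytes on fd %d\n", n, fd);
}

// Event loop: block until some descriptor is readable, then handle each one.
void event_loop(int maxfd) {
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        for (int fd = 0; fd <= maxfd; fd++)   // watch descriptors 0..maxfd
            FD_SET(fd, &readfds);

        int rc = select(maxfd + 1, &readfds, NULL, NULL, NULL);
        if (rc < 0) { perror("select"); return; }

        for (int fd = 0; fd <= maxfd; fd++)   // check which events occurred
            if (FD_ISSET(fd, &readfds))
                process_event(fd);
    }
}
```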

44
Q

Event handler

A

The code that processes each event. If a handler makes a blocking system call, the whole event loop blocks; this can be solved with asynchronous I/O and signal handlers.

45
Q

Signal handler

A

Some code in the application that handles an incoming signal.

46
Q

Manual stack management:

A

In an event-based program, the needed state must be packaged up by hand for later handlers; in a thread-based program that state is simply on the stack of the thread, so no manual stack management is needed.

47
Q

Event-based problems

A

If an event handler page faults, it will block. On multiple cores, one event loop per CPU is needed, plus locks over the critical sections. Hard to implement.

48
Q

Memory bus (proprietary)

A

Connects the CPU to main memory.

49
Q

General I/O bus

A

E.g., PCI; connects higher-performance devices such as graphics cards.

50
Q

Peripheral bus

A

SCSI, SATA, USB

51
Q

Status register:

A

Can be read to see the current status of the device.

52
Q

Command register:

A

To tell the device to perform a certain task.

53
Q

Data register:

A

To pass data to/from the device. The OS controls the device's behaviour through these registers.

54
Q

Programmed I/O

A

When the main CPU is involved with the data movement.
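
A sketch of the canonical polled protocol using programmed I/O; `read_status()`, `write_data()`, `write_command()`, and the `BUSY` bit are hypothetical stand-ins for real port or memory-mapped register accesses.

```c
// Hypothetical register-access helpers (stand-ins for real port I/O or
// memory-mapped loads/stores).
unsigned char read_status(void);
void write_data(const void *buf, int len);
void write_command(unsigned char cmd);

#define BUSY 0x80   /* assumed "busy" bit in the status register */

void polled_write(const void *buf, int len, unsigned char cmd) {
    while (read_status() & BUSY)
        ;                       // poll until the device is not busy
    write_data(buf, len);       // programmed I/O: the CPU moves the data itself
    write_command(cmd);         // writing the command register starts the device
    while (read_status() & BUSY)
        ;                       // poll until the device has finished the request
}
```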

55
Q

Interrupt handler

A

Operating system code that will finish the request. Interrupts are used for overlap: the OS can do something else while waiting for the disk.

56
Q

Two-phased hybrid approach

A

Uses both polling and interrupts: poll for a while and, if the device is not yet done, fall back to interrupts. Good if the speed of the device is unknown.

57
Q

Coalescing:

A

A device that needs to raise an interrupt first waits for a bit before delivering it to the CPU, so that several interrupts can be coalesced into a single delivery.

58
Q

Direct Memory Access (DMA)

A

A DMA engine is a specific device within a system that can orchestrate transfers between devices and main memory without much CPU intervention.

The OS programs the DMA engine; when the transfer completes, the DMA controller raises an interrupt so the OS knows it is done.

59
Q

I/O instructions:

A

These instructions specify a way for the OS to send data to specific device registers. These instructions are usually privileged.

60
Q

Memory-mapped I/O:

A

The hardware makes device registers available as if they were memory locations; registers are accessed with loads (to read) and stores (to write) to those addresses.

61
Q

Device driver:

A

A piece of software in the OS that knows the details of how a device works.

62
Q

IDE interface

A

Wait for the drive to be ready, write parameters to the command registers, start the I/O, transfer the data, handle interrupts, and handle errors.

63
Q

Hard disk drive

A

The drive consists of a large number of sectors (512-byte blocks), each of which can be read or written. The disk can be viewed as an array of sectors; 0 to n-1 is the address space of the drive.

A single 512-byte write is atomic.

64
Q

Torn write

A

Only a portion of a larger write may complete.

65
Q

Disk scheduling:

A

The scheduler estimates how long each request will take (seek plus likely rotational delay) and tries to follow the SJF (shortest job first) principle.

66
Q

Disk head

A

Reads and writes to disk.

67
Q

Rotational delay

A

Time it takes for the desired sector to rotate under the disk head.

68
Q

Disk arm:

A

Moves the disk head across the surface to position it over the desired track.

69
Q

Seek:

A

The arm moves the head to the desired track.

70
Q

Transfer:

A

Final phase of I/O: data is either read from or written to the surface.

71
Q

How is the information retrieved and taken from disk?

A

Seek -> Wait for rotational delay -> transfer
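
The same sequence as a formula (standard notation), useful for estimating the time and rate of a single request:

```latex
T_{I/O} = T_{seek} + T_{rotation} + T_{transfer},
\qquad
R_{I/O} = \frac{\text{Size}_{\text{transfer}}}{T_{I/O}}
```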

72
Q

Multi-zoned disk drives

A

The disk is organized into multiple zones, each with a different number of sectors per track (outer zones hold more sectors).

73
Q

Cache (track buffer)

A

A small amount of memory on the drive used to hold data read from or written to the disk; for example, when reading a sector the drive may read in all the sectors on that track and cache them to save time.

74
Q

Write back:

A

Acknowledges the write as completed as soon as the data has been put in the drive's memory. Can be dangerous if applications require data to be written to disk in a certain order for correctness.

75
Q

Write through:

A

Acknowledges the write as completed only after it has actually been written to disk.

76
Q

Random workload:

A

Issues small reads to random locations on the disk.

77
Q

Sequential workload:

A

Reads a large number of sectors consecutively from the disk without jumping around.

78
Q

SSTF:

A

Shortest Seek Time First: orders the queue of I/O requests by track, picking requests on the nearest track first. The drive geometry is abstracted away from the OS, which can instead implement nearest-block-first (NBF). Can lead to starvation.

79
Q

SCAN:

A

Moves back and forth across the disk, servicing requests in order across the tracks; a single pass is called a sweep.

80
Q

F-SCAN:

A

Freezes the queue to be serviced while it is performing a sweep; requests arriving mid-sweep are deferred to the next one. (SCAN is also called the elevator algorithm.)

81
Q

I/O merging:

A

Important at the OS level: it reduces the number of requests sent to the disk, e.g., merging requests for blocks 33 and 34 into a single two-block request.

82
Q

Work-conserving

A

Sends I/O requests to the drive as soon as possible, even if only a single request is pending.

83
Q

Anticipatory disk scheduling:

A

By waiting, a new and "better" request may arrive at the disk, and overall efficiency is increased.

84
Q

Redundant Array of Inexpensive Disks (RAID):

A

A technique that uses multiple disks in concert to build a faster, bigger, more reliable disk system. Transparency improves the deployability of RAID. A RAID consists of a microcontroller that runs firmware to direct its operation and volatile memory such as DRAM to buffer data blocks as they are read and written.
At a high level, a RAID is like a specialized computer system (processor, memory, disks).

85
Q

Fail-stop fault model:

A

A disk can be in one of two states: working or failed. On a working disk, all blocks can be read or written.

86
Q

Three objectives of RAID

A

Capacity, reliability, performance

87
Q

RAID Level 0 (striping):

A

Spreads the blocks of the array across the disks in a round-robin fashion.
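
A sketch of the round-robin mapping from a logical block A to (disk, offset), assuming a chunk size of one block and `NUM_DISKS` disks (the names are illustrative).

```c
#define NUM_DISKS 4

// Logical block A lands on disk (A mod NUM_DISKS) at offset (A div NUM_DISKS).
void map_block(int A, int *disk, int *offset) {
    *disk   = A % NUM_DISKS;   // which disk the block lands on (round-robin)
    *offset = A / NUM_DISKS;   // which block on that disk
}
```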

88
Q

File:

A

A linear array of bytes. It has a low-level name of which the user is unaware: that name is the inode number.

89
Q

Directory:

A

Has a low-level name and contains a list of entries (user-readable name, low-level name). Filenames make the system easier to use and more modular.

90
Q

File descriptor:

A

An integer, private per process, used to access files (conceptually a pointer to an object of type file).

91
Q

fsync()

A

The file system responds by forcing all dirty (not yet written) data to disk, returning when all writes are complete. It is often forgotten that the directory containing the file should also be fsync'd.
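
A sketch of the full sequence, including the often-forgotten directory fsync; the paths and buffer contents are examples only, and error handling is abbreviated.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int write_durably(void) {
    int fd = open("/tmp/example-file", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) return -1;
    const char *buf = "important data";
    if (write(fd, buf, strlen(buf)) < 0) return -1;
    if (fsync(fd) < 0) return -1;            // force the file's dirty data to disk
    close(fd);

    int dirfd = open("/tmp", O_RDONLY);      // the often-forgotten step:
    if (dirfd < 0) return -1;
    if (fsync(dirfd) < 0) return -1;         // fsync the containing directory too
    close(dirfd);
    return 0;
}
```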

92
Q

Metadata:

A

Data about files.

93
Q

Inode

A

Contains a file's metadata (a persistent data structure kept by the file system that holds this information).

An inode can refer to data blocks with direct pointers (disk addresses) stored inside the inode. This can limit the size of a file, because inodes are limited in size and cannot hold an arbitrarily large number of pointers.

94
Q

Remove:

A

Unlinks the file.

95
Q

Mkdir:

A

A directory is created with two initial entries: one referring to itself ('.') and one referring to its parent ('..').

96
Q

link:

A

Creates another name in a directory and refers it to the same inode number as the original file, so two human-readable names refer to the same file.

97
Q

Link count/reference count:

A

Allows the file system to track how many links point to that inode. When the reference count reaches zero, the file system frees the inode and its related data blocks (truly deleting the file).

98
Q

What you can't hard link

A

You can't create a hard link to a directory (it could create a cycle in the directory tree), and you can't hard link to files in other disk partitions.

99
Q

RAID Level 1 (mirroring):

A

Consists of an exact copy (or mirror) of a set of data on two or more disks. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity.

100
Q

RAID Levels 4/5 (parity-based redundancy).

A

Consists of block-level striping with a dedicated (RAID 4) or distributed (RAID 5) parity block; the parity is used to provide fault tolerance, at the cost of extra write load, since parity must be updated on every write.

101
Q

Partitioning

A

The creation of one or more regions on a hard disk so that an operating system can manage the information in each region separately (making the storage device visible to an operating system). Done by writing information into blocks of the storage device.

102
Q

Directory:

A

Has a low-level name and contains a list of entries (user-readable name, low-level name). Filenames make the system easier to use and more modular. A directory is a specific type of file that stores name-to-inode-number mappings.

103
Q

Mounting

A

Takes an existing directory as a target mount point and essentially pastes a new file system onto the directory tree at that point. Unifies all file systems into one tree.

104
Q

Filesystem

A

The way an operating system organizes files on a disk.

105
Q

Data region

A

Region of the disk we use for user data.

106
Q

Inode table:

A

Holds an array of on-disk inodes.

107
Q

Allocation structures:

A

Track whether data blocks are free or allocated, e.g., with a free list (pointing to the first free block) or a bitmap.

108
Q

Bitmap:

A

One bitmap for the data region (data bitmap) and one for the inode table (inode bitmap).

109
Q

Superblock:

A

Contains information about this particular file system: how many inodes and data blocks it has, where the inode table begins, and a magic number identifying the file system type.

110
Q

Allocation structures:

A

Track which inodes or data blocks are free or allocated.

111
Q

Indirect pointer

A

Points to a block that contains more pointers, each of which points to user data.

112
Q

Multi-level index approach:

A

Uses indirect pointers that point to further indirect pointers (double and triple indirect blocks), allowing much larger files.
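
A worked size bound under assumed parameters (12 direct pointers, one single-indirect block, 4 KB blocks, 4-byte disk addresses, so an indirect block holds 1024 pointers):

```latex
\text{max file size}
  = \underbrace{12}_{\text{direct}} \times 4\,\text{KB}
  + \underbrace{1024}_{\text{single indirect}} \times 4\,\text{KB}
  = 4144\,\text{KB} \approx 4\,\text{MB}
```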

113
Q

File Allocation Table (FAT)

A

Uses a linked list as the data structure for locating data blocks. It has no inodes; directory entries store the metadata, so hard links are not possible.

114
Q

Free space management:

A

Tracks which inodes and data blocks are free and which are not, so the file system can find space for a newly allocated file or directory.

115
Q

File system cache:

A

Caches popular blocks in main memory, because of the number of I/Os each open, read, and write system call would otherwise invoke.

116
Q

Dynamic partitioning:

A

Many modern operating systems integrate virtual memory pages and file system pages into a unified page cache.

117
Q

Write buffering

A

Delays writes so the file system can batch some updates into a smaller set of I/Os; a number of writes are buffered in memory.

118
Q

Static buffering:

A

Easier to implement, more predictable performance.

119
Q

Dynamic buffering:

A

Achieve better utilization, but can be more complex to implement.

120
Q

File system purpose

A

Provides an abstraction of data in the form of files and directories.

121
Q

Amortization:

A

The process of reducing a fixed overhead by doing more work each time the overhead is incurred.

122
Q

Sub-blocks

A

A larger block divided into smaller pieces that store a small file's data; when the file grows to the size of a full block, the data is transferred to a full block and the sub-blocks are freed for future use.

123
Q

Parameterization:

A

FFS would figure out the specific performance parameters of the disk and use those to decide on the exact staggered layout scheme.

124
Q

fsck (file system checker):

A

Checks the file system after a reboot for inconsistencies, space leaks, and the like, and fixes them. It cannot detect the case where an inode points to garbage data. It runs before the file system is mounted and made available.

125
Q

Fsck does

A

Checks that the superblock looks reasonable, e.g., that the reported file system size is greater than the number of blocks that have been allocated.

126
Q

Free blocks

A

Next, fsck scans the inodes, indirect blocks, etc., to build an understanding of which blocks are currently allocated within the file system. It trusts the inodes and corrects the bitmaps accordingly.

127
Q

Circular log:

A

Journaling file systems treat the log as a circular data structure, reusing it over and over.

128
Q

Journaling (write-ahead logging):

A

Techniques for providing atomicity and durability (two of the ACID properties) in database systems. In a system using WAL, all modifications are written to a log before they are applied; usually both redo and undo information is stored in the log.

129
Q

Crash scenarios:

A

Inconsistency in file system data structures, space leaks, and returning garbage data to the user.