Operating Systems Flashcards

(91 cards)

1
Q

define operating system

A

a program that acts as an intermediary between a user and hardware

2
Q

OS goals

A
  • execute user programs
  • make solving user problems easier
  • make the computer system convenient to use
  • use system resources efficiently and fairly (priority and scheduling)
  • manage processes
3
Q

OS is a …. (roles and responsibilities)

A
  • resource allocator: responsible for management of all computer resources
  • kernel: the one program that runs at all times
  • control program: controls execution of user programs and operation of I/O devices
4
Q

what is a process

A

a unit of execution that uses its associated resources to execute a collection of instructions, completing a reasonable task

5
Q

OS process management roles

A
  • creating and deleting processes
  • suspending and resuming processes
  • providing mechanisms for process synchronisation and scheduling (e.g. by priority)
6
Q

process includes

A
  • code: text section
  • heap: memory dynamically allocated during program execution
  • stack: temporary data e.g. local variables, function parameters, return addresses
  • data section: global variables
  • current activity: represented by the PC (program counter) and CPU register contents
7
Q

information about a process is stored as:

A

a process control block (PCB), which holds info on:

  • unique identifier: PID (process ID)
  • state
  • CPU utilisation
  • CPU scheduling info e.g. priority
  • memory usage
  • other info e.g. owner, description
8
Q

process state (~status)

A
  • new: process is being created
  • running: instructions are being executed
  • waiting: process is waiting for some event to occur
  • ready: process is ready to be dispatched to CPU
  • terminated: process has completed execution or has been terminated by another process
9
Q

process state cycle

A

new –(admitted)–> ready
ready –(scheduler dispatch)–> running
running –(interrupt)–> ready
running –(I/O or event wait)–> waiting
waiting –(I/O or event completion)–> ready
running –(exit)–> terminated

10
Q

process creation

A

a process (the parent) can create a number of child processes, which can in turn create processes themselves, forming a tree of processes.

11
Q

process creation: sharing resources

A

parent and child processes can either:

  • share all resources
  • child shares a subset of the parent's resources
  • share no resources
12
Q

process creation: execution

A

parent and child either:

  • execute concurrently
  • parent waits for child process to terminate
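The "parent waits for child to terminate" option can be sketched in Python using the POSIX `os.fork`/`os.waitpid` calls (a minimal illustration, Unix-only; the exit status 7 is an arbitrary example value):

```python
import os

# Sketch: the parent creates a child with fork(); both run concurrently
# until the parent blocks in waitpid(), waiting for the child to terminate.
pid = os.fork()
if pid == 0:
    # child process: do some work, then terminate with a status code
    os._exit(7)
else:
    # parent process: block until the child terminates, then read its status
    _, status = os.waitpid(pid, 0)
    child_exit_code = os.WEXITSTATUS(status)  # 7
```

The other option on the card, concurrent execution, is simply what happens between `fork()` and `waitpid()`: both processes are runnable at once.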
13
Q

process termination

A

the process executes its final instruction and asks the OS to delete it:

  • output data is passed from the child to the parent process
  • the child process's resources are deallocated by the OS
14
Q

why would parent process terminate execution of child process?

A
  • child process exceeded its allocated resources
  • child task no longer necessary
  • parent itself is terminating (cascading termination)
15
Q

aim of kernel

A

provide an environment in which processes can exist

16
Q

four essential components of a kernel

A
  • privileged instruction set
  • interrupt mechanism
  • clock
  • memory protection mechanisms
17
Q

kernel consists of

A
  • FLIH (first level interrupt handler): manage interrupts
  • dispatcher: switch the CPU between processes
  • intra OS communications via system bus
18
Q

what is an interrupt

A

a signal from either hardware or software that an event has occurred, causing a change in processing:

  • hardware: triggers an interrupt by sending a signal to the CPU via the system bus (e.g. an I/O event such as printing complete)
  • software: triggers an interrupt by making a system call asking for some action by the OS (e.g. asking for the time)

19
Q

interrupt routines

A

OS routines executed whenever an interrupt has occurred

20
Q

function of FLIH

A

  • determine the source of the interrupt
  • initiate servicing of the interrupt: select a suitable service routine (handing over via the dispatcher)

21
Q

privileged instruction set

A

some instructions are only accessible to the OS:

  • managing I/O
  • halting processes
  • interrupt management
22
Q

dual mode

A
  • distinguishes between user-defined code and OS code, so users cannot execute instructions that could cause harm:
    1) user mode
    2) kernel mode (aka supervisor/system/privileged mode)
23
Q

when switch from user to kernel mode?

A
  • an interrupt occurs (hardware)
  • an error condition occurs in a user process (software)
  • user process calls on OS to execute a function requiring privileged instruction set
  • attempt to execute privileged instruction while in user mode
24
Q

what does dispatcher do

A

assigns the CPU (and other resources) to processes

25
Q

when is the dispatcher initiated?

A
  • when the current process can no longer continue, or the CPU would be better used elsewhere, e.g.:
  • after an interrupt changes a process's state
  • after a system call which leaves the current process unable to continue
  • after an error causes a process to suspend

26
Q

advantages of multiprogramming

A
  • convenience: a single user can run many tasks
  • computational speed-up: multiple CPUs and cores
  • information sharing: shared files
  • modularity: modular programming e.g. OOP

27
Q

multiprogramming is…

A

when there are many processes in memory concurrently. when one process is waiting, the CPU executes (switches to running) one of the processes in the ready queue. CPU scheduling is the method used to support multiprogramming in process management.

28
Q

factors affecting multiprogramming

A
  • number of CPUs
  • cores vs CPUs
  • threading vs hyperthreading

29
Q

thread

A

basic unit of CPU utilisation (like a subset of a process). threads share attributes (e.g. priority) with their peer threads but may execute different code.

30
Q

benefits of using threading

A
  • responsiveness: the process continues running even if part of it (i.e. a thread) is blocked or carrying out a lengthy operation
  • resource sharing: threads share the resources of their common process
  • performance: cooperation of multiple threads in the same job → increased throughput and improved performance
  • economy: allocating memory and resources for process creation is costly; threads are cheaper
  • multiprocessor architecture: a single-threaded process can only run on one processor
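The resource-sharing benefit can be illustrated with a small sketch (the deck mentions Java and Win32 thread libraries; Python's threading module is used here purely as a stand-in, and all names are invented): peer threads share their process's memory.

```python
import threading

# Two-plus peer threads share the data of their common process (here, the
# `results` dict), while each executes with its own arguments - a minimal
# sketch of the "resource sharing" benefit of threading.
results = {}
lock = threading.Lock()

def worker(name, value):
    with lock:                      # synchronise access to shared state
        results[name] = value * 2

threads = [threading.Thread(target=worker, args=(f"t{i}", i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                        # wait for all peer threads to finish

# results == {"t0": 0, "t1": 2, "t2": 4, "t3": 6}
```

The lock illustrates why the deck lists process synchronisation as an OS responsibility: shared state needs coordinated access.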
31
Q

thread management in user programs is supported by which libraries?

A
  • Java threads
  • Win32 (Windows API) threads

32
Q

thread management in OS programs is supported by…

A

a kernel module

33
Q

multithreading models

A
  • one to one
  • many to one
  • many to many

34
Q

multithreading model: one to one

A

one user application thread to one kernel thread

35
Q

multithreading model: many to one

A

many user application threads to one kernel thread. the programmer controls the number of user application threads.

36
Q

multithreading model: many to many

A

N user application threads to ≤ N kernel threads; defined by the capabilities of the OS
37
Q

how does Windows schedule threads?

A

priority-based, pre-emptive scheduling algorithm: the highest-priority thread is always running. the thread selected by the dispatcher runs until:

  • a higher-priority thread pre-empts it
  • it terminates (e.g. it completes, or another event causes it to terminate)
  • its time quantum ends
  • a (privileged) system call is made

38
Q

Windows priority scheme

A

32-level priority scheme: the higher the level, the higher the priority. priorities are split into two classes:

  • variable class: levels 1-15, priorities can change
  • real-time class: levels 16-31
  • level 0: used in memory management

39
Q

when a thread's time quantum runs out

A

the thread is interrupted and dispatched to the ready queue. its priority is lowered, but never below the base priority.

40
Q

when a thread is released from a wait operation

A

the thread is dispatched to the ready queue. its priority is increased, depending on what the thread was waiting for.

41
Q

CPU scheduling improves

A
  • utilisation of the CPU
  • speed of the computer's response to its users
42
Q

types of memory

A
  • cache
  • main memory (volatile, e.g. RAM)
  • storage memory (non-volatile, e.g. flash memory in memory sticks, SSDs)
  • virtual memory

43
Q

primary memory

A

cache and main memory

44
Q

cache is managed by

A

the CPU, which also physically supports it

45
Q

similarity between cache and main memory

A

they use the same memory management approaches

46
Q

what is memory made up of?

A

memory cells, each storing one bit (0/1) depending on whether the cell's voltage reaches a predetermined threshold

47
Q

logical address (aka virtual address)

A

an address generated by the CPU

48
Q

physical address

A

an address of a location in physical memory

49
Q

memory management unit (MMU)

A

hardware that translates logical addresses to physical addresses

50
Q

user application view of memory

A

it deals only with logical addresses: it never knows the real physical address

51
Q

memory is split into two partitions

A

kernel processes and user processes are held in separate partitions. each partition is contained in a single contiguous section of memory.

52
Q

when is memory read most efficiently?

A

when memory is contiguous: it can be read in sequence, which is faster
53
Q

memory allocation strategy has to be based on

A
  • holes: blocks of available memory
  • holes of various sizes are scattered across memory
  • when a process arrives, it is allocated a hole large enough to accommodate it

54
Q

memory management: the OS needs to store information about

A

each partition: its allocated holes and its free holes

55
Q

memory hole allocation strategies

A
  • first fit: allocate the process to the first available hole that is large enough (best for speed)
  • best fit: allocate the process to the smallest (but large enough) of the available holes (best for storage utilisation)
  • worst fit: allocate the process to the largest of the available holes
  • for best and worst fit, the entire list of holes must be searched
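The three strategies can be compared with a short sketch (hole sizes, the request size, and the helper name are invented for illustration; a real allocator would also split holes and track addresses):

```python
# First/best/worst fit over a list of hole sizes. Returns the index of the
# chosen hole, or None if no hole is large enough.
def choose_hole(holes, request, strategy):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]   # lowest index
    if strategy == "best":
        return min(candidates)[1]                       # smallest fitting hole
    if strategy == "worst":
        return max(candidates)[1]                       # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
first = choose_hole(holes, 212, "first")   # index 1 (500 is the first hole >= 212)
best = choose_hole(holes, 212, "best")     # index 3 (300 is the smallest fit)
worst = choose_hole(holes, 212, "worst")   # index 4 (600 is the largest hole)
```

Note that `best` and `worst` examine every candidate, matching the card's point that those strategies must search the entire list.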
56
Q

external fragmentation

A

free memory space that in total could satisfy a request, but is not contiguous. occurrences can be reduced by compaction: shuffling memory so that all free space is contiguous (in one block).

57
Q

internal fragmentation

A

memory that is successfully allocated but slightly larger than the requested amount; the unused remainder is wasted inside the allocated block. occurs when memory is allocated in blocks of sizes 2, 4, 8, 16, etc., so each allocation is rounded up to a power of 2.
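A quick arithmetic sketch of the rounding-up effect described above (the request size is invented for illustration):

```python
# Allocation rounded up to the next power of two: the unused remainder is
# internal fragmentation.
def round_up_pow2(n):
    return 1 << (n - 1).bit_length()

request = 5000                       # bytes requested
allocated = round_up_pow2(request)   # 8192 (next power of 2)
internal_frag = allocated - request  # 3192 bytes wasted inside the block
```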
58
Q

describe paging

A

physical memory is split into fixed-size blocks called frames, whose size is a power of 2 (e.g. 512-8192 bytes). logical memory is split into blocks of the same size, called pages. to run a program of n pages, n free frames must be found: this scheme is paging.

59
Q

page table

A

manages the mapping of logical → physical addresses (address translation)

60
Q

a logical memory address generated by the CPU consists of

A
  • p: page number, used as an index into the page table, whose entries contain the base address of each page in physical memory
  • d: page offset, combined with the base address to define the physical memory address sent to the memory unit
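The p/d split can be sketched numerically (the page size, page table contents, and addresses are invented for illustration):

```python
# Translate a logical address into a physical address using a page table,
# with a 1 KiB page size: the offset d is the remainder and the page
# number p is the quotient.
PAGE_SIZE = 1024
page_table = {0: 5, 1: 9, 2: 2}     # page number -> frame number (made up)

def translate(logical):
    p = logical // PAGE_SIZE        # page number: index into the page table
    d = logical % PAGE_SIZE         # page offset within the page
    frame = page_table[p]           # base frame from the page table
    return frame * PAGE_SIZE + d    # physical address sent to the memory unit

physical = translate(1500)          # p=1, d=476 -> frame 9 -> 9*1024+476 = 9692
```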
61
Q

virtual memory address mapping

A

mapping of addresses between main memory and secondary storage: the same technique as the mapping between logical and physical addresses

62
Q

memory protection bit

A

a valid/invalid protection bit is attached to each entry in the page table:

  • valid: the associated page is in the process's logical address space
  • invalid: the associated page is not in the process's logical address space

63
Q

virtual memory

A

a capability of the OS that allows programs to address more memory locations than are available in main memory, freeing the developer from the burden of memory management so they can focus on application development

64
Q

how virtual memory works

A
  • there is more virtual memory than physical memory
  • many processes share the same physical address space
  • only part of a process needs to be in physical memory for execution
  • a free-frame list for physical memory is maintained by the OS

65
Q

how does a process view its address space?

A

as a contiguous block of memory holding the objects it needs to execute:

  • code (read only)
  • data (read and write)
  • heap (address space which expands and contracts during execution; it holds data produced during execution)
  • stack (address space which holds data and instructions known at the time of a procedure call; it grows when nested procedure calls are issued and shrinks as procedures return)

66
Q

when is a page brought into memory?

A

only when it is required by the process during execution. this reduces the number of pages to be transferred and the number of frames to be allocated to a process, which increases the number of processes executing and improves the overall response time for a collection of processes.

67
Q

when a reference is made to a page's address…

A
  • if the reference is invalid: abort
  • if the page is not in memory: bring it into memory

a valid/invalid bit is associated with each page entry: 1 = in memory, 0 = not in memory → page fault trap. initially, the valid/invalid bit is set to 0 on all entries.

68
Q

a process requests access to an address within a page

A

the validity of the request is checked.

if the request is invalid:

  • terminate the process, clear all previously allocated main memory, and report an "invalid memory access" message

if the request is valid:

  • if the page has already been allocated a frame in memory: continue
  • else (page fault trap): identify a free frame from the free-frame list, read the page into the identified frame, update the process's page table, and restart the instruction interrupted by the page fault trap
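The valid-request path above can be sketched as a small simulation (frame numbers, page numbers, and names are invented; a real OS does this inside the kernel's page-fault handler):

```python
# Demand-paging sketch: access a page; if it has no frame yet, take one from
# the free-frame list (a "page fault") and update the page table.
page_table = {}          # page -> frame (present pages only)
free_frames = [3, 7, 8]  # free-frame list maintained by the OS
page_faults = 0

def access(page, valid_pages):
    global page_faults
    if page not in valid_pages:        # invalid reference
        raise MemoryError("invalid memory access")
    if page not in page_table:         # page fault trap
        page_faults += 1
        frame = free_frames.pop(0)     # identify a free frame
        page_table[page] = frame       # update the process's page table
    return page_table[page]            # then the access can be restarted

valid = {0, 1, 2}
access(0, valid)   # fault: page 0 -> frame 3
access(1, valid)   # fault: page 1 -> frame 7
access(0, valid)   # hit: page 0 already in memory
# page_faults == 2
```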
69
Q

page fault rate

A

between 0 and 1: 0 = no page faults; 1 = every reference causes a fault

70
Q

effective access time

A

(probability of the page being in memory × time to access memory) + (probability of the page not being in memory × page fault overhead)
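A worked example of the formula, with invented numbers (100 ns memory access, 8 ms fault overhead, fault probability 0.001):

```python
# Effective access time = (1 - p) * memory access time + p * fault overhead,
# where p is the probability that the page is NOT in memory (a page fault).
memory_access_ns = 100
fault_overhead_ns = 8_000_000   # 8 ms, dominated by disk I/O
p = 0.001                       # one reference in a thousand faults

eat_ns = (1 - p) * memory_access_ns + p * fault_overhead_ns
# (0.999 * 100) + (0.001 * 8_000_000) = 99.9 + 8000 = 8099.9 ns
```

Even a 0.1% fault rate inflates the effective access time roughly 80-fold here, which is why minimising page faults matters so much.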
71
Q

page fault overhead includes

A
  • context switching when relinquishing the CPU
  • time waiting in the paging device queue
  • time waiting in the ready queue
  • context switching when regaining the CPU

72
Q

no free frame

A
  • occurs when all memory frames available to a process are currently allocated
  • a page replacement algorithm decides which page in memory is not really in use, and swaps it out
  • the aim is to minimise the number of page faults

73
Q

basic page replacement

A
  • find the location of the desired page on disk
  • find a free frame: if there is a free frame, use it; else use a page replacement algorithm to select a victim page, write the victim page to disk, and update the process's page table and the free-frame list accordingly
  • read the desired page into the newly freed frame and update the page table and free-frame list
  • restart the user process

74
Q

how to evaluate replacement algorithms

A
  • run each algorithm on a particular string of memory references (a reference string)
  • compute the number of page faults per algorithm on that string
  • compare the numbers of page faults: aiming for the lowest possible number given the reference string

75
Q

FIFO page replacement algorithm

A
  • easy to implement as a FIFO queue
  • associated with each page is the time it was brought into memory
  • no need to track ageing: the victim is always the oldest page

76
Q

Belady's anomaly (suffered by FIFO)

A

for some page replacement algorithms, the page fault rate may increase as the number of allocated frames increases. usually, increasing memory improves response and throughput, but some patterns of page demands (reference strings) produce fewer page faults with fewer available frames.
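Belady's anomaly can be demonstrated with the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 under FIFO: 3 frames give 9 faults while 4 frames give 10. A small simulation (variable names invented):

```python
from collections import deque

# FIFO page replacement: count faults for a reference string and frame count.
def fifo_faults(refs, n_frames):
    frames = deque()                   # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:         # miss: page fault
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()       # evict the oldest page (FIFO victim)
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
faults_3 = fifo_faults(refs, 3)   # 9
faults_4 = fifo_faults(refs, 4)   # 10: more frames, MORE faults
```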
77
Q

optimal page replacement (OPT) algorithm

A

the page that is replaced is the one that will not be needed for the longest period of time → fewest page faults. this requires knowing the future, so it is impossible to implement (it serves as a benchmark).

78
Q

least recently used (LRU) algorithm

A
  • associate each page with the time of its last use; the victim page is the page used least recently
  • LRU is a stack algorithm (it can be implemented with a stack or with counters)
  • stack algorithms can never exhibit Belady's anomaly: the set of pages in memory with n frames is always a subset of the set of pages that would be in memory with n+1 frames
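LRU can be sketched by recording the time of each page's last use (essentially the counter implementation of the next card; names invented). On the same reference string as the FIFO example, more frames never produce more faults:

```python
# LRU page replacement: the victim is the page whose last use is oldest.
def lru_faults(refs, n_frames):
    last_used = {}                  # page -> time of last reference (counter)
    faults = 0
    for time, page in enumerate(refs):
        if page not in last_used:   # miss: page fault
            faults += 1
            if len(last_used) == n_frames:
                victim = min(last_used, key=last_used.get)  # least recently used
                del last_used[victim]
        last_used[page] = time      # record the reference time
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
faults_3 = lru_faults(refs, 3)   # 10
faults_4 = lru_faults(refs, 4)   # 8: no Belady's anomaly
```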
79
Q

counter implementation

A

each page table entry has a counter. whenever the page is referenced, the current time is recorded in the counter. when a page needs to be replaced, look at the counters to determine which page was least recently used.

80
Q

stack implementation

A

keep a stack of page numbers as a doubly linked list. no searching is required for replacement: the LRU page (at the bottom of the stack) is pointed to by the tail pointer.

81
Q

LRU approximation algorithms: additional reference bit

A
  • each page is associated with a bit, initially 0
  • when a page is referenced, its bit is set to 1
  • replace one of the pages whose bit is 0 (if any)
  • replace a random page when no bit is 0

82
Q

issue with the additional reference bit (LRU)

A

there is no way of knowing whether a page is LRU, MRU or somewhere in between

83
Q

LRU approximation algorithms: second chance algorithm

A
  • a variation of the FIFO queue, visualised as a circular queue
  • when a page enters main memory, it joins the tail of the queue and its reference bit is set to 1
  • when the queue is full and a victim needs selecting: if the page's reference bit is 0, it is the victim; else (bit is 1) set the bit to 0 and move on to the next page in the queue
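A minimal sketch of the circular-queue sweep described above (the data structures and page names are invented for illustration):

```python
# Second-chance victim selection: sweep the circular queue from the current
# hand position; a page with reference bit 1 has its bit cleared and is
# skipped, and the first page found with bit 0 becomes the victim.
def second_chance_victim(pages, ref_bits, hand):
    while True:
        page = pages[hand]
        if ref_bits[page] == 0:
            return page, hand           # victim found
        ref_bits[page] = 0              # give the page a second chance
        hand = (hand + 1) % len(pages)  # move on round the circular queue

pages = ["A", "B", "C"]
ref_bits = {"A": 1, "B": 0, "C": 1}
victim, hand = second_chance_victim(pages, ref_bits, 0)
# victim == "B": A's bit is cleared first, then B (bit 0) is chosen
```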
84
Q

different ways of allocating frames

A
  • global replacement
  • local replacement
  • fixed allocation: equal allocation / proportional allocation
  • priority allocation

85
Q

allocating frames: global replacement + local replacement

A
  • global: select a replacement frame from the set of all frames: one process can take a frame from another
  • local: select a replacement frame from the set of frames allocated to the process itself

86
Q

allocating frames: fixed allocation

A
  • equal allocation
  • proportional allocation: allocate according to the size of the process
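Proportional allocation can be sketched arithmetically (the process sizes and total frame count are invented for illustration):

```python
# Proportional allocation: a process of size s_i out of total size S gets
# roughly (s_i / S) * total_frames frames.
total_frames = 64
sizes = {"P1": 10, "P2": 127}     # process sizes, e.g. in pages
total_size = sum(sizes.values())  # 137

alloc = {p: s * total_frames // total_size for p, s in sizes.items()}
# P1 -> 10*64//137 = 4 frames, P2 -> 127*64//137 = 59 frames
```

Equal allocation would instead give each process `total_frames // len(sizes)` frames regardless of size.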
87
Q

allocating frames: priority allocation

A

when a page fault occurs, select a replacement frame from a process with a lower priority number

88
Q

thrashing

A

when a process spends longer paging than doing actual work. the page-fault rate is high if a process does not have enough frames.

89
Q

pre-paging

A

bring into memory at one time all the pages that will be needed: for systems using the working-set model

  • keep with each process the list of pages in its working set
  • if a process is suspended, remember the working set for that process
  • when the process is resumed: bring the entire working set back into memory before restarting the process

90
Q

point of pre-paging

A

to prevent thrashing, but the cost of pre-paging must be less than the cost of servicing the page faults it avoids

91
Q

page-fault frequency scheme

A

an approach to preventing thrashing: establish an 'acceptable' page-fault rate.

  • rate too low? the process loses frames
  • rate too high? the process gains frames