OS unit 3 and 4 Flashcards

(20 cards)

1
Q

Define concurrency in the context of process/thread synchronization.
What is mutual exclusion in operating systems?

A

Concurrency: multiple processes/threads executing simultaneously (or interleaved on one CPU).
Mutual exclusion (mutex): prevents multiple processes from being inside the critical section at the same time.
Implemented using semaphores and locks (see the sketch below).
Characteristics:
1 Only one process in the CS at a time
2 Other processes wait
3 Used to prevent deadlocks and race conditions
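
A minimal sketch of mutual exclusion with a lock, assuming Python's threading module; the shared counter and thread count are made-up examples.

```python
import threading

counter = 0                 # shared data
lock = threading.Lock()     # mutex guarding the critical section

def worker():
    global counter
    for _ in range(100_000):
        with lock:          # entry section: only one thread holds the lock
            counter += 1    # critical section
        # exit section: the lock is released automatically by the with-block

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; without it, updates could be lost
```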

2
Q

Conditions for deadlock. Break down the strategies used for deadlock prevention into their core principles.

A

The four necessary conditions, with the prevention strategy that negates each in parentheses:
1 Mutual Exclusion (make resources shareable, or provide multiple instances of a resource)
2 Hold and Wait (require a process to request all resources before execution)
3 No Preemption (allow preemption of held resources)
4 Circular Wait (impose a total ordering of resource types and request them only in increasing enumeration order - see the sketch below)
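
A minimal sketch of breaking circular wait by acquiring locks in a fixed global order, assuming Python's threading module; the two resources and thread bodies are hypothetical.

```python
import threading

# Hypothetical resources with a fixed global order: lock_a before lock_b.
lock_a = threading.Lock()   # resource type 1
lock_b = threading.Lock()   # resource type 2

def ordered_acquire(first, second, name):
    # Every thread requests resources in the same enumerated order,
    # so a cycle in the wait-for graph cannot form.
    with first:
        with second:
            print(f"{name} holds both resources")

t1 = threading.Thread(target=ordered_acquire, args=(lock_a, lock_b, "T1"))
t2 = threading.Thread(target=ordered_acquire, args=(lock_a, lock_b, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()
```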

3
Q

What is the Ostrich Algorithm in the context of deadlock?
List the necessary conditions for a deadlock to occur.

A

The Ostrich Algorithm means deliberately ignoring the possibility of deadlock - the system does not prevent, avoid, or detect it.
Used when: deadlocks are extremely rare and the cost of prevention, detection, or recovery outweighs the impact of the deadlock itself.

Typical use case: UNIX or Windows, where deadlocks occur very infrequently, and a system reboot or manual intervention can resolve them.

4
Q

Requirements for a solution to the critical section problem

A

Mutual Exclusion (only one process in the CS at a time)
Progress (processes outside the CS cannot block others from entering; the choice of who enters next cannot be postponed indefinitely)
Bounded Waiting (no process waits indefinitely to enter its CS)
Peterson's algorithm (sketched below) satisfies all three for two processes.
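
A minimal sketch of Peterson's algorithm for two threads, written in Python purely for illustration; the loop count is arbitrary, and a production version needs hardware atomic instructions or memory barriers, which CPython's GIL happens to mask here.

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # which thread yields when both want to enter
counter = 0             # shared data protected by the critical section

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(100_000):
        # entry section
        flag[i] = True
        turn = other                       # give the other thread priority
        while flag[other] and turn == other:
            pass                           # bounded busy-wait
        # critical section: mutual exclusion holds here
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 when mutual exclusion holds
```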

5
Q

H/W | S/W mutex

A

H/W
Adv: faster execution, low overhead, reliability
Disadv: limited portability, hardware dependence

S/W
Adv: portability, flexibility, easier to implement
Disadv: higher overhead, more prone to errors

6
Q

Semaphore | Monitor (7)

A

1 controls access to shared resources | high-level synchronization method for shared resources
2 low-level synchronization primitive | high-level synchronization primitive
3 complex to implement | easier due to encapsulation
4 manual resource management | automatic resource management
5 does not provide mutual exclusion inherently | provides it through a lock mechanism
6 can be implemented in both hardware and software | implemented using locks and condition variables
7 uses explicit signal and wait | uses implicit signalling through condition variables

7
Q

mutex | semaphore (8)

A

1 allows only one process to access the resource | allows access to a pool of shared resources
2 only one process can hold the lock | a counting semaphore lets multiple processes proceed
3 owned by the process that locks it | no ownership
4 binary | can have a counting value
5 protects a critical section | manages shared resources
6 operations: lock and unlock | operations: wait and signal
7 can lead to starvation | less prone to starvation due to FIFO queueing
8 simple to implement | more complex to implement
(see the counting-semaphore sketch below)
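
A minimal sketch contrasting a binary mutex with a counting semaphore, assuming Python's threading module; the pool size of 3 and the five workers are arbitrary examples.

```python
import threading
import time

mutex = threading.Lock()          # binary: one holder at a time
pool = threading.Semaphore(3)     # counting: up to 3 holders at a time

def use_resource(i):
    with pool:                    # blocks once 3 threads are inside
        print(f"worker {i} using one of 3 resource instances")
        time.sleep(0.1)

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```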

8
Q

What is memory management in an operating system?

A

Memory management in an operating system (OS) refers to the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize overall system performance.
Key tasks:
Allocation and deallocation
Tracking
Protection
Paging and segmentation

9
Q

Fixed partitioning | Dynamic partitioning (adv disadv)

A

Fixed -
adv: simplicity, fast allocation, low overhead
disadv: internal fragmentation, limited flexibility, static nature

Dynamic -
adv: efficient, better utilization, flexible allocation
disadv: external fragmentation, complex management, compaction overhead

10
Q

Buddy system

A

A memory allocation technique used in operating systems to manage memory efficiently and reduce fragmentation.
- Total memory is treated as a single block of size 2^U
- A minimum block size of 2^L is defined; blocks are repeatedly split in half into "buddies" to satisfy requests, and freed buddies are merged back together (see the sketch below)
adv: reduced fragmentation, fast merging, efficient splitting
disadv: complexity, internal fragmentation
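
A minimal sketch of the buddy system's round-up and split logic, assuming a hypothetical allocator with U = 10 (1024-unit pool) and L = 4 (16-unit minimum block); merging of freed buddies is omitted for brevity.

```python
# Buddy-system sketch: round requests up to a power of two and split larger
# free blocks in half until the target size is reached.
U, L = 10, 4                        # hypothetical: 2^U total memory, 2^L minimum block
free_lists = {k: [] for k in range(L, U + 1)}
free_lists[U].append(0)             # one free block of size 2^U at offset 0

def next_pow2_exp(n):
    k = L
    while (1 << k) < n:
        k += 1
    return k

def allocate(n):
    k = next_pow2_exp(n)            # round the request up to 2^k
    j = k
    while j <= U and not free_lists[j]:
        j += 1                      # find the smallest free block >= 2^k
    if j > U:
        return None                 # out of memory
    offset = free_lists[j].pop()
    while j > k:                    # split repeatedly, keeping one buddy free
        j -= 1
        free_lists[j].append(offset + (1 << j))
    return offset                   # start address of the allocated 2^k block

print(allocate(100))   # 100 -> 128-unit block at offset 0
print(allocate(20))    # 20  -> 32-unit block at offset 128
```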

11
Q

Why is relocation in memory management necessary?

A

1 Multiprogramming: multiple processes coexist in memory
2 Dynamic memory allocation: programs can be moved during execution
3 Efficient memory use: the OS can relocate processes to compact free space
Types:
Static (compile/load time), Dynamic (run time)

12
Q

How would you implement a Least Recently Used (LRU) page replacement algorithm in a system with virtual memory?

A

LRU - page replacement algorithm.
Keep track of the least recently used pages in memory.

DS = HashMap (maps page numbers to nodes in the list) + Doubly Linked List (maintains recency order of the pages)

Process:
If page is in memory (hit):
- Move its node to the head (most recently used)

If page is not in memory (fault):
- If memory is full:
-- Remove the page at the tail (least recently used) and delete it from the map
- Insert the new page at the head
- Add it to the map
TC = O(1) per access
SC = O(N)
(see the sketch below)
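
A minimal sketch of an LRU page-replacement cache, assuming Python's collections.OrderedDict (a hash map backed by a doubly linked list, matching the structure above); the frame count of 3 and the reference string are arbitrary examples.

```python
from collections import OrderedDict

class LRUPageCache:
    """Tracks which pages occupy the available frames, evicting the LRU page."""
    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()   # page number -> resident flag (order = recency)

    def access(self, page):
        if page in self.pages:               # hit: move to most-recently-used end
            self.pages.move_to_end(page)
            return False                     # no page fault
        if len(self.pages) >= self.frames:   # fault with full memory: evict LRU
            self.pages.popitem(last=False)
        self.pages[page] = True              # load the new page as MRU
        return True                          # page fault occurred

cache = LRUPageCache(frames=3)
reference_string = [1, 2, 3, 1, 4, 2]
faults = sum(cache.access(p) for p in reference_string)
print(faults)  # 5 page faults for this reference string with 3 frames
```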

13
Q

Virtual memory

A

Memory management technique used by the OS to create the illusion of a larger main memory (RAM).
Key Components:
1 Abstraction (each process gets its own address space)
2 Paging
3 Page table (maps virtual pages to physical frames - see the translation sketch below)
4 Demand paging (pages loaded into RAM only when needed)
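
A minimal worked sketch of virtual-to-physical address translation through a page table, assuming a hypothetical 4 KB page size and a tiny hand-made page table.

```python
PAGE_SIZE = 4096                      # hypothetical 4 KB pages

# Hypothetical page table: virtual page number -> physical frame number
# (None means the page is not resident, so accessing it causes a page fault).
page_table = {0: 5, 1: 9, 2: None, 3: 2}

def translate(virtual_addr):
    vpn = virtual_addr // PAGE_SIZE    # virtual page number
    offset = virtual_addr % PAGE_SIZE  # offset within the page
    frame = page_table.get(vpn)
    if frame is None:
        raise RuntimeError(f"page fault on page {vpn}")
    return frame * PAGE_SIZE + offset  # physical address

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 9 -> 0x9234
```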

14
Q

If a process requires more memory than its allocated partition in dynamic partitioning, how would the system handle it?

A

1 Swapping
2 Memory Compaction
3 Paging/Segmentation
4 Dynamic Expansion
5 Process Termination or Error

15
Q

Explain the concept of page replacement in virtual memory.
What happens during a page fault in demand paging?

A

Page replacement - replacing a page in main memory when the demanded page is not resident and RAM is full:
- no free frame available
- page not present in main memory
- methods: FIFO, LRU, Optimal
Demand Paging - a page is not loaded until it is demanded; until then it resides in secondary storage (swap space)

Page Fault - an interrupt raised when a process accesses a page that is not currently in main memory
1 Page fault interrupt/trap to the OS
2 OS checks the page table and confirms the page is valid but not resident
3 Locate on disk: fetch the page from secondary memory (swap space)
4 Find a free frame (or run page replacement if none is free)
5 Update the page table
6 Resume execution of the faulting instruction

16
Q

How would you assess the efficiency of different page replacement algorithms (FIFO, LRU, Optimal)?

A

FIFO
- description: first in, first out (evict the oldest loaded page)
- simple and easy to implement
- drawback: may cause more page faults (can suffer Belady's anomaly)
LRU
- description: evict the least recently used page
- assumes the recent past predicts the near future
- drawback: more efficient than FIFO but complex to implement
Optimal
- description: evict the page whose next use is farthest in the future
- theoretically ideal (fewest possible faults); used as a benchmark
- drawback: impossible to implement in practice (requires future knowledge)

To assess efficiency (see the simulation sketch below):
1 Simulate the algorithms on the same reference string with the same number of frames and count page faults
2 Analyze behaviour on different access patterns (sequential, looping, random)
3 Compare implementation complexity and runtime overhead
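
A minimal sketch of step 1, assuming Python; it counts page faults for FIFO and LRU on the same made-up reference string and frame count.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) >= frames:          # evict the oldest-loaded page
                memory.discard(queue.popleft())
            memory.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)              # refresh recency on a hit
        else:
            faults += 1
            if len(memory) >= frames:          # evict the least recently used
                memory.popitem(last=False)
            memory[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]    # made-up reference string
print("FIFO:", fifo_faults(refs, frames=3))    # 9 faults
print("LRU :", lru_faults(refs, frames=3))     # 10 faults
```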

17
Q

Analyze how thrashing can occur in a system with virtual memory and how it can be mitigated.

A

Thrashing is a performance degradation condition in a system using virtual memory, where the CPU spends more time swapping pages in and out of memory than executing actual instructions. It occurs when processes do not have enough frames to hold their working sets, so almost every memory reference causes a page fault.
Mitigation:
1 Page Fault Frequency (PFF) control (see the sketch below)
2 Reduce the degree of multiprogramming
3 Use better page replacement algorithms
4 Memory allocation adjustments (e.g. working-set based allocation)
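
A minimal sketch of the Page Fault Frequency control idea, assuming hypothetical upper/lower fault-rate thresholds and a simple per-process frame table; a real OS would do this inside the kernel's memory manager.

```python
# Hypothetical PFF control: keep each process's fault rate between two
# thresholds by growing or shrinking its frame allocation.
UPPER, LOWER = 0.10, 0.02          # made-up fault-rate thresholds (faults per reference)

frames = {"A": 4, "B": 4, "C": 4}  # hypothetical frame allocation per process

def adjust(process, fault_rate, free_frames):
    if fault_rate > UPPER:
        if free_frames > 0:
            frames[process] += 1   # thrashing risk: give it another frame
            return free_frames - 1
        # no free frames anywhere: reduce multiprogramming (swap a process out)
        victim = min(frames, key=frames.get)
        reclaimed = frames.pop(victim)
        print(f"suspending {victim}, reclaiming {reclaimed} frames")
        return free_frames + reclaimed
    if fault_rate < LOWER and frames[process] > 1:
        frames[process] -= 1       # faulting rarely: reclaim a frame
        return free_frames + 1
    return free_frames

free = adjust("A", fault_rate=0.15, free_frames=2)   # A gets one more frame
print(frames, "free:", free)
```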

18
Q

How does the buddy system handle a request that is not a power of 2?

A

Round the request up to the nearest power of 2 (at least the minimum block size)
Find a free block of that size, or split a larger block into buddies
Allocate the block; the unused rounded-up portion is internal fragmentation

19
Q

How would you evaluate how well dynamic partitioning handles memory fragmentation issues?

A

Memory utilization
Fragmentation metrics
Simulated load testing
Failed allocations despite free memory (a symptom of external fragmentation)

20
Q

Paging | Segmentation

A

Fixed-size blocks | Variable-size blocks
Internal fragmentation | External fragmentation
Address space broken into fixed-size units called pages | broken into variable-size units called segments
Page table | Segment table
Faster memory access | Slower memory access
Does not allow sharing of procedures | Allows sharing of procedures
Paging address space is one-dimensional | Many independent address spaces
Hardware decides the page size | User decides the segment size