Anki Import Flashcards

(599 cards)

1
Q

4 components of a computer system

A

Hardware, operating system, application programs, and users.
2
Q

application program

A

A program designed for end-user execution, e.g., word processors, spreadsheets, compilers, web browsers

3
Q

operating system

A

software that manages a computer’s hardware, helps applications work, and lets users interact with the computer

4
Q

ease of use

A

the amount of difficulty and complexity in some aspect of computing

5
Q

resource utilization

A

The percentage or amount of hardware or software resources currently being used

6
Q

embedded computer

A

A computer inside a larger system that does specific tasks with little or no user interface. 🛠️ Examples: microwaves or car engines

7
Q

resource allocator

A

a role of the operating system: it decides how hardware and software resources are allocated among competing programs and users

8
Q

control program

A
1. Protects programs – prevents crashes and improper use 2. Manages hardware – especially controls I/O devices
9
Q

Moore’s law

A

law predicting that the number of transistors on an integrated circuit would double every 18 months

10
Q

kernel

A

The kernel is the core component of an operating system that -controls the hardware -manages processes -handles permissions and security 🧠 It operates in kernel mode with full access to hardware.

11
Q

system program

A

-runs in user space, not the kernel -handles smaller, more specific OS-related tasks. Examples: device drivers, compilers, shells

12
Q

bus

A

Provides access between components (such as the CPU and I/O devices) and memory. It allows multiple devices to send and receive data over the same wires.

13
Q

device driver

A

An OS component that translates between hardware and software, managing I/O and providing standard access to devices.

14
Q

interrupt

A

A hardware mechanism that enables a device to notify the CPU that it needs attention

15
Q

interrupt handler

A

Receives interrupt signals, prioritizes them by importance, and queues them for processing

16
Q

interrupt vector

A

table of memory addresses that point to interrupt handlers 📍 It is stored in kernel space

17
Q

bit

A

1 or 0

18
Q

byte

A

8 bits

19
Q

interrupt-request line

A

a wire into the CPU that the processor checks after each instruction to see if an interrupt has occurred

20
Q

interrupt-handler routine

A

OS routine triggered by an interrupt signal. It saves the current state, identifies the interrupt, and calls the appropriate service to handle it.

21
Q

4 steps of an interrupt

A

1. Device controller raises an interrupt on the interrupt-request line. 2. CPU catches the interrupt and 3. dispatches it to the interrupt handler. 4. The handler clears the interrupt by servicing the device.

22
Q

nonmaskable interrupt

A

An interrupt that cannot be ignored, delayed, or disabled. Used for critical events like hardware failures or emergency shutdowns.

23
Q

maskable interrupt

A

a type of interrupt that can be delayed or disabled (masked) by the CPU using a control bit. 🛑 Used for non-critical events where the CPU can choose to ignore the signal temporarily.

24
Q

interrupt chaining

A

A technique where each entry in the interrupt vector points to a list of handlers, allowing multiple devices to share one interrupt.

25
two types of nonvolatile storage
mechanical – e.g., HDDs, optical discs; electrical – e.g., flash memory, SSDs (NVM)
26
3 volatile storage types in order of speed and capacity
registers → cache → main memory (fastest and smallest to slowest and largest)
27
4 nonvolatile storage types in order of speed and capacity
nonvolatile memory (e.g., SSD) → hard-disk drives → optical discs → magnetic tapes (fastest and smallest to slowest and largest)
28
volatile storage is also referred to as
memory
29
main memory
RAM (random access memory)- Fast, volatile, rewritable memory where computers run most programs during execution.
30
DRAM
dynamic random-access memory- A type of main memory made with semiconductors, where each bit is stored in a separate capacitor. It is cheap and dense, but must be refreshed frequently.
31
bootstrap program
first program to run on computer power on, loads the OS
32
firmware
Software stored in non-volatile memory (like ROM or EEPROM), used for booting and controlling low-level hardware.
33
Order of bytes
KB → MB → GB → TB → PB (Kilobyte, Megabyte, Gigabyte, Terabyte, Petabyte)
34
von Neumann architecture
The structure of most computers, in which both program instructions and data are stored in the same main memory.
35
two modes of operation
user mode and kernel mode
36
kernel mode
Also called supervisor mode, system mode, or privileged mode. A CPU mode where all instructions (including hardware and I/O) are allowed. The OS kernel runs in this mode.
37
mode bit
single binary bit used in a computer's hardware to distinguish between kernel and user mode
38
privileged instructions
CPU instructions that can only be executed in kernel mode.
39
4 examples of privileged instructions
Switching to kernel mode, I/O control, timer management, interrupt management. Privileged instructions can only be executed in kernel mode to protect system stability and security.
40
protection rings- Intel's ring system? ARM v8?
A model for privilege separation in operating systems, where lower ring numbers = more access. Intel: 4 rings (Ring 0 = kernel, Ring 3 = user; Rings 1–2 rarely used) ARM v8: Uses 7 modes for finer control
41
Virtual Machine Manager (VMM)
aka hypervisor Software that manages virtual machines and must access both user mode and kernel mode to simulate full system behavior.
42
timer
A hardware component that triggers an interrupt after a set time. Used to prevent any one process from hogging the CPU.
43
batch operating system
groups similar jobs together and executes them in batches to maximize CPU usage. 💻 It operates without user interaction during execution.
44
multiprogramming
An OS technique that loads multiple programs into memory to maximize CPU use by switching to another when one is waiting (e.g., for I/O). 🧠 Key Distinction: Multiprogramming ≠ multitasking — it's about keeping the CPU busy, not true simultaneous interaction.
45
CPU scheduling
The OS method for choosing which process runs on the CPU and how long it gets to run, to ensure efficient multitasking.
46
time-sharing system
An OS that lets multiple users (like multiple terminals) interact with the system at the same time by rapidly switching between users’ processes using CPU scheduling. 🧠 Hook: Time-sharing = illusion of simultaneity — everyone thinks they’re using the computer alone.
47
multitasking
An OS's ability to run multiple processes seemingly at the same time. Achieved by rapidly switching between them using CPU scheduling, creating the illusion of simultaneous execution.
48
hard real-time system
A system where missing a deadline is unacceptable. Used in critical applications like medical devices or flight control systems. 🧠 Hook: If it's late, people can die.
49
soft real-time system
A system where meeting deadlines is preferred, but occasional delays are acceptable. Used in non-critical applications like video streaming or online games. 🧠 Hook: If it’s late, it’s annoying — not deadly.
50
multiprocessor system AKA parallel system
A computer system with two or more CPUs working together to process tasks simultaneously, improving performance and reliability.
51
virtual memory system (VMS)
memory management technique that uses disk space like RAM so programs can run as if there’s more memory.
52
distributed operating system
An OS that manages multiple networked computers to act like one system. 🌐 Allows resource sharing, collaborative processing, and faster performance. (For example- Google File System (GFS) — runs across many servers but appears as one unified system to users.)
53
loosely coupled systems
Describes a kernel design in which the kernel is composed of components that have specific and limited functions.
54
fault tolerance
The ability of a system to handle the failure of a processor by redistributing its tasks to other processors, ensuring continuous operation. 🧠 Hook: One processor crashes — the others step in and carry on.
55
I/O subsystem
A part of the OS that hides hardware details and manages I/O operations. 🧠 Only the device driver understands the specific device; the rest of the OS stays hardware-agnostic.
56
word
A fixed-size group of bits that a CPU processes as a unit. Its size (e.g., 32-bit, 64-bit) depends on the computer’s architecture.
57
EEPROM
Electronically erasable programmable read-only memory- A non-volatile memory that stores the bootstrap program and static data like hardware settings or serial numbers.
58
user mode
A CPU mode for running user applications, where privileged instructions are blocked. Used to protect system resources and maintain security.
59
real-time system
An OS that ensures tasks are completed within strict time limits.
60
command interpreter
interprets user commands and causes actions based on them
61
shells
a command interpreter on systems with multiple command interpreters
62
springboard
iOS touch-screen interface
63
power users
Users with unusually deep knowledge of a system.
64
shell script
A file containing a sequence of shell commands that are executed together, typically used to automate tasks in a specific shell environment.
65
system call
A software-triggered interrupt to request a service from the kernel
66
core
a CPU component that executes instructions
67
multiprocessor systems
two or more processors, each with a single-core CPU
68
application programming interface (API)
A set of functions and protocols that allow applications to interact with the operating system. They invoke system calls on behalf of the programmer.
69
Runtime Environment (RTE)
A complete set of software that supports the execution of programs, including the compiler, libraries, loaders, and system-call handling for the programming language being used.
70
System-Call Interface
An interface that serves as the link to system calls made available by the operating system and that is called by processes to invoke system calls.
71
lock
A mechanism that restricts access by processes or subroutines to ensure integrity of shared data
72
message-passing model
An interprocess communication model where processes exchange messages
73
Shared-memory model
An interprocess communication model where processes share a memory region to exchange data.
74
SMP
Symmetric multiprocessing. Each CPU is a peer and can perform all tasks, including OS functions and user processes.
75
multicore
A CPU architecture with multiple cores on one chip for parallel execution
76
clustered system
A system that combines multiple CPUs across two or more connected computers (nodes) over a local area network to work together as one system.
77
high-availability
a service that will continue even if one or more systems in the cluster fail
78
graceful degradation
The ability of a system to continue providing service proportional to the amount of hardware still functioning.
79
fault-tolerant system
A system that can continue operating even if a single component fails.
80
asymmetric clustering
A clustering setup where one machine runs the applications while another machine (in hot-standby mode) monitors it and takes over if it fails.
81
hot-standby mode
A standby computer that monitors an active server and takes over operations if the active server fails.
82
symmetric clustering
A clustering setup where two or more machines run applications and monitor each other, sharing the workload.
83
high-performance computing
A computing facility designed for use with a large number of resources to be used by software designed for parallel operation.
84
parallelization
The process of dividing a program into separate components that run in parallel on individual cores in a computer or computers in a cluster.
85
Distributed Lock Manager (DLM)
A function in clustered systems that controls access to shared resources to prevent conflicts.
86
storage-area network (SAN)
A local-area storage network allowing multiple computers to connect to one or more storage devices.
87
Processor
A physical chip that contains one or more CPUs.
88
cpu
The hardware that executes instructions
89
system service
aka system utilities A set of applications or utilities included with or added to the operating system that provide additional services beyond what the kernel offers (e.g., file sharing, printing, or security features).
90
registry
A file, set of files, or system service used to store and retrieve configuration information.
91
service
A program that runs in the background on a computer or server to do a specific job for other programs or users.
92
subsystem
A subset of an operating system responsible for a specific function
93
monolithic structure
a kernel without structure (such as layers or modules)
94
tightly coupled systems
Systems with two or more processors that operate in close communication, sharing the computer bus and often memory, clock, and peripheral devices. Changes in one part can affect the entire system due to their interdependence.
95
layered approach
A kernel architecture that divides the operating system into multiple levels. Layer 0: hardware; top layer (N): user interface. Each layer builds on the one below it, promoting modularity and simplified debugging.
96
Mach
An operating system developed at Carnegie Mellon University that uses a microkernel architecture and supports threading.
97
Microkernel
An OS structure that removes all nonessential components from the kernel, placing them in user-space programs and leaving a small, minimal core for essential services
98
process
a program loaded into memory and executing
99
job
a set of commands or processes executed by a batch system
100
task
A process, a thread activity, or, generally, a unit of computation on a computer
101
process state "New"
the process is being created
102
process state "Running"
instructions are being executed
103
process state "Waiting"
The process is waiting for some event to occur
104
process state "Ready"
The process is waiting to be assigned to a processor
105
process state "Terminated"
The process has finished execution
106
What causes a process to move from Running to Ready?
The process was using the CPU, but got interrupted (e.g., time slice expired in round-robin scheduling).
107
What causes a process to move from Running to Waiting?
The process requests I/O or waits for an event, so it voluntarily leaves the CPU and moves to the waiting state until the event completes.
108
program counter
A CPU register that holds the memory address of the next instruction to be fetched and executed.
109
state
The condition of a process, including its current activity and associated memory or disk contents.
110
Process control block (PCB) contains?
Mnemonic: "please pick ripe strawberries, make an ice cream" – Process state (e.g., new, ready, running, waiting, halted), Program counter (next instruction address), CPU Registers (accumulators, stack/index registers, etc.), CPU-Scheduling information (priority, queues, etc.), Memory-management information (base/limit registers, page/segment tables), Accounting information (CPU time used, job ID, limits), I/O status information (list of I/O devices and open files)
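The fields above can be sketched as a data structure. A hypothetical Python sketch; the field names are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Hypothetical sketch of a process control block (illustrative fields)."""
    pid: int
    state: str = "new"              # process state: new, ready, running, waiting, terminated
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0               # CPU-scheduling information
    mem_base: int = 0               # memory-management information
    mem_limit: int = 0              #   (e.g., base/limit registers)
    cpu_time_used: float = 0.0      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

# The OS keeps one PCB per process:
pcb = PCB(pid=42, state="ready", priority=3)
```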
111
thread
An execution unit within a process. Single-threaded processes run one task at a time, while multithreaded processes can run multiple tasks simultaneously using multiple threads.
112
resident routines
Resident routines are parts of the operating system that remain in main memory at all times while the system is running. They provide essential services (like file handling, memory management, and device control) that support application programs as they execute.
113
transient routines
Transient routines are loaded into memory only when needed and then removed to free up space. They perform non-essential or occasional tasks (like formatting a disk), allowing the OS to conserve memory.
114
parts of a PCB
Process state, program counter, CPU registers, CPU-scheduling information, memory-management information, accounting information, and I/O status information.
115
degree of multiprogramming
the number of processes in memory
116
process scheduler
A scheduler that selects an available process for execution on a CPU
117
I/O-bound process
A process that spends more of its time doing I/O than performing computations.
118
CPU-bound process
A process that spends more time executing on CPU than it does performing I/O.
119
ready queue
The set of processes ready and waiting to execute.
120
wait queue
a queue holding processes waiting for an event to occur before they need to be put on CPU
121
dispatched
Selected by the process scheduler to be executed next.
122
CPU scheduler
Kernel routine that selects a thread from the threads that are ready to execute and allocates a core to that thread.
123
context
The state of a process's execution, including the program counter, CPU registers, and memory context (such as the stack and heap).
124
save state
The act of copying a process’s context (program counter, registers, etc.) to save its current state, allowing the CPU to pause its execution and switch to another process.
125
state restore
The process of loading a saved context (e.g., registers, program counter) from memory back into the CPU registers, allowing the OS to resume execution of a previously paused process.
126
context switch
The act of saving the state of one process and restoring the state of another. Performed by the dispatcher so the CPU can switch between processes.
127
tree
A hierarchy of parent and child processes where each process can spawn others, forming a branching structure.
128
process identifier (pid)
A unique number assigned to each process by the OS to identify and manage it.
129
systemd
The first process started at boot in Linux (pid = 1), responsible for starting all other processes and services. A modern replacement for init that provides more features and flexibility as the system's initial process.
130
init
The first process started at boot in UNIX (pid = 1), responsible for starting all other processes and services.
131
ps -el
A UNIX/Linux command that lists all current processes with detailed information.
132
fork()
A system call that creates a new child process by duplicating the calling (parent) process.
133
exec()
replaces the current process's code with a new program — it does not create a new process and does not return if successful.
134
wait()
A system call used by a parent process to pause execution until one of its child processes terminates.
135
exit()
A system call that terminates a process and returns control to its parent.
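Cards 132–135 form one pattern: fork a child, have it run and exit, and have the parent wait. A minimal POSIX-only Python sketch (`os.fork`/`os.waitpid` mirror the C calls; an `exec()` would go where the child runs):

```python
import os

def spawn_and_reap(code):
    """Fork a child that exits with `code`; the parent waits and reaps it."""
    pid = os.fork()                 # child starts as a duplicate of the parent
    if pid == 0:
        # Child: an exec() call would replace this image with a new program;
        # here the child simply terminates with an exit status.
        os._exit(code)
    # Parent: waiting reaps the child's status, so no zombie is left behind.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

If the parent skipped the `waitpid()`, the terminated child would linger as a zombie (card 138) until reaped.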
136
pstree
A UNIX/Linux command that shows processes in a tree format based on parent-child relationships.
137
CreateProcess()
(Windows) Creates a new child process and immediately loads a specified program into its memory space. Requires 10+ parameters.
138
zombie
A process that has terminated but whose parent has not yet called wait() to collect its state and accounting information.
139
orphan
A process whose parent has terminated before it could call wait() to collect the child’s status.
140
interprocess communication (IPC)
A mechanism that allows processes to communicate and exchange data
141
producer
A process that generates data to be consumed by another process, known as the consumer.
142
consumer
A consumer is a process that receives and uses data produced by another process, the producer.
143
unbounded buffer
A buffer with no practical size limit, allowing the producer to continue producing regardless of the consumer’s speed.
144
bounded buffer problem
The producer wants to produce, but the buffer is full; the consumer wants to consume, but the buffer is empty.
145
blocking
aka synchronous. A communication mode where the sender waits until the message is received and the receiver waits until a message is available.
146
nonblocking
aka asynchronous. A communication mode where the sender sends the message and continues execution, and the receiver retrieves the message if one is available or gets null.
147
rendezvous
A synchronization point where a blocking send and a blocking receive meet.
148
direct communication
A communication mode where each process must explicitly name the recipient or sender.
149
single-threaded
A process has only one thread of control and executes on only one core at a time.
150
multithreaded
A process has multiple threads of control, allowing simultaneous execution points.
151
threads
Lightweight units of process execution that share the same memory space but run independently.
152
register set
collection of registers used to store data and instructions currently being processed by the CPU
153
user thread
A thread that operates in user mode, managed by user-level thread libraries without direct kernel involvement.
154
kernel threads
Supported and managed directly by the operating system in kernel mode.
155
many-to-one thread model
Maps many user-level threads to one kernel thread. Thread management is done in user space, but if one thread makes a blocking system call, the entire process blocks. It doesn’t support parallelism on multicore systems.
156
one-to-one thread model
Maps each user thread to one kernel thread. It supports greater concurrency and allows parallel execution on multiprocessors. However, creating many threads may burden system performance. (Windows and Linux use)
157
many-to-many thread model
Maps many user-level threads to a smaller or equal number of kernel threads. Supports parallelism and blocking system calls don’t block all threads. It's more flexible but harder to implement.
158
two-level thread model
Similar to many-to-many but allows some user-level threads to be bound to specific kernel threads, combining flexibility with control. Rarely used due to implementation complexity.
159
user threads
Managed above the kernel in user space, without direct support from the kernel.
160
6 things in a thread control block (TCB)
Thread ID, thread state, CPU information (1. program counter and 2. register contents), thread priority, pointer to the process that created this thread, pointer to threads created by this thread
161
data parallelism
Performing the same operation on divided data across cores
162
task parallelism
Running different operations (threads) on different cores
163
signal
A signal is a way for the OS to notify a process that an event has occurred, like an error or user interrupt, allowing the process to react or (usually) terminate.
164
default signal handler
the built-in handler for signals unless a process uses its own
165
synchronous signal
A signal triggered by the process itself due to its behavior (e.g., division by zero, invalid memory access).
166
asynchronous signal
A signal triggered by external events (e.g., Ctrl+C, timer expiration), not directly caused by the process’s current execution.
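Both kinds of signal end up in a handler. A small POSIX-only Python sketch in which a process installs its own handler (replacing the default) and then receives an asynchronous signal:

```python
import os
import signal

received = []

def handler(signum, frame):
    # User-defined handler: runs instead of the default signal handler.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the custom handler
os.kill(os.getpid(), signal.SIGUSR1)     # deliver an asynchronous signal
```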
167
pthread_kill()
in UNIX used to send a signal to a specific thread within the same process
168
asynchronous procedure call (APC)
In Windows, a function a thread sets to run when it gets a certain notice. Simpler than UNIX.
169
thread cancellation
ending a thread before it finishes
170
pthread_cancel()
a function that requests the cancellation of a specific thread in Pthreads
171
process synchronization
The coordination of processes to ensure they operate smoothly without interfering, especially when accessing shared resources.
172
race condition
A situation where the outcome of processes depends on their execution order, often causing inconsistent results.
173
mutex
mutual exclusion- a lock that ensures only one process can access a critical section or resource at a time, preventing race conditions
174
semaphores
synchronization tools used to control access to shared resources by multiple processes
175
starvation
When a process waits indefinitely for a resource because other processes keep getting access first.
176
circular wait
a condition where each process in a set is waiting for a resource held by another process in the same set, contributing to deadlock
177
What is fair resource allocation and what does it prevent?
a principle ensuring that all processes have fair access to resources, preventing starvation and ensuring balanced system performance
178
producer-consumer scenario
One process creates data and puts it into a shared space (buffer), while another takes it out. Synchronization makes sure the producer doesn’t add to a full buffer and the consumer doesn’t take from an empty one.
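A minimal Python sketch of the scenario: `queue.Queue(maxsize=...)` plays the bounded buffer, and its `put()`/`get()` already block on a full or empty buffer, which is exactly the synchronization the card describes:

```python
import queue
import threading

def run_producer_consumer(n_items, buffer_size):
    """One producer fills a bounded buffer; one consumer drains it."""
    buf = queue.Queue(maxsize=buffer_size)
    consumed = []

    def producer():
        for i in range(n_items):
            buf.put(i)                   # blocks if the buffer is full

    def consumer():
        for _ in range(n_items):
            consumed.append(buf.get())   # blocks if the buffer is empty

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return consumed
```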
179
concurrency
the ability of an operating system to execute multiple processes simultaneously, improving performance and responsiveness
180
What are the four sections of a typical process loop related to critical section handling?
Entry section – Code where the process requests permission to enter the critical section. Critical section – A section of code responsible for changing data that must only be executed by one thread or process at a time to avoid a race condition. Exit section – The section of code within a process that cleanly exits the critical section. Remainder section – Whatever code remains to be processed after the critical and exit sections.
181
What 3 conditions must a solution to the critical-section problem satisfy?
Mutual exclusion – Only one process can be in its critical section at a time. Progress – If no one is in the critical section, processes must be able to decide who goes next. Bounded waiting – There must be a limit to how long a process waits to enter its critical section.
182
Preemptive vs Nonpreemptive Kernel
Preemptive: can be interrupted in kernel mode, more responsive, but risk of race conditions. Nonpreemptive: a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU, safer, but less responsive.
183
What is the correct order of operations for protecting a critical section using mutex locks?
acquire() followed by release()
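A minimal Python sketch of that order: the `with lock:` block performs acquire() on entry and release() on exit, so the shared increment is a protected critical section:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:            # acquire() before entering the critical section
            counter += 1      # critical section: read-modify-write shared data
        # release() happens automatically when the with-block exits

def run(n_threads, times):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(times,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the lock, the threads' read-modify-write steps can interleave (the race condition of card 172) and updates can be lost.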
184
semaphore usage order
call wait() before entering a critical section, and signal() after exiting to ensure mutual exclusion
185
binary semaphore
a semaphore that can only be 0 or 1; behaves like a mutex lock
186
counting semaphore
a semaphore with a value ranging over an unrestricted domain
187
_____ can be used to prevent busy waiting when implementing a semaphore.
waiting queues 🧠 What is busy waiting? Busy waiting happens when a process (or thread) keeps using the CPU to check over and over whether a condition is true — instead of just going to sleep and letting other processes use the CPU.
188
wait() / signal()
wait() (P): Decreases the semaphore. If result is negative, the process is blocked (waits). signal() (V): Increases the semaphore. If there are waiting processes, one is unblocked.
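A minimal Python sketch of these semantics, in a simplified variant: the value stays non-negative, and waiters sleep on a condition variable (a waiting queue) instead of busy-waiting:

```python
import threading

class Semaphore:
    """Simplified counting semaphore: wait() blocks instead of busy-waiting."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                 # "P": block until available, then decrement
        with self._cond:
            while self._value == 0:
                self._cond.wait()   # sleep on a waiting queue, no busy waiting
            self._value -= 1

    def signal(self):               # "V": increment and wake one waiter
        with self._cond:
            self._value += 1
            self._cond.notify()
```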
189
first readers-writers problem
Readers have priority; writers may starve if readers keep coming.
190
second readers-writers problem
Writers have priority; once a writer is waiting, no new readers may start. Readers may starve.
191
What best describes the situation in the dining philosophers problem if all five philosophers attempt to eat at the same time?
Deadlock will occur
192
dining-philosophers solution
Represent each chopstick as a semaphore. Philosophers wait() to pick up chopsticks and signal() to put them down.
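A Python sketch of that solution, with one extra rule added here: each philosopher picks up the lower-numbered chopstick first. That ordering breaks the circular wait, so the deadlock from card 191 cannot occur:

```python
import threading

N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore each
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Always acquire the lower-numbered chopstick first: no circular wait.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        chopsticks[first].acquire()    # wait()
        chopsticks[second].acquire()   # wait()
        meals[i] += 1                  # eating
        chopsticks[second].release()   # signal()
        chopsticks[first].release()    # signal()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```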
193
monitor
A high-level synchronization tool that ensures only one thread can access a shared resource at a time—automatically handles mutual exclusion. Used to prevent race conditions in things like shared memory or file access.
194
Suspends a process inside a monitor until another process signals the same condition variable.
wait() (in monitors)
195
Wakes up one suspended process that was waiting on a condition variable inside the monitor.
signal() (in monitors)
196
synchronization
the coordination of processes used to ensure they operate without interfering with each other
197
CPU burst
when a process is actively using the CPU to execute instructions like calculations or memory operations
198
I/O burst
when a process is waiting for i/o operations to complete, like reading from a disk or waiting for user input
199
CPU-I/O burst cycle
the repeating pattern of a process alternating between using the CPU and waiting for I/O operations to complete
200
nonpreemptive scheduling
aka cooperative. Once a process has the CPU, it keeps it until it finishes or blocks. No other process can interrupt.
201
preemptive scheduling
The OS can take the CPU away from a running process and give it to another. Used in most modern OSes.
202
dispatcher
The kernel routine that gives control of a core to the thread selected by the scheduler. It does the actual switch.
203
Dispatch latency
The time it takes for the dispatcher to stop one thread and start another running.
204
CPU utilization and optimal utilization
the percentage of time the CPU is actively working; the optimal utilization is around 90 percent
205
throughput
The number of processes completed per unit of time.
206
turnaround time
Total time from process submission to completion, including waiting, CPU, and I/O time.
207
waiting time
The total time a process spends in the ready queue waiting for CPU
208
response time
Time from submitting a request to when the process starts producing output.
209
convoy effect
A scheduling issue where multiple threads are forced to wait for a single long-running thread to release the CPU, reducing overall CPU and device utilization.
210
shortest-job-first (SJF)
A non-preemptive scheduling algorithm that selects the thread with the shortest estimated next CPU burst time.
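A small Python sketch of nonpreemptive SJF, under the simplifying assumptions that all jobs arrive at time 0 and burst lengths are known exactly:

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF: run the shortest job first.
    Returns the waiting time of each job, in original order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock       # time this job spent in the ready queue
        clock += bursts[i]     # job runs to completion (no preemption)
    return waits
```

For bursts [6, 8, 7, 3], the 3-unit job runs first and the 8-unit job waits longest.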
211
exponential average
A method used to predict the next CPU burst using a weighted average of previous burst times, giving more importance to recent bursts. (for SJF)
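The formula is τ(n+1) = α·t(n) + (1−α)·τ(n), where t(n) is the measured burst and τ(n) the previous prediction. A small Python sketch (the initial guess τ0 and weight α are illustrative choices):

```python
def predict_next_burst(history, tau0=10.0, alpha=0.5):
    """Exponential average: tau = alpha * measured + (1 - alpha) * tau."""
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau   # recent bursts weigh more
    return tau
```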
212
shortest-remaining-time-first (SRTF)
A preemptive version of SJF that always runs the thread with the least time left to finish its CPU burst.
213
round-robin (RR)
A preemptive scheduling algorithm where each thread gets the CPU for a fixed time quantum before being rotated out.
214
time quantum/ time slice
The maximum amount of time a thread can hold the CPU before being preempted in round-robin scheduling.
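A small Python simulation of round-robin, under the simplifying assumptions that all processes arrive at time 0 and context-switch cost is ignored:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin; return each process's completion time."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    clock = 0
    ready = deque(range(len(bursts)))     # ready queue, FIFO order
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])  # run for at most one time quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)               # preempted: back of the ready queue
        else:
            done[i] = clock               # finished within its quantum
    return done
```

With bursts [3, 5] and quantum 2, the schedule is 0,1,0,1,1: process 0 finishes at time 5 and process 1 at time 8.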
215
priority-scheduling
A scheduling algorithm that assigns a priority to each thread and selects the one with the highest priority for execution.
216
aging
A technique to prevent starvation by gradually increasing a thread’s priority the longer it waits.
217
online scheduler
A scheduler that makes task decisions during the system’s execution.
218
offline scheduler
A scheduler that plans all task execution before the system starts running.
219
static scheduler
A scheduler with fixed priorities set before execution begins.
220
dynamic scheduler
A scheduler that changes task priorities during execution based on current system conditions.
221
feasibility tests (schedulability tests)
Analytical methods used to determine if all scheduled tasks can meet their deadlines.
222
multilevel queue
a scheduling algorithm dividing the ready queue into multiple distinct queues
223
foreground process
An interactive process that actively receives user input and typically gets higher CPU priority.
224
background process
A non-interactive or batch process running in the background with lower CPU priority.
225
multilevel feedback queue
A scheduling algorithm where processes can move between queues based on CPU usage, allowing better handling of long and short jobs.
226
symmetric multiprocessing (SMP)
A system where each processor is self-scheduling, handling both kernel and user threads, with potential contention for shared system resources.
227
asymmetric multiprocessing (AMP)
A system where one master processor controls all scheduling and I/O, while other processors run only user code. Used in older systems or embedded devices where simplicity matters.
228
chip multithreading (CMT)
A CPU design where each core can run multiple hardware threads, improving performance by keeping the processor busy and reducing idle time.
229
load balancing
A method for evenly distributing tasks across processors to prevent some from being overloaded while others are idle.
230
push migration
A load-balancing method where busy processors actively push threads to less busy processors.
231
pull migration
A load-balancing method where idle processors pull threads from overloaded ones.
232
processor affinity
The practice of keeping a thread on the same processor to improve cache performance and reduce migration overhead.
233
soft affinity
A preference to keep a thread on the same processor, but allows the OS to move it if needed.
234
hard affinity
The thread can only run on specific CPU cores. Used for performance tuning or real-time systems where predictability matters. E.g., locking a thread to Core 1 to reduce latency.
235
memory stall
A delay in execution when a thread must wait for data to be fetched from main memory rather than cache.
236
hardware threads
Multiple threads managed by a single core, used to keep execution units active during stalls.
237
deadlock
a condition where two or more processes or threads are unable to proceed because each is waiting for the other to release a required resource
238
4 conditions a deadlock must have
1. Mutual exclusion 2. Hold and wait 3. No preemption 4. Circular wait
239
3 main methods of deadlock handling
1. Prevention 2. Avoidance 3. Detection
240
system resource-allocation graph
directed graph used to describe deadlock situations precisely
241
Banker's Algorithm
Prevents deadlocks by only granting resource requests if it keeps the system in a safe state — meaning all processes can still finish without getting stuck.
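The safety check at the heart of the Banker's Algorithm can be sketched as follows (the resource matrices are a textbook-style example, not real system data):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: True iff some ordering lets every process finish."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion, then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Five processes, three resource types (illustrative textbook-style state)
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```

A request is granted only if pretending to grant it still leaves `is_safe` returning True.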
242
A wait-for graph scheme is not applicable to a resource allocation system with multiple instances of each resource type. T or F?
T
243
thread dump
In Java, a snapshot of the state of all threads in an application; a useful debugging tool for deadlocks.
244
What's the difference between deadlock detection and the banker's algorithm?
Deadlock detection only checks for stuck processes after a problem happens. Banker's algorithm prevents problems before they happen. In detection, if a process holds no resources, it’s marked as safe. Detection says: if any process can’t finish → it's in deadlock.
245
How can we prevent starvation in deadlock recovery?
Track how often a process is rolled back. Add the rollback count to the victim selection cost to avoid punishing the same process over and over.
246
What must happen after preempting a resource for deadlock recovery?
Rollback the process — either fully restart it or roll it back to a safe state so it doesn’t continue without the needed resource.
247
What’s the first step in using resource preemption for deadlock recovery?
Select a victim — Choose which process/resources to preempt based on cost (like how many resources are held or how much work is lost).
248
Omar is working on a database management system at Seawaibee that frequently experiences deadlocks due to resource allocation. Which technique should Omar use to resolve this issue?
Circular wait prevention
249
What does a process text section contain?
the executable code
250
What does a process data section contain?
global variables
251
What does a process heap section contain?
memory that is dynamically allocated during program run time; it stores temporary variables
252
What does a process stack section contain?
temporary data storage when invoking functions (such as function parameters, return addresses, and local variables)
253
long-term scheduler
It selects which process has to be brought into the ready queue.
254
medium-term scheduler
It selects which process to remove from memory by swapping.
255
short-term scheduler
It selects which process has to be executed next and allocates the CPU.
256
The only state transition that is initiated by the user process itself is:
block
257
independent process
A process whose task is not dependent on any other process. It does not share resources or data with others.
258
cooperating process
A process that depends on or interacts with other processes to complete a task. It shares resources like CPU, memory, and I/O.
259
base and limit registers and what they ensure
Base register: Holds the starting physical address of a process’s memory. Limit register: Holds the length (or size) of that memory. Together, they define the logical address space and ensure a process can only access its own memory — if a logical address exceeds the limit, a trap (error) occurs.
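A sketch of the base/limit check the hardware performs on every access (the addresses are illustrative, and a `MemoryError` stands in for the hardware trap):

```python
def translate(logical_addr, base, limit):
    """Base/limit relocation: trap if the address is out of bounds,
    otherwise relocate it by adding the base."""
    if logical_addr >= limit:
        raise MemoryError("trap: address out of bounds")  # hardware traps to the OS
    return base + logical_addr

print(translate(100, base=3000, limit=500))  # 3100
```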
260
address binding
The process of mapping a program's logical addresses to physical memory locations.
261
When does compile-time binding occur, and what's a downside?
During program compilation; downside is it requires known memory addresses and must be recompiled if they change.
262
When does load-time binding occur?
When the program is loaded into memory. You can reload the program if memory locations change, no recompilation needed.
263
When does execution-time binding occur? What does it require?
During program execution, allowing movement in memory while running. Requires hardware like a memory management unit (MMU). The exact physical memory address of instructions/data is not known ahead of time. Instead, it's calculated dynamically as the program runs.
264
memory management unit (MMU)
Hardware that handles the translation of virtual addresses to physical addresses. Works with the page table, and may use a TLB for speed. Also enforces memory protection (e.g., prevents one process from accessing another’s memory).
265
What is absolute code, and what is its limitation?
Absolute code contains fixed memory addresses set during compile-time. If the program’s memory location changes, it must be recompiled.
266
logical address
An address generated by the CPU during program execution. Also called a virtual address. It gets translated by the MMU into a physical address using mechanisms like page tables or segmentation.
267
physical address
The actual location in physical memory (RAM) where data or instructions are stored.
268
logical address space
range of all addresses generated by a program before translation into physical addresses
269
physical address space
The range of all physical memory addresses available in the system (RAM). Logical/virtual addresses are translated by the MMU into physical addresses. Only physical addresses are used to access actual data in hardware.
270
dynamic loading
A technique where a program loads routines (like functions or libraries) only when they are needed during execution. Saves memory and startup time.
271
static loading
A method where all routines are loaded into memory before the program starts running, whether they are used or not.
272
dynamically linked library (DLL)
AKA shared libraries. System libraries that are loaded once into memory and used by multiple programs, reducing memory usage and allowing easy updates.
273
contiguous memory allocation
A memory management scheme where each process is stored in a single continuous section of physical memory. Simple to implement but can lead to external fragmentation.
274
dynamic storage allocation
Assigning and freeing memory during a program’s runtime based on its needs. Context: used with the first-fit, best-fit, and worst-fit strategies.
275
memory protection
A mechanism that prevents processes from accessing memory they don’t own. Protects data integrity and system stability. Implemented using hardware (like the MMU, base and limit registers, or protection bits in page/segment tables).
276
variable partition and what it can lead to
A memory allocation method where each process gets a chunk of memory exactly the size it needs. Partitions vary in size → can lead to external fragmentation (scattered free space that’s too small to use).
277
dynamic storage-allocation problem
The challenge of satisfying a memory request of size n from a list of free memory holes. Requires choosing which hole to allocate, using strategies like: First-fit Best-fit Worst-fit Poor choices can lead to fragmentation and inefficient memory use.
278
first-fit allocation strategy
Allocate the first hole that is big enough for the process.
279
best-fit allocation strategy
Allocate the smallest hole that is still big enough for the process.
280
worst-fit allocation strategy
Allocate the largest available hole in memory
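The three strategies can be compared on the same free list (hole sizes and the request are made up for illustration):

```python
def pick_hole(holes, request, strategy):
    """Return the index of the chosen free hole, or None if no hole is big enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]    # lowest-indexed adequate hole
    if strategy == "best":
        return min(candidates)[1]  # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]  # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
# A 212 KB request lands in a different hole under each strategy:
print(pick_hole(holes, 212, "first"))  # 1 (the 500 KB hole)
print(pick_hole(holes, 212, "best"))   # 3 (the 300 KB hole)
print(pick_hole(holes, 212, "worst"))  # 4 (the 600 KB hole)
```

First-fit is usually fastest; best-fit leaves the smallest leftover slivers; worst-fit leaves the largest (and generally performs worst in practice).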
281
external fragmentation
Wasted memory between allocated blocks due to dynamic allocation and deallocation. Results in small, unusable free memory holes.
282
50-percent rule
A statistical finding that, even with optimization, first-fit allocation loses roughly 0.5 N blocks to fragmentation for every N allocated blocks — meaning about one-third of memory may be unusable.
283
internal fragmentation
Wasted memory inside an allocated block due to fixed-size allocation. Occurs when the memory assigned is larger than what the process needs.
284
partitioning
dividing memory into sections to manage different processes
285
fixed partitioning memory management and what it leads to
Memory is divided into fixed-size partitions (equal or unequal). Each partition holds only one process. Leads to internal fragmentation if the process doesn't fill the partition.
286
overlays
Overlays are a technique that allows a large program to run in limited memory by loading only the necessary parts (overlays) at a time. The programmer or OS is responsible for managing which code/data is loaded and when, swapping sections in and out of the same memory space. Saves memory, but requires careful planning and manual control in older systems.
287
variable partition memory management scheme
A method where memory is allocated dynamically, varying in size based on process needs. More flexible than fixed partitions, but causes external fragmentation as memory becomes scattered with small, unusable gaps. Requires compaction to reduce wasted space.
288
compaction
A technique to eliminate external fragmentation by shifting processes in memory so free space is consolidated into one large block.
289
paging
A memory management scheme that avoids external fragmentation by dividing physical memory into fixed-size frames and logical memory into pages of the same size. Allows non-contiguous allocation while simplifying memory management.
290
frame
A fixed-size block of physical memory.
291
page
A fixed-size block of logical (virtual) memory in a paging system. Mapped to frames in physical memory.
292
page number (p)
Part of a logical address generated by the CPU; index into the page table. Tells the system which page of the process's virtual memory is being accessed. Combined with the page offset (d) to complete address translation.
293
page offset (d)
The offset within a page, used to find the exact physical address once the page number is mapped to a frame.
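With a power-of-two page size, splitting a logical address into (p, d) is simple arithmetic. A sketch assuming 4 KB pages and a hypothetical one-entry page table (real page sizes and tables vary by architecture):

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

def split_address(logical_addr, page_size=PAGE_SIZE):
    """Split a logical address into (page number p, offset d)."""
    return logical_addr // page_size, logical_addr % page_size

p, d = split_address(20000)                # p = 4, d = 3616

# Hypothetical page table entry: page 4 is stored in frame 9
page_table = {4: 9}
physical = page_table[p] * PAGE_SIZE + d   # frame base + offset
print(p, d, physical)
```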
294
page table
A data structure used in paging to map virtual page numbers to physical frame numbers. Stored in memory by the OS. Each process has its own page table. Used by the MMU during address translation.
295
page-table base register (PTBR)
A special register that holds the starting address of the page table for the current process. Every memory access by the CPU uses the PTBR to find the page table and translate virtual addresses into physical ones.
296
page-table length register
A register that stores the size of the page table.
297
Valid-Invalid Bit
A bit stored in each page table entry that indicates whether a page is legal (valid) for the current process: Valid (1): The page belongs to the process and can be used. Invalid (0): The page is not part of the process or is not currently in memory. Helps with memory protection by preventing illegal access.
298
frame table
Tracks which physical memory frames are in use, free, or assigned, along with other frame details.
299
huge pages
Huge pages are larger-than-normal memory pages used to reduce TLB overhead in paging systems. Typical size: 2MB or 1GB instead of 4KB. They reduce the number of page table entries and TLB misses, improving performance in memory-intensive applications.
300
translation look-aside buffer (TLB)
A small, fast hardware cache that stores recent page table entries (translations from virtual to physical addresses). Speeds up address translation by avoiding repeated lookups in the full page table. If the desired page is not in the TLB → TLB miss → page table must be consulted.
301
TLB miss
Occurs when a memory address is not found in the TLB, requiring a page table lookup.
302
segmentation
Memory is divided into variable-sized logical segments based on the program’s structure — e.g., code, stack, heap, data.
303
segmented paging
A hybrid memory management method that divides memory into segments, and then divides each segment into fixed-size pages. Know it’s a hybrid system (segmentation for logical structure, paging for physical mapping)
304
segment table length register (STLR)
A register that stores the number of segments a program uses. Used to verify that a segment number in a logical address is within bounds — prevents accessing invalid memory.
305
segment table base register (STBR)
A register that stores the starting address of the segment table in memory.
306
segment number
Part of a logical address that identifies which segment the address refers to.
307
segment offset
Part of a logical address that specifies the location within a segment.
308
segment table
A table storing details for each segment, including the base address and the limit (size).
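Segment translation can be sketched from the base and limit entries (the segment table below is hypothetical; a `MemoryError` stands in for the hardware trap):

```python
def translate_segment(seg_table, seg_num, offset):
    """Segmentation: physical address = base + offset, trapping if offset >= limit."""
    base, limit = seg_table[seg_num]
    if offset >= limit:
        raise MemoryError("trap: segment offset out of bounds")
    return base + offset

# Hypothetical segment table: segment number -> (base, limit)
segments = {0: (1400, 1000), 1: (6300, 400)}
print(translate_segment(segments, 1, 53))  # 6353
```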
309
segment base address
The starting physical address of a segment in memory.
310
swapped
A process that is moved between main memory and a backing store. Done to free up memory temporarily.
311
backing store
Secondary storage (e.g., hard disk or SSD) used to hold processes or pages when they’re swapped out of main memory.
312
swapping with paging
Instead of swapping the whole process, individual pages are moved between memory and the backing store.
313
page out
When a page is moved from main memory to the backing store (usually due to memory pressure).
314
page in
When a page is brought into main memory from the backing store because it’s needed.
315
flash memory
A type of nonvolatile storage. It has limited write cycles and space constraints.
316
virtual memory
Virtual memory allows programs to run even if they don’t fully fit in RAM, by using disk space as extra memory. It separates logical memory (what the program uses) from physical memory (actual RAM).
317
virtual address space
The logical view of memory a process sees — usually much larger than physical memory. Each process thinks it has its own private, contiguous memory, mapped to physical memory as needed.
318
sparse
A sparse address space is a virtual memory layout where only parts of the logical address space are used. There are large gaps (unused regions) between active segments like code, heap, and stack. Not caused by fragmentation; it’s intentional and efficient, since unused parts aren’t mapped to RAM.
319
demand paging
bringing in parts of a program from storage into memory only when they are needed If a page is not in memory when accessed → a page fault occurs, and the OS loads it from disk.
320
valid-invalid bit
a bit used in the page table to indicate whether a page is currently in memory (valid) or not (invalid)
321
page fault
Happens when a program tries to access a page not in memory — triggers the OS to load it from disk.
322
free frame
an available block of physical memory where a page can be loaded from secondary storage
323
effective access time (EAT)
EAT is the average time to access a page, factoring in both: memory access time (when the page is in RAM) page fault time (when the page must be loaded from disk)
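As a formula: EAT = (1 - p) x memory access time + p x page-fault service time, where p is the page-fault rate. A sketch with illustrative textbook-style numbers (200 ns memory access, 8 ms fault service):

```python
def effective_access_time(p_fault, mem_ns, fault_ns):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p_fault) * mem_ns + p_fault * fault_ns

# Even a 0.1% fault rate dominates the average: ~8200 ns vs. 200 ns
print(effective_access_time(0.001, 200, 8_000_000))
```

The takeaway is that fault service time is so large that p must be tiny for memory to feel fast.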
324
over-allocating
Giving programs access to more virtual memory than there is physical memory.
325
page-replacement
When a page fault occurs and memory is full, the OS must choose a page to remove to make space for the new one. This process is called page replacement.
326
victim frame
The frame chosen to be replaced during page replacement.
327
modify bit (dirty bit)
A bit that shows whether a page in memory has been changed. If set, the page must be saved (written back to disk) before replacement. If not set, it can be replaced without saving.
328
page replacement algorithm
A strategy used when a page fault occurs and memory is full. Determines which page to remove to make space for the new one. Common algorithms: FIFO (First-In, First-Out) LRU (Least Recently Used) Optimal (Theoretical best) Clock (Approximation of LRU) Good algorithms reduce page faults and improve performance.
329
reference string
A list of page requests (like 1, 3, 2, 1, 4...) used to test and compare page replacement algorithms.
330
FIFO (First-In, First-Out) Algorithm
FIFO replaces the oldest loaded page when a new page needs to be brought into memory. Simple to implement (uses a queue), but doesn't consider how often or recently a page is used. Can lead to Belady’s Anomaly, where adding more frames causes more page faults.
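A FIFO fault counter is small enough to sketch, and the classic reference string below reproduces Belady's Anomaly (9 faults with 3 frames, 10 with 4):

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement with a fixed number of frames."""
    memory, order, faults = set(), deque(), 0
    for page in reference_string:
        if page in memory:
            continue                          # hit: no fault, no queue change
        faults += 1
        if len(memory) == frames:
            memory.discard(order.popleft())   # evict the oldest loaded page
        memory.add(page)
        order.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3), fifo_faults(ref, 4))  # 9 10
```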
331
optimal page replacement
Replaces the page that will not be used for the longest time in the future. Has the lowest possible page fault rate — but is not implementable in practice because it requires knowledge of future memory accesses. Used as a benchmark to compare real-world algorithms like LRU and FIFO.
332
belady’s anomaly
A weird situation where increasing the number of page frames leads to more page faults, not fewer.
333
LRU (Least Recently Used) Algorithm
Replaces the page that hasn’t been used in the longest time.
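A sketch of LRU using an ordered map as the recency queue (the reference string is illustrative):

```python
from collections import OrderedDict

def lru_faults(reference_string, frames):
    """Count page faults under LRU: evict the least recently used page."""
    memory, faults = OrderedDict(), 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)      # mark as most recently used
            continue
        faults += 1
        if len(memory) == frames:
            memory.popitem(last=False)    # drop the least recently used page
        memory[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))
```

Unlike FIFO, LRU is a stack algorithm, so it never exhibits Belady's Anomaly.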
334
allocation of frames
Dividing physical memory into fixed-size blocks (frames) and assigning them to processes. Frame allocation = part of paging
335
fixed allocation
Each process is given a fixed number of frames, regardless of its size or needs. Simple and fair, but may lead to: Wasted memory for small processes Too few frames for large processes → more page faults
336
proportional allocation
Processes get a number of frames based on their size or need.
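A sketch of proportional allocation: process i gets roughly (s_i / S) x m frames, where s_i is its size, S the total demand, and m the frame count (sizes below are illustrative; integer division may leave a frame or two unassigned):

```python
def proportional_allocation(sizes, total_frames):
    """Give process i roughly (s_i / S) * m frames, where S = sum of all sizes."""
    total = sum(sizes)
    return [s * total_frames // total for s in sizes]

# 62 frames split between a 10-page process and a 127-page process
print(proportional_allocation([10, 127], 62))  # [4, 57]
```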
337
equal allocation
A frame allocation strategy where each process gets the same number of frames, regardless of size or needs. Easy to implement May cause under-allocation for large processes → page faults Wastes memory for small processes
338
global replacement
A page replacement strategy where a page can be replaced from the entire set of frames, not just the ones assigned to the current process.
339
local replacement algorithm
A page replacement strategy where a process can only replace pages from its own allocated frames. Prevents one process from affecting another’s memory use, but may cause thrashing if the process doesn’t have enough frames.
340
priority replacement algorithm
A page replacement strategy where the priority of the process determines which page is replaced. When a page fault occurs, the OS may choose to evict a page from a lower-priority process, preserving pages from higher-priority processes. Can cause starvation for low-priority processes.
341
thrashing
Happens when a system has too little memory, causing constant page faults and swapping. CPU is busy handling memory access instead of executing processes → leads to severe performance drop.
342
locality model
The idea that programs tend to access a small, predictable set of memory locations repeatedly over short periods. Two types: Temporal locality: Recently used items are likely to be used again soon Spatial locality: Nearby memory locations are likely to be accessed next Used to optimize caching, paging, and prevent thrashing.
343
working set
The set of pages a process has used recently (within a time window Δ). Represents the active memory a process needs to run efficiently. Used to decide how many frames a process should get → too few = thrashing.
344
working-set model and what it prevents
Tracks the set of recently used pages to estimate a process’s active memory needs. Used to reduce page faults and prevent thrashing by only keeping the process’s current needs in memory.
345
file
The smallest logical storage unit; a collection of related information defined by its creator.
346
text file
Sequence of characters organized into lines/pages
347
source file
Contains program code, not compiled. Sequence of functions, each further organized as declarations and executable statements
348
executable file
Contains compiled code, ready to run. Series of code sections, loader can bring into memory for execution.
349
extended file attributes
Metadata beyond standard info (e.g. encoding, checksum).
350
seek
Changes the current file-position pointer
351
hard links
Multiple filenames pointing to the same inode (same data on disk). Deleting one doesn’t remove the data unless all links are deleted.
352
sequential access
A file-access method in which contents are read in order, from beginning to end.
353
direct access
A file-access method that allows data to be read or written in any order.
354
logical records
Structured pieces of file content, typically fixed-length, used in direct-access files.
355
relative block number
An index relative to the beginning of a file. Block 0 is the first, Block 1 is the second, and so on.
356
allocation problem
The OS’s task of deciding where on the disk to store the blocks of a file.
357
index (in file access)
An access method built on top of direct access that uses pointers to locate data blocks quickly.
358
7 types of file attributes
Name, Identifier, Type, Location, Size, Protection, Timestamps and user identification. Mnemonic: NIT-LSPT
359
user file directory (UFD)
What it is: A per-user file directory; each user has their own. Context: If an OS supports multiple users, each user gets their own UFD. You’d see this: In a two-level directory system: MFD → Kaitlin's UFD → your files
360
main file directory (MFD)
Points to each user's UFD in a two-level directory system. What it is: The master index — it maps users to their UFDs. Context: Instead of one massive global directory, this keeps users’ files private and separated.
361
acyclic graph
What it is: A file structure where no directory links back to itself (no cycles/loops). Context: Prevents infinite loops when traversing directories — critical for ls -R or recursive delete operations.
362
link (in file naming)
A file that has no contents but rather points to another file.
363
resolve
to follow a link and find the target file Imagine you're using a shortcut on your desktop called Resume. That shortcut isn’t the file — it just points to the real file stored deep in Documents/JobStuff/Resume2025.pdf. When you double-click it, your computer has to: Follow the shortcut (the link) Find the actual file (the target) Open the target file That whole process is called resolving the link.
364
Linux command: List the contents of the current directory
ls
365
Linux command: Change the current working directory
cd
366
Linux command: Show the full path of the current directory
pwd
367
Linux command: Show the contents of one or more files
cat
368
Linux command: Create a .zip archive
zip
369
Linux command: View file contents one screen at a time
less
370
Linux command: Show the first lines of a file
head
371
Linux command: Show the last lines of a file
tail
372
Linux command: Compare two files line by line and show differences
diff
373
Linux command: Check if two files are identical
cmp
374
Linux command: Compare two sorted files and show common/unique lines
comm
375
Linux command: Sort the lines of a file or input
sort
376
Linux command: Copy or convert data at a low level (e.g., disk cloning)
dd
377
Linux command: Change file or directory permissions
chmod
378
Linux command: Change the owner of a file or directory
chown
379
Linux command: Create a new user account
useradd
380
Linux command: Modify an existing user account
usermod
381
Linux command: Change a user’s password
passwd
382
Linux command: Show basic system info (kernel, OS, etc.)
uname
383
Linux command: Show the username of the current user
whoami
384
Linux command: Show the full path to a command’s binary/source/man page
whereis
385
Linux command: Get a one-line description of a command
whatis
386
Linux command: View the manual for a command
man
387
Linux command: Show disk space usage for all file systems
df
388
Linux command: Show disk usage for files and directories
du
389
Linux command: Attach a file system to the file system tree
mount
390
Linux command: Display currently running processes
ps
391
Linux command: Show live processes and system resource usage
top
392
Linux command: Terminate a process by its PID
kill
393
Linux command: Terminate all processes with a specific name
killall
394
Linux command: Securely connect to a remote system
ssh
395
Linux command: Show network interfaces and IP addresses
ifconfig
396
Linux command: Trace the path network packets take
traceroute
397
Linux command: Download a file from the internet
wget
398
Linux command: Configure firewall with a user-friendly tool
ufw
399
Linux command: Configure the firewall with detailed rules
iptables
400
Linux command: Run a command as superuser (admin privileges)
sudo
401
Linux command: Start or stop system services
service
402
Linux command: Manage software packages (install, update, remove)
apt, pacman, yum, rpm
403
Linux command: Print a message or the value of a variable
echo
404
Linux command: Clear the terminal screen
clear
405
Linux command: Display a calendar in the terminal
cal
406
Linux command: Create a shortcut for a command
alias
407
Linux command: Set or export environment variables
export
408
port
A connection point where a device communicates with a computer system — for example, a serial port.
409
PHY
Shorthand for the OSI model physical layer; used more commonly in data-center terminology to refer to ports.
410
daisy chain
A device connection setup where device A plugs into B, B into C, and C into the computer — forming a chain that acts like a bus.
411
PCIe bus
A high-speed system bus used in PCs to connect the processor and memory to fast I/O devices like graphics cards and SSDs. Uses multiple lanes to transfer data in both directions at once (full-duplex).
412
expansion bus
A bus designed for slower peripherals like keyboards and USB ports. It extends the capabilities of the main system bus by allowing more (and slower) devices to connect.
413
serial-attached SCSI (SAS)
A type of storage-specific bus that connects multiple high-speed storage devices (like hard drives) together, commonly used in servers and enterprise storage solutions.
414
controller
A device (or chip) that acts as a middleman between the CPU and a peripheral. For example, a disk controller manages read/write commands between storage and memory.
415
fibre channel (FC)
A high-speed serial communication standard used mainly in enterprise storage networks (SANs). Known for its low-latency, high-reliability data transfer, especially in servers and data centers.
416
host bus adapter (HBA)
A specialized device controller that connects a computer to a network or storage system via a bus. It usually has its own processor and memory to handle complex protocols like fibre channel.
417
memory-mapped I/O
A method where device control registers are mapped into the same memory space as RAM. This allows the CPU to use regular memory instructions (like load and store) to read/write to I/O devices — instead of using special I/O instructions. ✅ Why it matters: It's faster, especially for high-volume tasks like sending screen data to a graphics controller. 📌 Modern systems mostly rely on memory-mapped I/O because it simplifies programming and improves performance.
418
data-in register
A register that holds input from the device, which the CPU reads. 📌 Example: When you type on a keyboard, the keystroke ends up in this register for the CPU to read.
419
data-out register
A register the CPU writes to in order to send data to the device. 📌 Example: To display a letter on screen, the CPU writes that character to the display’s data-out register.
420
status register
A read-only register that shows the current state of the device. 🧠 Bits inside this register might tell you: Is the device done? Is there data available? Was there an error?
421
control register
A write-only register the CPU uses to send commands or settings to a device. 📌 Example: Set communication mode (full vs. half duplex), enable error-checking, or choose data speed.
422
clear
To write a 0 into a bit. Commonly used to reset flags like the busy or error bits in registers.
423
set
To write a 1 into a bit. Used to activate or enable status flags, like setting the busy bit when the controller is working.
424
busy-waiting (polling)
Also known as polling, this is when the host repeatedly reads a register (usually the status register) in a loop until a bit (like busy) changes.
425
programmed I/O (PIO)
A method where the CPU manually transfers data to and from I/O devices one byte at a time, checking the status bits for readiness.
426
direct-memory-access (DMA)
A technique using a dedicated controller to transfer data directly between memory and devices without CPU intervention.
427
scatter-gather
A DMA method that handles non-contiguous memory blocks in a single transfer command.
428
double buffering
A technique where data is copied first to kernel memory and then to user memory, requiring two copies.
429
DMA-request
A signal from the device controller indicating data is ready for DMA transfer.
430
DMA-acknowledge
A signal from the DMA controller telling the device to transfer data after it has seized the memory bus.
431
cycle stealing
When the DMA controller temporarily takes control of the memory bus, preventing the CPU from accessing memory.
432
direct virtual memory access (DVMA)
A type of DMA using virtual addresses that are translated to physical addresses.
433
Interrupt
A signal from a device or program that tells the CPU it needs attention.
434
Interrupt Request Line
A wire the CPU checks after each instruction to see if a device needs service.
435
Interrupt Handler (ISR)
A special function the CPU runs to respond to an interrupt.
436
Non-Maskable Interrupt
A critical interrupt that the CPU can’t ignore (e.g., hardware failure).
437
Maskable Interrupt
A device interrupt that the CPU can temporarily ignore during critical processing.
438
Interrupt Vector Table
A table storing addresses of handlers for different types of interrupts.
439
Interrupt Chaining
A technique where one interrupt vector entry points to a list of handlers for multiple devices.
440
Trap
A software interrupt, used to request OS services (e.g., system calls).
441
STREAMS
A UNIX I/O system that builds pipelines from modules, making device and network I/O more modular and flexible.
442
Stream Head
The part that connects STREAMS to the user process and handles system calls like read() and write().
443
Stream Modules
Loadable components between the stream head and driver that add or modify functionality.
444
Driver End
The component that connects to the actual hardware device and handles low-level I/O.
445
Encapsulation (in STREAMS)
Wrapping data in headers or layers as it moves through modules—helps structure and manage message flow.
446
Flow Control
A way to manage data movement and prevent buffer overflows by pausing sending if buffers are full.
447
What do write() and putmsg() do in STREAMS?
They send data into the STREAM, pushing it from one module to the next until it reaches the device.
448
What do read() and getmsg() do in STREAMS?
They retrieve messages from the stream head's read queue to deliver to the user process.
449
Does STREAMS support asynchronous I/O?
Yes—except at the stream head, where blocking may occur due to flow control.
450
What’s one major benefit of STREAMS?
Reusable modules can be loaded/unloaded for different devices or protocols using ioctl().
451
HDD structure diagram (labels)
Left: cylinder; top: sector; below: spindle; bottom: track
452
Transfer Rate
The speed at which data flows between the drive and the computer.
453
Positioning Time / Random-Access Time
The total time to access data, including seek time and rotational latency.
454
Seek Time
The time needed to move the disk arm to the correct cylinder.
455
Rotational Latency
The time it takes for the disk to rotate so the correct sector is under the read/write head.
456
Head Crash
When the disk head touches the platter, damaging the surface and usually destroying the disk.
457
Drive Writes Per Day (DWPD)
How many full drive writes can be done daily before a NAND drive is expected to fail.
458
Flash Translation Layer (FTL)
A mapping table in NAND flash that tracks which physical pages store valid logical blocks and which can be erased.
459
Garbage Collection
The process of copying valid data elsewhere so blocks with invalid data can be erased and reused.
460
Over-Provisioning
Reserving extra space (e.g., 20%) to improve performance and ensure there are blocks available for writing when the main space is full.
461
Wear Leveling
A technique that spreads writes evenly across all blocks to prevent some from wearing out faster than others.
462
Error-Correcting Codes (ECC)
Data stored with user data to detect and correct read/write errors.
463
RAM drives
Sections of system DRAM set up by device drivers to act like storage devices, often used with file systems for fast, temporary storage.
464
serial ATA (SATA)
The most common bus interface for connecting storage drives to a computer; transfers data serially.
465
Logical blocks
The smallest unit of transfer used by storage devices and addressed by the OS
466
Constant Linear Velocity (CLV)
A method where the disk spins faster or slower so that data moves under the head at a constant rate.
467
Constant Angular Velocity (CAV)
A method where the disk spins at a constant speed, so inner tracks have fewer sectors than outer tracks.
468
FCFS disk scheduling
Serves disk I/O requests in the order they arrive, without reordering.
469
SCAN (elevator) disk scheduling
The disk arm moves in one direction, serving all requests along the way, then reverses direction at the end. It reduces overall seek time by avoiding random jumps across the disk.
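As a rough sketch (Python, with a hypothetical `scan_order` helper; the actual sweep also visits the disk end before reversing, but the service order is the same):

```python
def scan_order(requests, head, direction="up"):
    """SCAN service order: sweep one way from the head, then reverse."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

# Classic textbook queue with the head at cylinder 53:
print(scan_order([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```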
470
C-SCAN disk scheduling
Instead of reversing direction like SCAN, C-SCAN returns to the start and begins a new pass in one direction only. It provides more uniform wait times by treating all requests in one direction equally.
471
bandwidth
The total amount of data transferred divided by the total time between the first request for service and the completion of the last transfer.
472
SSTF (Shortest Seek Time First)
An algorithm that selects the I/O request closest to the current head position to minimize seek time. It can cause starvation for far-away requests if closer ones keep arriving.
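A minimal greedy sketch of SSTF in Python (hypothetical `sstf_order` helper; real schedulers work on a live queue, which is what makes starvation possible):

```python
def sstf_order(requests, head):
    """Repeatedly service the pending request nearest the current head."""
    pending = list(requests)
    order = []
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt  # the head moves to the serviced cylinder
    return order

print(sstf_order([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# [65, 67, 37, 14, 98, 122, 124, 183]
```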
473
LOOK Scheduling
A variant of SCAN where the disk arm only goes as far as the last request in each direction, then reverses—it doesn’t go all the way to the disk ends if no requests are there. It reduces unnecessary movement by “looking ahead” and only servicing where requests exist.
474
C-LOOK disk scheduling
A variant of LOOK where the disk arm only moves in one direction (like C-SCAN), but only goes as far as the last request, then jumps back to the beginning—skipping empty areas. It reduces seek time and gives more uniform wait times by avoiding unnecessary arm movement and skipping unused regions.
475
Parity bits
Detect single-bit errors by checking whether the number of 1s in a byte is odd or even.
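A tiny even-parity sketch in Python (hypothetical `even_parity_bit` helper): the parity bit is chosen so the total number of 1s, data plus parity, is even; any single flipped bit then changes the count's parity and is detected.

```python
def even_parity_bit(byte):
    """Return the parity bit that makes the total count of 1s even."""
    return bin(byte).count("1") % 2

# 0b1011 has three 1s, so the parity bit must be 1.
print(even_parity_bit(0b1011))  # 1
# 0b1001 has two 1s, so the parity bit is 0.
print(even_parity_bit(0b1001))  # 0
```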
476
Cyclic redundancy checks (CRCs)
CRCs detect multiple-bit errors using a hash function to check for data changes, commonly used in networking.
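For illustration, Python's standard library exposes CRC-32 via `zlib.crc32`; a single corrupted byte yields a different checksum:

```python
import zlib

data = b"hello, disk"
crc = zlib.crc32(data)          # checksum stored alongside the data

corrupted = b"hellp, disk"      # one byte flipped in transit
print(zlib.crc32(corrupted) != crc)  # True -> corruption detected
```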
477
Error correction codes (ECC)
ECC detects and corrects errors using extra stored bits and algorithms, often used in memory and storage systems.
478
Error detection
The process of identifying if a problem like bit corruption has occurred.
479
Checksums
The general term for an error detection and correction code.
480
soft error
An error that is recoverable by retrying the operation.
481
hard error
An unrecoverable error (possibly resulting in data loss).
482
Sector
The smallest unit of data storage on a disk drive, such as a hard drive or SSD.
483
page in storage
The smallest unit of data that can be transferred between RAM and a storage device during virtual memory operations.
484
low-level (physical) formatting
The initialization of a storage medium to prepare it for use as a computer storage device.
485
partition
A logical division of storage space, such as a group of contiguous cylinders on an HDD.
486
mounting
Making a file system available for use by logically attaching it to the root file system.
487
volume
A container of storage that holds a mountable file system (can be a physical device or a file-based image).
488
logical formatting
The creation of a file system in a volume to make it ready for use.
489
cluster
In Windows storage, a power-of-2 number of disk sectors collected for I/O optimization.
490
raw disk
Direct access to a secondary storage device as an array of blocks with no file system.
491
bootstrap
The sequence of steps a computer takes at power-on to initialize hardware and start the OS.
492
Boot Disk / System Disk
A device with a boot partition and kernel that can start the OS during the boot process.
493
boot partition
A storage device partition containing an executable operating system.
494
Master Boot Record (MBR)
Boot code and the partition table stored in the first sector of a bootable storage device.
495
boot sector
The first sector of a boot device that contains the bootstrap code.
496
bad block
An unusable sector on a hard disk.
497
sector sparing
Replacing a bad HDD sector with a spare sector from elsewhere on the device.
498
sector slipping
The renaming of sectors to avoid using a bad sector.
499
host-attached storage
Storage accessed through local I/O ports, directly attached to a computer (not over a network or SAN).
500
Fibre Channel (FC)
A high-speed storage I/O bus used in data centers to connect servers to storage arrays.
501
Network-Attached Storage (NAS)
Storage accessed over a network by computers, often using NFS, CIFS, or iSCSI protocols.
502
iSCSI
A protocol that carries SCSI commands over IP networks, allowing block-level storage access remotely.
503
Storage-Area Network (SAN)
A private local network that connects multiple computers to shared storage devices.
504
InfiniBand (IB)
A high-speed network link designed for fast communication between servers and storage systems.
505
file systems
Provide efficient, structured access to storage, enabling data to be stored, located, and retrieved.
506
blocks
Units of data transfer between memory and mass storage; contain one or more sectors.
507
sectors
Subdivisions of a block, typically 512 or 4,096 bytes on a disk.
508
layered file system
A structure where higher levels use features of lower levels to offer abstraction and modular design.
509
I/O control
Manages low-level device communication and interrupt handling; contains device drivers and interprets commands.
510
basic file system
Translates generic file operations to device-specific commands; manages buffers, caches, and block I/O.
511
file-organization module
Knows logical block layout of files and manages allocation using the free-space manager.
512
logical file system
Handles metadata, file permissions, symbolic names, and protection; uses file-control blocks.
513
file-control block (FCB)
Contains metadata like file ownership, permissions, and location (inode in UNIX).
514
UNIX file system (UFS)
An early UNIX file system; uses inodes as its FCBs.
515
extended file system
The most common class of Linux file systems, with ext3 and ext4 being the most widely used.
516
boot block
A block of code at a fixed location on disk containing the instructions needed to boot the kernel stored on that disk; the UFS boot control block.
517
partition boot sector
NTFS version of the boot control block; contains boot code and data to launch the OS.
518
volume control block
A per-volume storage block containing data describing the volume.
519
superblock
The UFS volume control block.
520
master file table
The NTFS volume control block.
521
mount table
An in-memory data structure containing information about each mounted volume. It tracks file systems and how they are accessed.
522
system-wide open-file table
Kernel structure storing metadata (FCB, open count, access mode) for every open file in the system.
523
per-process open-file table
Per-process structure with pointers to system-wide open-file entries
524
file descriptor (fd)
UNIX open-file pointer, created and returned to a process when it opens a file.
525
file handle
Windows name for the open-file file descriptor.
526
Free-space list
A file system data structure that tracks which blocks on the disk are free (unallocated) and available for new files or directories.
527
Bitmap
A sequence of binary digits (bits) used to track availability of resources—typically disk blocks. A 1 usually means available, and 0 means in use, though this may vary by implementation.
528
Bit vector
A type of bitmap where each bit represents a disk block. A 0 usually indicates the block is in use, and a 1 means the block is free. Efficient for finding the first free block using bitwise operations.
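A sketch of the bitwise trick in Python (hypothetical `first_free_block` helper; assumes 32-bit words and the 1-means-free convention from this card):

```python
def first_free_block(bitmap_words):
    """Scan 32-bit words; any nonzero word contains a free (1) bit."""
    for i, word in enumerate(bitmap_words):
        if word != 0:
            # word & -word isolates the lowest set bit; its position
            # gives the offset of the first free block in this word.
            return i * 32 + (word & -word).bit_length() - 1
    return None  # no free blocks

# Words 0 and 1 are fully allocated; bit 6 of word 2 -> block 70.
print(first_free_block([0, 0, 0b1000000]))  # 70
```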
529
Combined Scheme (UFS)
A hybrid allocation method used in UNIX-based file systems. The inode starts with 12 direct blocks for small files, then adds: a single indirect block (points to data blocks), a double indirect block (points to blocks of pointers to data blocks), and a triple indirect block (one more layer, pointing to double indirect blocks). Efficient for both small and large files by combining direct and indirect referencing.
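The addressable file size under this scheme follows directly from the pointer counts; a quick Python calculation (assuming a 4 KB block and 4-byte pointers, so 1,024 pointers per block):

```python
BLOCK = 4096          # assumed block size in bytes
PTR = 4               # assumed pointer size in bytes
PPB = BLOCK // PTR    # pointers per block = 1024

max_blocks = 12 + PPB + PPB**2 + PPB**3  # direct + single + double + triple
print(max_blocks * BLOCK)  # max file size in bytes (roughly 4 TB here)
```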
530
extent
A contiguous chunk of disk space added to a file to allow growth, reducing external fragmentation.
531
linked allocation
Each file is a linked list of disk blocks scattered across the disk; good for sequential access, poor for random access.
532
file-allocation table (FAT)
A table used in MS-DOS that maps which blocks belong to which files; stored at the beginning of the disk.
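A toy sketch of chain-following in Python (hypothetical `fat_chain` helper with an assumed -1 end-of-file marker; a real FAT is an on-disk array, not a dict):

```python
def fat_chain(fat, start):
    """Follow a file's block chain through the FAT to the end marker."""
    EOF = -1                # assumed end-of-file marker
    chain = []
    block = start
    while block != EOF:
        chain.append(block)
        block = fat[block]  # each entry names the file's next block
    return chain

# A file starting at block 2 occupies blocks 2 -> 5 -> 3.
fat = {2: 5, 5: 3, 3: -1}
print(fat_chain(fat, 2))  # [2, 5, 3]
```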
533
indexed allocation
Stores all block pointers for a file in an index block, allowing direct access to any block.
534
index block
A special block that holds pointers to the data blocks of a file in indexed allocation.
535
mount point
A directory in the existing file system where a new file system is attached when it is mounted. 📁 Example: Mounting a USB drive at /mnt/usb or /home makes its contents accessible there.
536
bootstrap loader
The small program that loads the kernel as part of the bootstrap procedure.
537
dual-booted
A computer that can boot one of two or more installed operating systems.
538
root partition
The storage partition that contains the kernel and the root file system; mounted at boot time.
539
New Technology File System (NTFS)
Microsoft-designed file system, successor to FAT32. Supports 64-bit volumes, journaling, and file-based compression.
540
EXT2
Second extended file system. No journaling. Recommended for flash drives/USBs. Max file size: 2TB.
541
EXT3
Third extended file system. Adds journaling to EXT2, reducing corruption risk. Max file size: 2TB.
542
EXT4
Fourth extended file system. Supports large file sizes (up to 16TB) and new features like multiblock allocation, delayed allocation, and journal checksums.
543
Master File Table (MFT)
NTFS’s version of inodes. Contains file records, organized as a B-Tree, stored in a file, and managed like a regular file.
544
Metafiles (NTFS)
Special files in NTFS treated like regular files. Examples: log file, volume file, attribute definition file, bitmap, boot file, bad cluster file, root directory.
545
Volume Bitmap
A metafile that tracks free space. Can grow dynamically as needed.
546
File Record (in MFT)
Entry with attributes like file name, date, permissions. Can store small files directly or pointers to data blocks for large files.
547
Data Streams (NTFS)
Files can have multiple data streams. Default is the main stream. Extra streams are rarely used.
548
Directories (NTFS)
Files that hold names and references. Large directories use B+ trees and store redundant data like size and timestamps for fast searching.
549
EXT2/EXT3/EXT4 File Systems
Linux file systems. EXT2 has no journaling, EXT3 adds journaling, EXT4 adds large file support and performance features.
550
breach of confidentiality
unauthorized reading or stealing of information
551
breach of integrity
unauthorized modification, alteration, or tampering of data
552
breach of availability
unauthorized actions that destroy system access or data availability.
553
theft of service
unauthorized use of resources
554
denial-of-service (DoS)
preventing legitimate use of the system
555
masquerading
pretending to be someone else to gain unauthorized access
556
replay attack
repeating a valid data transmission to trick a system
557
man-in-the-middle attack
when an attacker secretly intercepts and alters the communication between two parties
558
session hijacking
taking control of a communication session between two parties
559
privilege escalation
gaining more privileges than a person or system should have
560
malware
Software created to harm, exploit, or take control of computer systems.
561
logic bomb
Malware that activates only when specific conditions are met.
562
virus
Self-replicating code that attaches to files or programs and can cause damage.
563
worm
Self-replicating malware that spreads across networks without user action.
564
secure by default
Describes a system that starts with settings minimizing its attack surface for better security.
565
zombie systems
Compromised computers secretly controlled by attackers, often used in coordinated attacks.
566
sniffing
An attack where the attacker monitors network traffic to steal sensitive data.
567
spoof
Faking a legitimate identity (like an IP address) to deceive users or systems.
568
distributed denial-of-service attack (DDoS)
An attack from many sources (often zombie systems) to overwhelm and shut down a targeted service.
569
mechanism
The how — a low-level method for implementing behavior in a system (e.g., how access control is enforced).
570
policy
The what — a high-level decision about what should be done (e.g., who is allowed access).
571
principle of least privilege
Programs, users, and systems should be given only the minimum privileges necessary to perform their tasks — reducing damage from both errors and attacks.
572
permissions
Access controls that can block malicious actions by limiting what users and programs are allowed to do. They act like an immune system for the OS.
573
compartmentalization
Isolating system components using permissions and restrictions so if one part is compromised, the rest stays protected — like having separate locked rooms.
574
audit trail
A record in system logs tracking access attempts. Helps detect attacks, trace their paths, and assess damage after an incident.
575
defense in depth
A layered security approach: multiple barriers (like a wall, moat, and guards) make it harder for attackers to reach the core — even if one layer fails.
576
privilege separation
A system design that divides functionality into different privilege levels to enhance security — e.g., kernel vs. user processes.
577
protection rings
A layered model that separates privileges using concentric "rings" — inner rings have higher privilege, outer rings have less.
578
ring 0
The innermost protection ring with highest privilege, used for the kernel and core OS functions.
579
ring 3
An outer protection ring with lowest privilege, used for regular user processes.
580
hypervisor
Also called virtual machine managers. They run in ring -1 (Intel), managing and isolating guest OSes with more privilege than ring 0.
581
TrustZone (TZ)
ARM's most privileged execution environment. Used for secure operations like cryptography, available in ARMv7+ processors.
582
Secure Monitor Call (SMC)
A special instruction used in kernel mode to interact with TrustZone — like a system call, but only for secure services.
583
protection domains
Logical groupings of resources and permissions that define what actions a process can perform on what objects, helping enforce access control and reduce risk.
584
Objects (in protection domains)
System resources that can be accessed — either hardware objects (CPU, printers, disks) or software objects (files, programs, semaphores).
585
software objects
Files, programs, semaphores — abstract data types that are accessed through defined operations.
586
need-to-know principle
A process should only access the specific objects it currently needs to complete its task — nothing more. Helps limit damage if the process fails or is compromised.
587
need-to-know vs. least privilege
Need-to-know = the policy (what access is needed) Least privilege = the mechanism (how the system enforces it)
588
access right
A permission defined as an ordered pair &lt;object-name, rights-set&gt;. Example: &lt;F, {read, write}&gt; allows reading and writing to file F, but nothing else.
589
static protection domain
A domain where the set of accessible resources remains fixed during the lifetime of a process. Simpler but less flexible and may violate the need-to-know principle.
590
dynamic protection domain
A domain where the set of accessible resources can change during execution. More complex but supports the need-to-know principle more accurately.
591
domain switching
The process of changing from one protection domain to another during execution, allowing more fine-grained control of access rights.
592
access list
A list tied to an object that defines which domains can perform what operations on it. If a domain isn’t listed, access is denied unless allowed by a default rule.
593
capability list
A list tied to a domain, detailing all the objects it can access and what operations are allowed. Managed by the OS to ensure secure use.
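The two cards above are column and row views of the same access matrix; a minimal Python sketch (hypothetical names, not a real OS structure):

```python
# Access-list view: each OBJECT lists which domains may do what.
access_lists = {
    "file_F": {"domain_1": {"read"}, "domain_2": {"read", "write"}},
}

# Capability-list view: each DOMAIN lists the objects it can reach.
capability_lists = {
    "domain_1": {"file_F": {"read"}},
    "domain_2": {"file_F": {"read", "write"}},
}

def allowed(domain, obj, op):
    """Check a request against the per-object access list."""
    return op in access_lists.get(obj, {}).get(domain, set())

print(allowed("domain_1", "file_F", "write"))  # False
print(allowed("domain_2", "file_F", "write"))  # True
```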
594
capability
A secure token or key representing access rights to an object. Possessing it grants permission. Can’t be modified or forged by user processes.
595
Role-Based Access Control (RBAC)
A security model where access is based on user roles. Ensures users only have the permissions they need to perform their tasks — aligned with the principle of least privilege.
596
Mandatory Access Control (MAC)
A strict, system-wide security model that uses labels to control access, even overriding superuser privileges. Enforces global access policies.
597
roles in RBAC
Groups of permissions assigned to users or processes. Users perform specific tasks by assuming the roles that carry the required privileges.
598
Discretionary Access Control (DAC)
Access is controlled by the resource owner.
599
MAC labels
Security tags assigned to users and resources (e.g., “secret”, “unclassified”) that determine access levels according to strict policies.