computer-systems-flashcards
(75 cards)
Operating System
An operating system provides an interface between user applications and hardware, allowing applications to request system-level services. It is responsible for managing computer resources such as the CPU, memory, and I/O devices. The OS supports multiprogramming, enforces security through process isolation, and, via virtual memory, allows more processes to run than physical memory alone could hold. It switches to kernel mode to execute privileged instructions when a user application requests services via system calls.
Process
A process is an executing instance of a program, including its code, data, and allocated resources. Key characteristics of a process include having its own address space (its own memory), maintaining a program counter (indicating the next instruction), and having an execution state (e.g., running, ready, blocked). Processes are isolated from each other for security.
Process Components
The components of a process include the Code (text segment), which contains the program’s instructions and is typically read-only and shareable among instances of the same program. The Data segment holds global and static variables. The Stack segment is used for function calls, local variables, and return addresses, growing and shrinking during execution. The Heap segment is used for dynamic memory allocation.
Process States
The different states a process can be in are: New (process is being created), Ready (process is waiting for CPU time), Running (process is executing instructions), Blocked (Waiting) (process is waiting for an I/O operation to complete), and Terminated (process has finished execution or been forcibly stopped).
System Calls
System calls provide an interface between user programs and the operating system, allowing applications to request system-level services. They are necessary because user applications cannot directly access hardware for security and stability reasons.
System Calls for Process Management
These system calls manage the creation, execution, and termination of processes. Examples include fork() (creates a new child process), exec() (replaces a process's memory image with a new program), wait() (makes a process wait for a child process to finish), exit() (terminates the calling process), getpid() (retrieves the process ID), and kill() (sends a signal to control or terminate a process).
Process Control Block (PCB)
The Process Control Block is a data structure used by the operating system to store information about a process. Examples of data stored in the PCB include the process ID, process state, parent process, memory management information, file descriptors, priority, and used CPU time.
Process Table
The process table is a list containing the Process Control Blocks (PCBs), with one PCB for each process.
Threads
Threads provide an execution context to a process, enabling the sequential execution of a set of instructions within that process. A thread has its own program counter, stack pointer, registers, and stack. Threads allow multiple tasks within a single process to run concurrently. Thread creation is significantly faster than process creation (10-100 times faster), and threads efficiently share memory and open files within a process.
Thread Control Block (TCB)
The Thread Control Block stores the context data for a thread, similar to how the PCB stores context data for a process. Examples of data stored in a TCB include the thread ID, stack pointer, program counter, register values, state (e.g., running, blocked, ready), and a pointer to the PCB of the process the thread belongs to.
PThreads
PThreads (POSIX threads) is a POSIX standard API for creating and synchronizing threads. Most UNIX systems support it, and functions in this API start with the pthread_ prefix.
CPU Scheduling
CPU scheduling is the process of determining which process or thread among those that are ready gets to run next on the CPU, especially in multiprogramming environments where only one can run at a time on a single-core CPU. A scheduling algorithm determines this choice.
CPU-bound process
CPU-bound processes have long CPU bursts and spend the majority of their time performing computations.
I/O-bound process
I/O-bound processes have short CPU bursts and spend most of their time waiting for I/O operations to complete.
Batch Systems Scheduling Goals
The primary goals of scheduling in batch systems, which run large jobs without much user interaction, are usually to maximize throughput (complete as many jobs as possible) and minimize turnaround time (reduce the total time from submission to completion).
Interactive Systems Scheduling Goal
In interactive systems, the main goal of scheduling is to minimize response time, which is the time between a user issuing a command and getting a result.
Non-pre-emptive scheduling
In non-pre-emptive scheduling algorithms, once a process is scheduled to run, it continues executing until it finishes or blocks (e.g., waiting for I/O).
First Come First Served (FCFS)
First Come, First Served (FCFS) is a non-pre-emptive scheduling algorithm that runs processes in the order in which they become ready to execute. It is simple to implement. A disadvantage is the convoy effect, where short processes can be delayed by a long CPU-bound process ahead of them in the queue, potentially reducing resource utilization. Starvation can occur if a ready process is entirely CPU-bound.
Shortest Job First (SJF)
Shortest Job First (SJF) is a non-pre-emptive scheduling algorithm that selects the ready job with the shortest next CPU burst and runs it until it blocks for I/O or completes. It minimizes average turnaround time when all jobs are available at once, but it requires knowing (or estimating) burst lengths in advance, and long jobs can starve if short jobs keep arriving.
Round Robin (RR)
Round Robin (RR) is a pre-emptive scheduling algorithm where each process is given a fixed time quantum to run. If a process is still running when its quantum expires, it is pre-empted and moved to the back of the ready queue. With a reasonable quantum, it is generally good for response time but not necessarily for turnaround time. It can favor CPU-bound processes over I/O-bound ones. There is overhead associated with context switching between processes. Starvation is not possible in Round Robin.
Priority Scheduling
Priority scheduling assigns a priority to each job and allocates the CPU to the highest-priority process that is ready to run. If there are multiple processes with the same highest priority, another scheduling algorithm (like FCFS or RR) might be used among them. A potential issue is starvation for lower-priority processes if high-priority processes constantly enter the system.
Memory Management
Memory management is concerned with managing the computer’s memory. Its purposes include supporting multiprogramming (loading multiple processes into memory for better CPU utilization), providing security through isolation between processes and the OS, and enabling the system to run more processes, even those requiring more memory than is physically available.
Memory Hierarchy
Memory in a computer system is structured in a hierarchy based on speed and size, with faster but smaller memory at the top and slower but larger memory at the bottom. The hierarchy, from fastest to slowest, is: Registers (built into CPU, extremely fast, very small), Cache Memory (L1, L2, L3) (fast memory close to CPU, temporarily stores frequently used data), RAM (Random Access Memory) (main memory, holds active programs and data), Disk Storage (SSD/HDD) (long-term storage, slower), and Virtual Memory (swap space on disk) (extends RAM using disk, much slower due to disk access).
Virtual Memory
Virtual memory is an operating system technique that extends the available RAM by using disk space (swap space). It provides a logical address space to each process, which the OS maps to physical addresses using page tables. This abstraction lets each process behave as though it has the entire address space to itself, while the OS ensures fair allocation of physical memory. It allows the system to run programs whose combined memory needs exceed physical RAM and facilitates multitasking.