Programs, Processes, Processors, and OS Flashcards

1
Q

Concurrent Programming
The three central concepts in concurrent programming

A
  • Processor
    • Hardware device that executes machine instructions
  • Program
    • Instruction sequence defining a potential execution path
    • Passive description of what you would like to happen
    • Stored on disk/secondary memory
  • Process
    • Active system entity executing its associated program(s); i.e., a program in execution on a processor
    • Resides in primary memory; removed on reboot
    (We’ll assume process, thread, and task are equivalent for now)
2
Q

Processes and Programs
Why is a process not a program?

A

1) Program may be executed by multiple processes at the same time
– Wordpad opened twice as separate processes of the same program
– 1 program, multiple processes

2) Process might run one program, and then another
– The gcc pre-processor is a different program (executable file) from the compiler’s syntax analyser and code generator
– The pre-processor, syntax analyser, and code generator are executed one after the other by the same process
– 1 process, multiple programs
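
As a rough illustration (a sketch of mine, not from the slides), the code below shows the Unix mechanism behind “1 process, multiple programs”: a process calls exec() to replace its current program image with another executable while keeping the same process identity (PID).

    /* One process, two programs -- illustrative sketch */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("PID %d is running program #1 (this executable)\n", (int)getpid());

        /* Replace this program image with 'ls'; the process (PID) stays the same */
        execlp("ls", "ls", "-l", (char *)NULL);

        perror("execlp");   /* only reached if execlp() fails */
        return 1;
    }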

3
Q

Processes

A

Processes abstract over…

1) Processors
– Can have multiple processes regardless of the number of processors

2) “Disturbed” sequential execution
– Interrupt processing: Handling external events (keypresses, clock ticks)
– Context switch: Execution jumps to another part of memory

4
Q

Context Switching
What is a thread’s context?

A

Need to store context information for a switched-out thread so that it can later resume exactly where it left off

A thread’s context (minimally) comprises
– Program Counter (PC): Instruction in the program (or function) that the thread will execute next
– Stack Pointer (SP): Address of the top of the thread’s call stack. The call stack remembers the sequence of function calls the thread is making as it executes the program (each thread needs its own stack)

5
Q

(Three-step context-switching sequence)

A
  1. De-schedule the currently-running thread
    – Save the PC and SP CPU registers of the currently-running thread
    – Required so that the thread can later resume execution exactly where it left off
  2. Scheduler selects the ‘best’ ready thread to run next
    – Time-slicing, priority, starvation
    – Hardware architecture, etc.
  3. Resume the newly-selected thread
    – Restore the saved register contents back into the PC & SP registers
    – Thread resumes where it last left off (PC is loaded last)
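
A runnable sketch of this idea (mine, not from the slides) using the POSIX <ucontext.h> API, which is still provided by glibc although removed from recent POSIX revisions: each ucontext_t holds the saved registers, including PC and SP, that a switched-out flow of control needs in order to resume exactly where it left off.

    /* User-level context switching with getcontext/makecontext/swapcontext */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thread_ctx;
    static char thread_stack[64 * 1024];      /* each thread needs its own stack */

    static void thread_fn(void)
    {
        printf("thread: running, now switching back to main\n");
        swapcontext(&thread_ctx, &main_ctx);  /* save my PC/SP, resume main */
        printf("thread: resumed exactly where it left off\n");
    }

    int main(void)
    {
        /* Build a context whose PC starts at thread_fn and whose SP points
         * into thread_stack */
        getcontext(&thread_ctx);
        thread_ctx.uc_stack.ss_sp   = thread_stack;
        thread_ctx.uc_stack.ss_size = sizeof thread_stack;
        thread_ctx.uc_link          = &main_ctx;  /* resumed when thread_fn returns */
        makecontext(&thread_ctx, thread_fn, 0);

        printf("main: switching to thread\n");
        swapcontext(&main_ctx, &thread_ctx);  /* save main's PC/SP, load thread's */
        printf("main: back in main, switching to thread again\n");
        swapcontext(&main_ctx, &thread_ctx);  /* thread resumes after its swapcontext */
        printf("main: done\n");
        return 0;
    }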
6
Q

Concurrency vs. Parallelism

A

– Processes are always concurrent
– …but are not always parallel

  • Parallel
    • Multiple (n > 1) processes executing simultaneously; all are executing at a given instant
  • Concurrent
    • n > 1 processes are underway simultaneously; fewer than n may be executing at any given instant
7
Q

Which process finishes first?

A

We cannot know; the answer might change from run to run (the scheduling order is not deterministic)

8
Q

“Safe” Concurrency
When do we not have to worry about concurrency?

A

1) No shared data or communication between threads
2) Shared data is read-only (constant)
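
A small pthreads sketch (illustrative, not from the course) of the read-only case: several threads read the same constant table with no locks, which is safe because nothing ever writes to it. Compile with -pthread.

    /* Safe concurrency: shared data that is never written needs no locks */
    #include <pthread.h>
    #include <stdio.h>

    static const int table[] = {2, 3, 5, 7, 11, 13};   /* shared, read-only */

    static void *reader(void *arg)
    {
        (void)arg;
        long sum = 0;
        for (int i = 0; i < (int)(sizeof table / sizeof table[0]); i++)
            sum += table[i];                 /* reads only: no race possible */
        printf("sum = %ld\n", sum);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, reader, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        return 0;
    }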

9
Q

Risky Concurrency: Contention for Resources
When should we worry about concurrency?

A
  1. Threads use a shared resource without synchronization
  2. One or more threads modify the shared resource
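
A minimal pthreads sketch (illustrative, not from the course) showing both conditions at once: two threads update a shared counter with no synchronization, so increments are lost. Compile with -pthread; the printed total is typically below 2,000,000 and varies from run to run.

    /* Risky concurrency: unsynchronized modification of shared data */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                    /* shared, mutable, unprotected */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                          /* data race: read-modify-write, no lock */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("expected 2000000, got %ld\n", counter);  /* usually less */
        return 0;
    }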
10
Q

Concurrency Context: Leveraging Hardware and Software

A
11
Q

Leveraging Hardware and Software

A

Multi-core processors: each core can run a thread in parallel with the other cores

12
Q

Leveraging Hardware and Software 2

A

Knowledge of concurrency is a requirement for building efficient multi-threaded and multi-core systems

13
Q

Limit to Performance: Amdahl’s law

A

Program speedup from adding more processors is limited by the serial (non-parallelizable) part of the program

e.g. If 95% of a program (of its runtime) can be parallelized, the theoretical maximum speedup is 20x

Maximum speedup = 1/(1 − p), where p is the proportion of the program that can be parallelized
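
A tiny worked example (mine), using the usual statement of Amdahl’s law, speedup(N) = 1 / ((1 − p) + p/N), which approaches the 1/(1 − p) limit as the processor count N grows:

    /* Amdahl's law: speedup as a function of the parallelizable fraction p */
    #include <stdio.h>

    static double amdahl_speedup(double p, double n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        double p = 0.95;                                      /* 95% parallelizable */
        printf("  4 cores: %.2fx\n", amdahl_speedup(p, 4));   /* ~3.48x  */
        printf(" 64 cores: %.2fx\n", amdahl_speedup(p, 64));  /* ~15.42x */
        printf("limit    : %.2fx\n", 1.0 / (1.0 - p));        /* 20x     */
        return 0;
    }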

14
Q

Complexities of Concurrency

A

– Accidental complexity
  – Low-level APIs
  – Tedious, error-prone, and time-consuming
  – Non-portable
– Limited debugging tools
  – Actual behaviour vs. debug environment
  – Lack of tools to identify and rectify race conditions

15
Q

Some System Calls for Unix Processes

A
16
Q

Unix Process Hierarchy

A
  • Every Unix process (except one!) has a parent
  • Processes may create multiple children (via fork())
  • Example: The (traditional) Unix boot procedure
    • A single parentless process (init) runs
    • init reads a file which lists all connected terminals
    • init forks a login process for each terminal
    • If a login process validates a user, it forks a shell process…
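
A minimal sketch (mine, not from the slides) of the fork()/exec()/wait() pattern the boot example relies on: the parent creates a child with fork(), the child runs a different program, and the parent waits for it.

    /* Parent forks a child; child runs another program; parent waits */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create a child process */

        if (pid < 0) {                      /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {              /* child: run a different program */
            execlp("echo", "echo", "hello from the child", (char *)NULL);
            perror("execlp");               /* only reached if exec fails */
            _exit(127);
        } else {                            /* parent: wait for the child */
            int status;
            waitpid(pid, &status, 0);
            printf("parent %d: child %d finished\n", (int)getpid(), (int)pid);
        }
        return 0;
    }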
17
Q

Unix-style Processes

A

Unix processes combine a thread of execution with a ‘private virtual address space’
- Each process is like a ‘virtual machine’: CPU and memory combined in a single abstraction
- User processes are protected from one another
- Good for avoiding corruption and security problems
- Therefore cannot straightforwardly share memory
- Processes are heavyweight and costly to switch: as well as the thread context (PC, SP, flags, etc.), the OS must also switch the address-space (memory-management) state

18
Q

Threads

A

Threads share a common address space
- Can easily cooperate on common tasks
- Lighter-weight switching; supports fine-grained concurrency
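
A pthreads sketch (illustrative) of threads cooperating on shared data in the same address space, this time guarded by a mutex so the shared counter is updated safely, in contrast with the unsynchronized version on the earlier “Risky Concurrency” card. Compile with -pthread.

    /* Threads share the address space; a mutex serializes updates */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                              /* shared by all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);
            counter++;                                    /* one thread at a time */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);               /* always 2000000 */
        return 0;
    }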

19
Q

Threads: User vs Kernel vs. Language

A

User Threads
- Supported in user-level library
- Kernel only knows about processes
- Scheduled by a per-process, user-level scheduler

Kernel threads
- Supported by OS
- Scheduled by OS scheduler
- Threads are preemptible by the OS
- Potentially more concurrent, especially on multicore
- More predictable, which is useful for real-time applications
- OS itself is multithreaded
- Supported in almost all modern OS

Threads in language runtime environments (Java)
- Available only in particular language environments
- Might be implemented either as user or kernel threads

20
Q

User vs. Kernel Threads

A

User thread Advantages
- No OS support required
- Cheap context switch
- Typically an order of magnitude faster than a kernel-thread switch
- (Which itself is an order of magnitude faster than a process switch…)
- Easy to offer per-application scheduling policies

User thread disadvantages
- If a thread makes a system call that blocks, all other threads in that process also block (they all map onto the one underlying process)
- User threads in the same process can’t execute on separate CPUs on a multicore machine – as the OS is not aware of user threads

21
Q

Each Java App. Launches a JVM

A