Midterm Flashcards

(90 cards)

1
Q

Serial

A

Runs one program or process at a time

2
Q

Are CPU cores serial or parallel?

A

Serial

3
Q

Von Neumann Architecture

A

CPU and main memory connected through an interconnect

4
Q

What is main memory?

A

Collection of locations and their contents

5
Q

Parts of CPU

A

Control Unit and ALU

6
Q

Arithmetic and Logic Unit (ALU) purpose

A

Executes actual instructions (worker)

7
Q

Control Unit Purpose

A

Decides which instruction within a program should be executed (boss)

8
Q

Von Neumann Bottleneck Commands

A

Write/Store and Fetch/Read

9
Q

Process

A

Instance of computer program being executed

10
Q

Process Control Block (PCB)

A

Data structure where the information the OS uses to manage a process is stored

11
Q

What is multitasking?

A

Illusion of running multiple programs at the same time

12
Q

Threading

A

Used to divide up work into independent tasks

13
Q

Caching

A

Placing small fast memory near the CPU and trying to keep data we will need in there as much as possible

14
Q

What helps mitigate the Von Neumann bottleneck?

A

Caching

15
Q

Principle of locality

A

The tendency of programs to access data that was recently accessed (temporal) or is near recently accessed data (spatial); the cache exploits this to keep what we need

16
Q

Temporal Locality

A

Recently accessed data is likely to be accessed again soon, so it is kept in the cache

17
Q

Spatial Locality

A

Data near recently accessed data is likely to be accessed soon, so nearby data is also brought into the cache
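For illustration, a minimal C sketch (array size and names assumed) of a loop order that exploits spatial locality by walking a row-major array along its rows:

    #include <stdio.h>

    #define N 1000

    int main(void) {
        static double a[N][N];
        double sum = 0.0;
        /* Row-major traversal: consecutive iterations touch adjacent memory,
           so each cache line that is fetched gets fully used (spatial locality).
           Swapping the loops would jump N*8 bytes per iteration and waste
           most of every cache line. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        printf("%f\n", sum);
        return 0;
    }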

18
Q

Von Neumann Bottleneck

A

The separation of CPU and memory; the interconnect limits how fast instructions and data can be moved, leaving the CPU waiting

19
Q

Cache Direct Mapping

A

Each memory location maps to exactly one location in the cache
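For example (sizes assumed for illustration): in a direct-mapped cache with 256 lines, the memory block at address A always goes to line (A / block_size) mod 256, so two blocks whose addresses differ by a multiple of 256 * block_size evict each other.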

20
Q

Fully Associative Cache Mapping

A

Data can be placed anywhere in the cache

21
Q

Set Associative Cache Mapping

A

Each data element has a set of locations it can be placed in

22
Q

Ways for the cache to handle discrepancies with main memory

A

Write through and write back

23
Q

Write through

A

The cache updates main memory whenever data is written to the cache

24
Q

Write back

A

cache marks the modified data as dirty and updates the main memory only when the cache line is replaced or evicted

25
Virtual Memory
Main memory acts as a cache for secondary storage; only the active parts of a program are kept in main memory
26
Demand Paging
How virtual memory is implemented: logical memory is divided into pages, which are mapped on demand to frames of physical memory
27
Page-Table
Data structure that stores the mapping of virtual pages to physical memory frames
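As an illustration of what the page table stores, a minimal C sketch of address translation (the 4 KiB page size, the names, and the flat-array table are simplifying assumptions):

    #define PAGE_SIZE 4096UL    /* assumed 4 KiB pages */

    /* Translate a virtual address using a hypothetical flat page table that
       maps each virtual page number to a physical frame number. */
    unsigned long translate(unsigned long vaddr, const unsigned long *page_table) {
        unsigned long vpn    = vaddr / PAGE_SIZE;   /* virtual page number */
        unsigned long offset = vaddr % PAGE_SIZE;   /* offset within the page */
        unsigned long frame  = page_table[vpn];     /* physical frame number */
        return frame * PAGE_SIZE + offset;          /* physical address */
    }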
28
Virtual Page Number
When a program is compiled, each of its pages is assigned a virtual page number
29
Translation lookaside buffer
Special cache-like memory that stores a small number of page table entries
30
Page Fault
Attempting to access a page that has a valid page table entry but currently resides only on disk, not in main memory
31
Instruction Level Parallelism
Attempts to improve processor performance by having multiple functional units within the processor work simultaneously
32
Pipelining
Functional units are arranged in stages (like an assembly line); total time is roughly the time for the first item to pass through all stages plus (slowest stage time * number of remaining items)
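Worked example (numbers assumed): with 6 stages of 1 ns each and 1000 operations, the pipelined time is roughly 6 ns for the first operation plus 1 ns x 999 for the rest, about 1005 ns, versus about 6000 ns without pipelining.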
33
Multiple Issue
Ability to initiate more than one instruction at a time
34
Speculation
Guessing the outcome of an instruction (e.g., a branch) and executing subsequent instructions based on the guess; used to support multiple issue
35
Hardware Multithreading
Allows the system to continue doing useful work while the task being executed stalls; fine grained and coarse grained
36
Fine grained hardware multithreading
Processor switches between threads after each instruction skipping stalled threads
37
Coarse grained hardware multithreading
Processor only switches threads when the current thread stalls waiting for a time-consuming operation to complete
38
Task Parallelism
Partition the various tasks carried out in solving the problem among the cores
39
Data parallelism
Partition the data used in solving the problem among the cores. Each core carries out similar operations on its part of the data
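For example, to sum n values on t cores, core q could handle elements q*n/t through (q+1)*n/t - 1 and compute a partial sum; the t partial sums are then combined (see the Communication and Synchronization cards below).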
40
Communication
One or more cores send their partial results (e.g., partial sums) to another core
41
Synchronization
Making sure no core gets too far ahead of the rest
41
Load balancing
Share the work evenly among the cores so that no one core is much more heavily loaded than the others
42
Cache Levels
The closer a level is to the CPU, the smaller and faster it is; the hierarchy ends with main (physical) memory
43
Flynn's Taxonomy
SISD, SIMD, MISD, MIMD, where S/M is single/multiple, I is instruction stream, and D is data stream
44
SISD
Essentially Von Neumann, single instruction single data stream
45
SIMD Parallelism Achieved?
Divides data between processors (data parallelism)
46
Drawbacks of SIMD
ALUs sit idle if they are not all performing the same instruction, all must operate synchronously, and it is poorly suited to complex problems with irregular data
47
Vector Processors
Operate on an array of data compared to conventional CPUs which operate on individual data elements or scalars
48
Vector Registers
Registers capable of storing a vector of operands and operating on their contents simultaneously
49
Hardware scatter/gather
Hardware support for reading (gather) and writing (scatter) elements of a vector stored at irregular intervals
50
Graphics Processing Unit
Uses points, lines, and triangles to internally represent the surface of an object
51
MIMD
A collection of fully independent processing units or cores; supports parallel applications
52
Parallel Applications
Programs whose processes/threads can do different things to different pieces of data
53
Shared Memory
Cores share access to the computer's memory
54
Distributed Memory
Each core has its own private memory and must send messages across a network to communicate
55
Uniform Memory Access (UMA)
Time to access memory is the same for all cores
56
Non-Uniform Memory Access (NUMA)
Memory locations are directly connected to certain CPUs and can therefore be accessed faster by some cores than others
57
Cluster
A collection of commodity systems connected by a commodity interconnection network
58
Bus Interconnect
Collection of parallel wires together with access control hardware
59
Direct interconnects
Each switch is connected to a processor-memory pair, and the switches are connected to each other
60
Fully connected network n
n = ((P-1)*P)/2
61
Toroidal Mesh n
n = 2p
62
Ring n
n = p
63
Bisection width
Minimum number of connections that must be broken to divide the system into two halves
64
Fully connected b
p^2/4
65
ring b
2
66
Toroidal Mesh b
2*sqrt(p)
67
Hypercube n
n = (p*log2(p))/2
68
Hypercube b
p/2
69
Crossbar n
p^2
70
Crossbar b
b = p
71
Omega Network n
2*p*log2(p)
72
Omega Network b
p/2
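Quick sanity check of these formulas with p = 16 (a size assumed for illustration): a ring has 16 links and bisection width 2; a toroidal mesh has 32 links and bisection width 8; a hypercube has 32 links and bisection width 8; a fully connected network has 120 links and bisection width 64; a crossbar has bisection width 16 and an omega network has bisection width 8.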
73
Latency
Time that elapses between the source beginning to transmit the data and the destination starting to receive the first byte
74
Bandwidth
Rate at which destination receives data after it has started to receive the first byte
75
Message Transmission Time
latency + (message length / bandwidth)
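Worked example (numbers assumed): with a latency of 2 ms and a bandwidth of 10 MB/s, a 20 MB message takes about 0.002 s + 20/10 s = 2.002 s.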
76
Single Program Multiple Data (SPMD)
A single program that can behave like multiple different programs through conditional branches (e.g., on the thread/process rank)
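A minimal SPMD sketch in C using MPI (the coordinator/worker split is just an illustrative branch):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);            /* every process runs this same program */
        if (rank == 0)
            printf("rank 0: acting as coordinator\n");   /* one branch of the single program */
        else
            printf("rank %d: acting as worker\n", rank); /* a different branch, same executable */
        MPI_Finalize();
        return 0;
    }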
77
Race Condition
Situation where behavior is determined by the order of execution, which we cannot control and may change from execution to execution
78
Critical Sections
Sections of code that must be executed in their entirety by a single process/thread before another is allowed to enter
79
Busy-Waiting
Repeatedly test a condition in a loop (spin) until allowed to enter the critical section
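A minimal busy-waiting sketch in C (names such as flag and my_rank are illustrative; a real version also has to prevent the compiler and hardware from reordering these accesses):

    int flag = 0;        /* whose turn it is to enter the critical section */
    double sum = 0.0;    /* shared variable updated inside the critical section */

    void add_partial(int my_rank, double my_partial) {
        while (flag != my_rank)
            ;                    /* busy-wait: spin until it is this thread's turn */
        sum += my_partial;       /* critical section */
        flag++;                  /* allow the next thread to enter */
    }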
80
Message-Passing
Send messages back and forth between threads/processes to communicate entry/exit from critical sections
81
Deadlock
Two or more threads/processes block forever, each waiting for a resource held by another, so the program ceases to make progress
82
Deadlock Causes
mutual exclusion, hold and wait, non-preemptible resources, circular wait
83
Foster's Methodology
Partitioning, communication, agglomeration (or aggregation), mapping
84
Partitioning
Divide the computations to be performed and the data they operate on into small tasks, focusing on identifying potential parallelism
85
Communication
Determine what communication needs to be carried out among partitioned tasks
86
Agglomeration or Aggregation
Combine the tasks and communications identified in partitioning into larger tasks; e.g., two tasks that must happen in a specific order can be combined
87
Mapping
Assign the aggregated tasks to processes/threads, aiming to minimize communication and balance the load
88
Amdahl's Law
S = 1 / ((1 - p) + p/n), where p is the parallelizable fraction of the program and n is the number of cores
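Worked example (numbers assumed): if p = 0.9 of a program can be parallelized and n = 8 cores are used, S = 1 / (0.1 + 0.9/8) ≈ 4.7, so even 8 cores give less than a 5x speedup.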
89