Chapter 6 - Memory Hierarchy Flashcards

1
Q

the locality principle stating that if a data location is referenced then it will tend to be referenced again soon

A

Temporal locality

2
Q

the locality principle stating that if a data location is referenced, data locations with nearby addresses will tend to be referenced soon

A

Spatial locality

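A minimal C sketch (not part of the deck; the array and function names are illustrative) showing both principles in one loop: the running sum and loop counter are reused on every iteration (temporal locality), while the array elements are touched at consecutive addresses (spatial locality).

/* Summing an array exhibits both kinds of locality. */
double sum_array(const double *a, int n) {
    double sum = 0.0;        /* temporal locality: sum and i are reused every iteration */
    for (int i = 0; i < n; i++)
        sum += a[i];         /* spatial locality: a[0], a[1], ... are adjacent in memory */
    return sum;
}
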
3
Q

a structure that uses multiple levels of memories; as the distance from the processor increases, the size of the memories and the access time both increase

A

Memory hierarchy

4
Q

the minimum unit of information that can be either present or not present in a cache

A

Block (or line):

5
Q

the fraction of memory accesses found in a level of the memory hierarchy

A

Hit rate

6
Q

the fraction of memory accesses not found in a level of the memory hierarchy

A

Miss rate

7
Q

the time required to access a level of the memory hierarchy, including the time needed to determine whether the access is a hit or miss

A

Hit time

8
Q

the time required to fetch a block into a level of the memory hierarchy from the lower level, including the time to access the block, transmit it from one level to the other, insert it in the level that experienced the miss, and then pass the block to the requestor

A

Miss penalty

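A hedged worked example linking hit time, miss rate, and miss penalty (the numbers are assumptions, not from the deck): average memory access time = hit time + miss rate × miss penalty. With a 1-cycle hit time, a 5% miss rate, and a 100-cycle miss penalty, the average is 1 + 0.05 × 100 = 6 cycles per access.
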
9
Q

one of thousands of concentric circles that make up the surface of a magnetic disk

A

Track

10
Q

one of the segments that make up a track on a magnetic disk; the smallest amount of information that is read or written on a disk

A

Sector

11
Q

the process of positioning a read/write head over the proper track on a disk

A

Seek

12
Q

also called rotational delay. The time required for the desired sector of a disk to rotate under the read/write head; usually assumed to be half the rotation time

A

Rotational latency

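A hedged worked example with assumed numbers: disk access time ≈ seek time + rotational latency + transfer time. A 7200 RPM disk takes 60/7200 s ≈ 8.33 ms per revolution, so the average rotational latency (half a rotation) is about 4.2 ms; adding an assumed 5 ms average seek and 0.1 ms transfer gives roughly 9.3 ms per access.
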
13
Q

a cache structure in which each memory location is mapped to exactly one location in the cache

A

Direct-mapped cache

14
Q

a field in a table used for a memory hierarchy that contains the address information required to identify whether the associated block in the hierarchy corresponds to a requested word

A

Tag

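A minimal C sketch tying the direct-mapped cache and tag cards together; the cache geometry (1024 blocks of 16 bytes) and 32-bit addresses are assumptions for illustration. The low bits select a byte within the block, the next bits index the one possible cache location, and the remaining upper bits form the tag compared on every lookup.

#include <stdint.h>

#define BLOCK_BITS 4    /* assumed 16-byte blocks -> 4 offset bits */
#define INDEX_BITS 10   /* assumed 1024 blocks    -> 10 index bits */

/* Which cache entry a 32-bit address maps to in a direct-mapped cache. */
static inline uint32_t cache_index(uint32_t addr) {
    return (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
}

/* The tag stored with the block and compared on each access. */
static inline uint32_t cache_tag(uint32_t addr) {
    return addr >> (BLOCK_BITS + INDEX_BITS);
}
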
15
Q

a field in the tables of a memory hierarchy that indicates that the associated block in the hierarchy contains valid data.

A

Valid bit

16
Q

a request for data from the cache that cannot be filled because the data are not in the cache

A

Cache miss

17
Q

a scheme in which writes always update both the cache and the next lower level of the memory hierarchy, ensuring that data are always consistent between the two

A

Write through

18
Q

a queue that holds data while the data are waiting to be written to memory

A

Write buffer

19
Q

a scheme that handles writes by updating values only to the block in the cache, then writing the modified block to the lower level of the hierarchy when the block is replaced

A

Write-back

20
Q

a scheme in which a level of the memory hierarchy is composed of two independent caches that operate in parallel with each other, with one handling instructions and one handling data

A

Split cache

21
Q

a cache structure in which a block can be placed in any location in the cache

A

Fully associative cache

22
Q

a cache that has a fixed number of locations (at least two) where each block can be placed

A

Set-associative cache

23
Q

a replacement scheme in which the block replaced is the one that has been unused for the longest time

A

Least recently used (LRU)

24
Q

a memory hierarchy with multiple levels of caches, rather than just a cache and main memory

A

Multilevel cache

25
Q

the fraction of references that miss in all levels of a multilevel cache

A

Global miss rate

26
Q

the fraction of references to one level of a cache that miss, used in multilevel hierarchies

A

Local miss rate

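A hedged worked example with assumed counts: if 1000 references produce 50 L1 misses, and the L2 cache misses on 20 of those 50 accesses, the L2 local miss rate is 20/50 = 40%, while the global miss rate is 20/1000 = 2%.
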
27
Q

A code that enables the detection of an error in data, but not the precise location and, hence, correction of the error.

A

Error detection code

28
Q

a technique that uses memory as a “cache” for secondary storage

A

Virtual memory

29
Q

an address in main memory

A

Physical address

30
Q

a set of mechanisms for ensuring that multiple processes sharing the processor, memory, or I/O devices cannot interfere, intentionally or unintentionally, with one another by reading or writing each other’s data. The mechanisms also isolate the operating system from a user process.

A

Protection

31
Q

an event that occurs when an accessed page is not present in main memory

A

Page fault

32
Q

an address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed

A

Virtual address

33
Q

also called address mapping. The process by which a virtual address is mapped to an address used to access memory

A

Address translation

34
Q

a variable-size address mapping scheme in which an address consists of two parts: a segment number, which is mapped to a physical address, and a segment offset

A

Segmentation

35
Q

the table containing the virtual to physical address translations in a virtual memory system. The table, which is stored in memory, is typically indexed by the virtual page number; each entry in the table contains the physical page number for that virtual page if the page is currently in memory.

A

Page table
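
A minimal C sketch of the address translation a page table supports; the flat one-level table, 4 KiB pages, and field names are assumptions (real systems add a valid bit per entry, protection bits, multi-level tables, and a TLB in front of this lookup).

#include <stdint.h>

#define PAGE_BITS 12u    /* assumed 4 KiB pages -> 12 offset bits */

/* Translate a virtual address using a flat page table indexed by
   virtual page number; each entry holds the physical page number. */
uint64_t translate(const uint64_t *page_table, uint64_t vaddr) {
    uint64_t vpn    = vaddr >> PAGE_BITS;               /* virtual page number  */
    uint64_t offset = vaddr & ((1u << PAGE_BITS) - 1);  /* unchanged low bits   */
    uint64_t ppn    = page_table[vpn];                  /* physical page number */
    return (ppn << PAGE_BITS) | offset;
}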

36
Q

the space on the disk reserved for the full virtual memory space of a process

A

Swap space

37
Q

also called use bit or access bit. A field that is set whenever a page is accessed and that is used to implement LRU or other replacement schemes

A

Reference bit

38
Q

a cache that keeps track of recently used address mappings to try to avoid an access to the page table

A

Translation-lookaside buffer (TLB)

39
Q

a cache that is accessed with a virtual address rather than a physical address

A

Virtually addressed cache

40
Q

a situation in which two addresses access the same object; it can occur in virtual memory when there are two virtual addresses for the same physical page

A

Aliasing

41
Q

a cache that is addressed by a physical address

A

Physically addressed cache

42
Q

also called kernel mode. A mode indicating that a running process is an operating system process

A

Supervisor mode

43
Q

a special instruction that transfers control from user mode to a dedicated location in supervisor code space, invoking the exception mechanism in the process

A

System call

44
Q

a changing of the internal state of the processor to allow a different process to use the processor; it includes saving the state needed to return to the currently executing process

A

Context switch

45
Q

also called interrupt enable. A signal or action that controls whether the processor responds to an exception or not; necessary for preventing the occurrence of exceptions during intervals before the processor has safely saved the state needed to restart

A

Exception enable

46
Q

an instruction that can resume execution after an exception is resolved without the exception’s affecting the result of the instruction

A

Restartable instruction

47
Q

increasing utilization of a processor by switching to another thread when one thread is stalled.

A

Hardware multithreading

48
Q

includes the program counter, the register state, and the stack. It is a lightweight process; whereas threads commonly share a single address space, processes don’t

A

Thread

49
Q

includes one or more threads, the address space, and the operating system state.

A

Process

50
Q

a version of hardware multithreading that implies switching between threads after every instruction

A

Fine-grained multithreading

51
Q

a version of hardware multithreading that implies switching between threads only after significant events, such as a last-level cache miss.

A

Coarse-grained multithreading

52
Q

a version of multithreading that lowers the cost of multithreading by utilizing the resources needed for a multiple-issue, dynamically scheduled microarchitecture

A

Simultaneous multithreading (SMT)

53
Q

a multiprocessor in which latency to any word in main memory is about the same no matter which processor requests the access

A

Uniform memory access (UMA)

54
Q

a type of single address space multiprocessor in which some memory accesses are much faster than others depending on which processor asks for which word

A

Nonuniform memory access (NUMA)

55
Q

the process of coordinating the behavior of two or more processes, which may be running on different processors.

A

Synchronization

56
Q

a synchronization device that allows access to data to only one processor at a time

A

Lock
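
A minimal C11 spin-lock sketch, one simple way to build a lock (the deck does not prescribe an implementation): atomic_flag_test_and_set atomically reads the old value and sets the flag, so only one thread at a time observes it as previously clear.

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;                              /* spin until the flag was previously clear */
}

void release(void) {
    atomic_flag_clear(&lock_flag);     /* let the next waiting thread acquire it */
}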

57
Q

a function that processes a data structure and returns a single value

A

Reduction

58
Q

an API for shared-memory multiprocessing in C, C++, or Fortran that runs on UNIX and Microsoft platforms. It includes compiler directives, a library, and runtime directives

A

OpenMP
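
A small illustrative OpenMP fragment in C (compiled with an OpenMP flag such as -fopenmp): the directive parallelizes the loop, and the reduction clause gives each thread a private copy of sum that is combined into one value at the end, matching the previous card's notion of a reduction.

double dot(const double *x, const double *y, int n) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)   /* compiler directive from the OpenMP API */
    for (int i = 0; i < n; i++)
        sum += x[i] * y[i];
    return sum;                                 /* single combined result */
}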

59
Q

Communication between multiple processors by explicitly sending and receiving information

A

Message passing

60
Q

a routine used by a processor in machines with private memories to pass a message to another processor

A

Send message routine

61
Q

a routine used by a processor in machines with private memories to accept a message from another processor

A

Receive message routine
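
A minimal sketch using MPI, one widely used message-passing library (the deck does not name a specific API): rank 0 explicitly sends one integer, and rank 1 blocks in the receive routine until the matching message arrives.

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* send message routine    */
    else if (rank == 1)
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* receive message routine */
    MPI_Finalize();
    return 0;
}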

62
Q

a collection of computers connected via I/O over standard network switches to form a message-passing multiprocessor

A

Clusters

63
Q

rather than selling software that is installed and run on customers’ own computers, software is run at a remote site and made available over the Internet to customers, typically via a Web interface; customers are charged based on use rather than ownership

A

Software as a service (SaaS)

64
Q

informally, the peak transfer rate of a network; can refer to the speed of a single link or the collective transfer rate of all links in the network

A

Network bandwidth

65
Q

the bandwidth between two equal parts of a multiprocessor. This measure is for a worst-case split of the multiprocessor

A

Bisection bandwidth

66
Q

a network that connects processor-memory nodes by supplying a dedicated communication link between every pair of nodes

A

Fully connected network

67
Q

a network that supplies a small switch at each node.

A

Multistage network

68
Q

a network that allows any node to communicate with any other node in one pass through the network

A

Crossbar network

69
Q

an I/O scheme in which portions of the address space are assigned to I/O devices, and reads and writes to those addresses are interpreted as commands to the I/O device.

A

Memory-mapped I/O
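
A minimal C sketch of memory-mapped I/O; the device, register addresses, and status bit below are hypothetical. Ordinary loads and stores to these addresses are interpreted as device commands, and volatile keeps the compiler from removing or reordering them (the busy-wait loop is the polling idea from a later card).

#include <stdint.h>

#define UART_STATUS ((volatile uint32_t *)0x10000000u)   /* assumed register address */
#define UART_DATA   ((volatile uint32_t *)0x10000004u)   /* assumed register address */
#define TX_READY    0x1u                                 /* assumed status bit       */

void uart_putc(char c) {
    while ((*UART_STATUS & TX_READY) == 0)   /* poll the status register */
        ;
    *UART_DATA = (uint32_t)c;                /* the store acts as a command to the device */
}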

70
Q

a mechanism that provides a device controller with the ability to transfer data directly to or from the memory without involving the processor

A

Direct memory access (DMA)

71
Q

an I/O scheme that employs interrupts to indicate to the processor that an I/O device needs attention

A

Interrupt-driven I/O

72
Q

a program that controls an I/O device that is attached to the computer

A

Device driver

73
Q

the process of periodically checking the status of an I/O device to determine the need to service the device

A

Polling

74
Q

an organization of disks that uses an array of small and inexpensive disks so as to increase both performance and reliability

A

Redundant arrays of inexpensive disks (RAID)

75
Q

no redundancy, widely used

A

RAID 0

76
Q

mirroring; used by EMC, HP (Tandem), and IBM

A

RAID 1

77
Q

error detecting and correcting code; unused

A

RAID 2

78
Q

bit-interleaved parity; used by Storage Concepts

A

RAID 3

79
Q

block-interleaved parity; used by Network Appliance

A

RAID 4

80
Q

distributed block-interleaved parity; widely used

A

RAID 5

81
Q

P+Q redundancy, recently popular

A

RAID 6