Ch 5 Flashcards

Flashcards in Ch 5 Deck (54)
1
Q

Little Endian

A

Least significant byte first. High-precision arithmetic on these types of machines is faster & easier.

2
Q

Big Endian

A

Most significant bytes at the lower addresses. More natural to most people & thus makes it easier to read hex dumps. These types of machines store integers & strings in the same order.

3
Q

Stack Architecture

A

Uses a ____ to execute instructions.

The operands are found on top of the stack.

4
Q

Accumulator Architectures

A

One operand is implicitly in the _______. This minimizes the internal complexity of the machine and allows for very short instructions. Because the _______ is only temporary storage, memory traffic is very high.

5
Q

General-Purpose Register Architectures

A

Uses sets of ______-_______ registers

6
Q

Memory-Memory Architectures

A

May have two or three operands in memory, allowing an instruction to perform an operation without requiring any operand to be in a register.

7
Q

Register-Memory Architectures

A

Require a mix, where at least one operand is in a register and one is in memory.

8
Q

Load-Store Architecture

A

Requires data to be moved into registers before any operations on those data are performed.

9
Q

Fixed length

A

Wastes space but is fast and results in better performance when instruction-level pipelining is used.

10
Q

Variable Length

A

More complex to decode but saves storage space.

11
Q

Opcode Only (zero address)

A

Used in architectures based on stacks; these need push and pop instructions.
PUSH X places the data value found at memory location X onto the stack.
POP X removes the top element in the stack and stores it at location X.

12
Q

Reverse Polish Notation (RPN)

A

This representation places the operator after the operands in what is known as postfix notation. Every operator follows its operands in any expression. If the expression contains more than one operation, the operator is given immediately after its second operand.
12 / (4 + 2) = 12 4 2 + /
(2 + 3) - 6 / 3 = 2 3 + 6 3 / -
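
The evaluation rule behind postfix notation can be sketched in a few lines of Python: read tokens left to right, push numbers, and apply each operator to the two values on top of the stack. This is an illustrative sketch, not code from the chapter; the function name `eval_rpn` and the space-separated token format are assumptions.

```python
# Minimal RPN (postfix) evaluator: push numbers, apply operators
# to the top two stack values (the top of stack is the SECOND operand).
def eval_rpn(expr: str) -> float:
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for token in expr.split():
        if token in ops:
            b = stack.pop()  # second operand is on top
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(eval_rpn("12 4 2 + /"))     # 12 / (4 + 2) = 2.0
print(eval_rpn("2 3 + 6 3 / -"))  # (2 + 3) - 6 / 3 = 3.0
```

Note that operand order matters for `-` and `/`: the value popped first is the second operand.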

13
Q

Three Operands Allowed

A

When 3 operands are allowed, at least one operand must be a register, and the first operand is normally the destination.
Given this expression, Z = (X * Y) + (W * U)
We get this code:
Mult R1, X, Y
Mult R2, W, U
Add Z, R2, R1

14
Q

Two Operands Allowed

A

When using two-address instructions, normally one address specifies a register. The other operand could be either a register or a memory address.
Given this expression, Z = (X * Y) + (W * U)
We get this code:
Load R1, X
Mult R1, Y
Load R2, W
Mult R2, U
Add R1, R2
Store Z, R1
*Note that it is important to know whether the first operand is the source or the destination.

15
Q

One Operand Allowed

A

We must assume a register is implied as the destination for the result of the instruction.
Given this expression, Z = (X * Y) + (W * U)
We get this code:
Load X
Mult  Y
Store Temp
Load W
Mult  U
Add  Temp
Store Z

16
Q

Zero Operands Allowed

A

Stack-based architectures use no operands for instructions such as Add, Subt, Mult, or Divide. We need a stack and two operations on that stack: Push and Pop.
Push places the operand on top of the stack.
Pop removes the stack top and places it in the operand.
Given this expression, Z = (X * Y) + (W * U)
We get this code:
Push X
Push Y
Mult
Push W
Push U
Mult
Add
Pop Z
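
The zero-address program above can be traced with a toy stack machine in Python. This is a sketch for illustration only; the sample memory values for X, Y, W, and U are assumed, while the instruction names follow the card.

```python
# Toy zero-address stack machine executing Z = (X * Y) + (W * U).
# Memory values are assumed sample data.
memory = {"X": 3, "Y": 4, "W": 5, "U": 2, "Z": 0}
stack = []

def run(program):
    for instr in program:
        op, *arg = instr.split()
        if op == "Push":
            stack.append(memory[arg[0]])      # memory -> top of stack
        elif op == "Pop":
            memory[arg[0]] = stack.pop()      # top of stack -> memory
        elif op == "Mult":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "Add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)

run(["Push X", "Push Y", "Mult",
     "Push W", "Push U", "Mult",
     "Add", "Pop Z"])
print(memory["Z"])  # (3 * 4) + (5 * 2) = 22
```

After the final Pop, the stack is empty and the result lives in Z.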

17
Q

Expanding Opcodes

A

Represent a compromise between the need for a rich set of opcodes and the desire to have short opcodes.

18
Q

Data Movement

A

These instructions are the most frequently used instructions. Data is moved from memory into registers, from registers to registers, and from registers to memory, and many machines provide different instructions depending on the source and destination.

19
Q

Arithmetic Operations

A

These include those instructions that use integers and floating point numbers. Many instruction sets provide different arithmetic instructions for various data sizes.

20
Q

Boolean Logic Instructions

A

These instructions perform ______ operations, much in the same way that arithmetic operations work. These instructions allow bits to be set, cleared, and complemented. Commonly used to control I/O devices.

21
Q

Bit Manipulation Instructions

A

Used for setting and resetting individual bits within a given data word. Includes both arithmetic and logical SHIFT instructions and ROTATE instructions, each to the left and to the right.
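
The difference between a logical shift, an arithmetic shift, and a rotate can be sketched on an 8-bit word. This is an assumed illustration in Python (a real ISA does this in hardware); the helper names are made up for the example.

```python
# SHIFT and ROTATE on an 8-bit word. A logical right shift fills
# with zeros; an arithmetic right shift preserves the sign bit;
# a rotate wraps the bit shifted out back around.
WIDTH = 8
MASK = (1 << WIDTH) - 1

def shl(x):
    return (x << 1) & MASK            # shift left, drop the carry-out

def shr_logical(x):
    return x >> 1                     # zero enters from the left

def shr_arith(x):
    sign = x & (1 << (WIDTH - 1))     # copy the sign bit back in
    return (x >> 1) | sign

def rol(x):
    return ((x << 1) | (x >> (WIDTH - 1))) & MASK  # top bit wraps to bit 0

x = 0b10010110
print(f"{shr_logical(x):08b}")  # 01001011
print(f"{shr_arith(x):08b}")    # 11001011
print(f"{rol(x):08b}")          # 00101101
```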

22
Q

Input/Output Instructions

A

Vary greatly from architecture to architecture. The input instruction transfers data from a device or port to either memory or a specific register. The output instruction transfers data from a register or memory to a specific port or device.

23
Q

Transfer of Control

A

Control instructions are used to alter the normal sequence of program execution. These instructions include branches, skips, procedure calls, returns, and program termination.

24
Q

Special Purpose Instructions

A

Special purpose instructions include those used for string processing, high-level language support, protection, flag control, word/byte conversions, cache management, register access, address calculation, no-ops, and any other instructions that don’t fit into the previous categories.

25
Q

Orthogonality

A

Each instruction should perform a unique function without duplicating any other instruction. Not only must the instructions be independent, but the instruction set must be consistent.

26
Q

Addressing Modes

A

Allow us to specify where the instruction operands are located. Can specify a constant, a register, or a location in memory.

27
Q

Effective Address

A

Location of the actual operand.

28
Q

Immediate Addressing

A

The value to be referenced immediately follows the operation code in the instruction. The data to be operated on is part of the instruction. Very fast because the value to be loaded is included in the instruction.

29
Q

Direct Addressing

A

The value to be referenced is obtained by specifying its memory address directly in the instruction. Typically quite fast because, although the value to be loaded is not included in the instruction, it is quickly accessible.

30
Q

Register Addressing

A

A register, instead of memory, is used to specify the operand. Very similar to direct addressing, except that instead of a memory address, the address field contains a register reference.

31
Q

Indirect Addressing

A

The bits in the address field specify a memory address that is to be used as a pointer. The effective address of the operand is found by going to this memory address.

32
Q

Register Indirect Addressing

A

The operand bits specify a register instead of a memory address.

33
Q

Indexed Addressing

A

An index register is used to store an offset, which is added to the operand, resulting in the effective address of the data.
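
The modes in the last few cards can be contrasted with a small Python sketch that models memory as a dictionary. All addresses, memory contents, and register values here are assumed sample data chosen to make each lookup visible.

```python
# How different addressing modes resolve the same instruction field.
# memory maps addresses to contents (sample data).
memory = {0x800: 0x900, 0x900: 42, 0x905: 7}
R1 = 0x900      # register holding an address
INDEX = 5       # offset held in an index register

# Immediate: the instruction field IS the operand.
immediate = 0x800

# Direct: the field is the operand's address.
direct = memory[0x800]                 # -> 0x900

# Indirect: the field holds a pointer; follow memory twice.
indirect = memory[memory[0x800]]       # memory[0x900] -> 42

# Register indirect: the register holds the operand's address.
reg_indirect = memory[R1]              # -> 42

# Indexed: effective address = field + index register.
indexed = memory[0x900 + INDEX]        # memory[0x905] -> 7

print(immediate, direct, indirect, reg_indirect, indexed)
```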

34
Q

Based Addressing

A

A base address, rather than an index register, is used. A base register holds a base address, where the address field represents a displacement from this base.

35
Q

Stack Addressing Mode

A

The operand is assumed to be on the stack.

36
Q

Indirect Indexed Addressing

A

Uses both indirect and indexed addressing at the same time.

37
Q

Base/Offset Addressing

A

Adds an offset to a specific base register and then adds this to the specified operand.

38
Q

Auto-Increment and Auto-Decrement

A

Automatically increments or decrements the register used, thus reducing the code size.

39
Q

Self-Relative Addressing

A

Computes the address of the operand as an offset from the current instruction.

40
Q

Pipelining

A

Some CPUs break the fetch-decode-execute cycle down into smaller steps, where some of these smaller steps can be performed in parallel. This overlapping speeds up execution.

41
Q

Instruction Level Parallelism (ILP)

A

Involves the use of techniques to allow the execution of overlapping instructions. Essentially, we want to allow more than one instruction within a single program to execute concurrently.

42
Q

Pipeline Stage

A

Different steps are completing different parts of different instructions in parallel.

43
Q

Formula for Calculating Pipeline Time

A

To complete n tasks using a k-stage pipeline requires:

(k * tp) + (n - 1)tp = (k + n - 1)tp
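
Plugging numbers into the formula makes the overlap visible: the first task occupies all k stages, and every later task finishes one stage-time after the previous one. The stage count, task count, and stage time below are assumed sample values.

```python
# Time for n tasks on a k-stage pipeline, where tp is the time
# for one pipeline stage: (k + n - 1) * tp.
def pipeline_time(k, n, tp):
    # first task takes k * tp; each of the remaining n - 1 tasks
    # completes one tp after the previous one
    return (k + n - 1) * tp

print(pipeline_time(5, 100, 2))  # (5 + 100 - 1) * 2 = 208
```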

44
Q

SpeedUp Formula

A

Speedup S = (ntn) / [(k + n - 1)tp]

45
Q

Theoretical Speedup Formula

A

Speedup = (k * tp) / tp = k
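
The two speedup formulas connect as follows: if the non-pipelined time per task is tn = k * tp, then as n grows the (k - 1) term in the denominator becomes negligible and S approaches k. The numbers below are assumed sample values.

```python
# Speedup S = (n * tn) / ((k + n - 1) * tp), assuming the
# non-pipelined time per task is tn = k * tp.
def speedup(k, n, tp):
    tn = k * tp
    return (n * tn) / ((k + n - 1) * tp)

print(speedup(5, 100, 2))    # about 4.81
print(speedup(5, 10000, 2))  # just under 5, approaching the limit k
```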

46
Q

Resource Conflicts (Structural Hazards)

A

If one instruction is storing a value to memory while another is being fetched from memory, both need access to memory. Typically this is resolved by allowing the instruction executing to continue, while forcing the instruction fetch to wait.

47
Q

Data Dependencies

A

Arise when the result of one instruction, not yet available, is to be used as an operand to a following instruction.

48
Q

Branch Prediction

A

Using logic to make the best guess as to which instructions will be needed next (essentially, they are predicting the outcome of a conditional branch).

49
Q

Delayed Branch

A

Rearranging the machine code. Reorder and insert useful instructions.

50
Q

Explicitly Parallel Instruction Computers (EPIC)

A

Have very large instructions which specify several operations to be done in parallel. Heavily compiler dependent (which means a user needs a sophisticated compiler to take advantage of the parallelism to gain significant performance advantages).

51
Q

Program-Level Parallelism (PLP)

A

Actually allows parts of a program to run on more than one computer.

52
Q

Superscalar Architectures

A

Perform multiple operations at the same time by employing parallel pipelines.

53
Q

Superpipelining Architectures

A

Combine superscalar concepts with pipelining, by dividing the pipeline stages into smaller pieces.

54
Q

VLIW Architecture

A

Each instruction can specify multiple scalar operations (the compiler puts multiple operations into a single instruction).