1.1.1 Flashcards
The characteristics of contemporary processors (28 cards)
What does the ALU do?
The ALU performs all arithmetic and logical operations. These operations form the basis of all computer programs and are used for performing calculations and for comparison of values.
The ALU also enables branching: the results of its comparisons (recorded as flags) can be used to change the flow of the program.
The ALU performs the following calculations:
- Addition
- Subtraction
All other calculations are derived from these two calculations.
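A minimal sketch (not part of the original cards) of how other operations can be built from addition: multiplication by repeated addition, and subtraction by adding the two's complement. The function names and 8-bit width are illustrative.

# Illustrative only: deriving other operations from addition.
def multiply(a, b):
    """Multiply two non-negative integers using repeated addition."""
    result = 0
    for _ in range(b):
        result = result + a
    return result

def subtract(a, b, bits=8):
    """Subtract by adding the two's complement of b (8-bit example)."""
    mask = (1 << bits) - 1
    twos_complement = ((~b) + 1) & mask   # negate b within 8 bits
    return (a + twos_complement) & mask   # add, discard the carry out

print(multiply(6, 7))    # 42
print(subtract(10, 3))   # 7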
What do the control unit and clock do?
The control unit manages the movement of data to and from the ALU and the registers.
Timing signals are sent from the clock and these enable the control unit to maintain the sequence of instructions.
What do general purpose registers do?
General purpose registers are used to temporarily store data and act in the same way as variables in a high-level program.
Types of special purpose registers
- Program Counter (PC)
- Current Instruction Register (CIR)
- Memory Address Register (MAR)
- Memory Data Register (MDR)
- Status Register (SR)
- Accumulator (ACC)
What does the program counter do?
Stores the address of the next instruction to be executed.
What does the current instruction register do?
Contains the current instruction being executed, divided into operand and opcode.
What does the memory address register do?
Holds the address of the memory location from which data or an instruction is to be fetched, or to which data is to be written.
What does the memory data register do?
Used to temporarily store the instruction and/or data read from or written to memory. The MDR is sometimes known as the memory buffer register (MBR) as it acts like a buffer, temporarily storing data before passing it on.
What does the status register do?
Contains bits that are set/reset after instructions are executed, e.g. carry, overflow, zero, negative flags.
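As a rough illustration, here is how those flags might be set after an 8-bit addition; the register width and flag names are illustrative, not taken from a specific processor.

def add_with_flags(a, b, bits=8):
    """Add two values and return the result plus example status flags."""
    mask = (1 << bits) - 1
    sign_bit = 1 << (bits - 1)
    raw = (a & mask) + (b & mask)
    result = raw & mask
    flags = {
        "carry": raw > mask,                    # carry out of the top bit
        "zero": result == 0,                    # result is zero
        "negative": bool(result & sign_bit),    # top (sign) bit is set
        # signed overflow: both operands share a sign the result does not
        "overflow": ((a ^ result) & (b ^ result) & sign_bit) != 0,
    }
    return result, flags

print(add_with_flags(0x7F, 0x01))   # 0x80 with overflow and negative set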
What does the accumulator do?
A register for short-term, intermediate storage of arithmetic and logic data in a computer’s CPU. This register is therefore associated with the ALU
What does the data bus do?
This bus is responsible for transferring data from one component to another.
The width of the data bus is the amount of data that can be transmitted along it at once, and this determines the largest number that the processor can transfer or manipulate in a single operation.
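A quick illustration of the point above, assuming an n-bit bus carries one n-bit value per transfer (the widths shown are just examples):

# Largest unsigned value that fits in n bits is 2**n - 1.
for width in (8, 16, 32, 64):
    print(f"{width}-bit: largest value {2**width - 1}")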
What does the address bus do?
This is a bus that specifies physical addresses of memory locations.
When a memory location needs to be read or written to, the location is specified on the address bus.
The width of the address bus determines the largest memory address that can be used, e.g. a 32-bit address bus can address far less memory than a 64-bit one.
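A rough worked example of the 32-bit vs 64-bit comparison, assuming byte-addressable memory (one address per byte):

# An n-bit address bus can identify 2**n distinct locations.
for width in (32, 64):
    locations = 2 ** width
    print(f"{width}-bit address bus: {locations} locations "
          f"(about {locations / 2**30:,.0f} GiB if each holds one byte)")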
What does the control bus do?
The control bus carries control and timing signals from the control unit to the other components of the CPU and to the rest of the system, e.g. signals indicating whether memory is to be read from or written to.
Features of Von Neumann architecture
- A single Arithmetic and Logic Unit (ALU)
- A single Control Unit (CU)
- Special registers within the CPU
- A single memory unit (RAM) which stores both data and programs. Instructions are fetched sequentially, one after another (the fetch-decode-execute cycle). Since data and instructions are sent along the same data bus, an instruction cannot be fetched at the same time as data is being transferred, causing what is referred to as the Von Neumann bottleneck.
- Same buses for instructions and data that connect CPU to memory
How is Harvard architecture different to Von Neumann architecture?
- More modern than Von Neumann
- Separate RAM for instructions and data
- It uses separate buses for data and for instructions, so an instruction and data can be fetched at the same time, making it potentially faster than Von Neumann.
- It is more expensive than the Von Neumann architecture.
What happens during the fetch stage?
- The address of the next instruction is copied from the Program Counter (PC) to the Memory Address Register (MAR).
- The Control Unit loads the address onto the address bus and sends a signal to main memory to read the instruction at that address.
- The instruction is passed along the data bus and copied to the Memory Data Register (MDR). The PC is incremented so that it holds the address of the next instruction.
- The contents of the MDR are copied to the Current Instruction Register (CIR).
What happens during the decode stage?
- The instruction held in CIR is decoded by the Control Unit. The instruction is split into opcode and operand.
What happens during the execute stage?
- The instruction (opcode) is carried out on the operand. If the operand holds an address, this is copied to the MAR. The ALU is used to perform any calculations.
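Pulling the three stages together, here is a minimal fetch-decode-execute sketch in Python. The instruction format, opcode values and tiny program are made up for illustration (loosely LMC-flavoured), not a real instruction set.

# Illustrative fetch-decode-execute loop with made-up opcodes.
memory = [105, 106, 907, 0, 0, 6, 7, 0]   # program: ADD 5, ADD 6, STA 7, halt
pc, acc = 0, 0
running = True

while running:
    # FETCH: copy address from PC to MAR, read instruction into MDR, copy to CIR
    mar = pc
    mdr = memory[mar]
    cir = mdr
    pc += 1                      # PC now points at the next instruction

    # DECODE: split the instruction into opcode and operand
    opcode, operand = divmod(cir, 100)

    # EXECUTE: carry out the opcode on the operand
    if opcode == 1:              # ADD: add contents of an address to the accumulator
        acc += memory[operand]
    elif opcode == 9:            # STA: store the accumulator at an address
        memory[operand] = acc
    else:                        # 0: halt
        running = False

print(acc, memory[7])            # 13 13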
How does clock speed improve CPU performance?
- A CPU's clock speed is a measure of how many clock cycles it can perform per second.
- The higher the clock speed, the more instructions are executed in a given time frame.
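A back-of-the-envelope illustration, assuming (hypothetically) that one instruction completes per clock cycle; real processors complete more or fewer per cycle.

clock_speed_hz = 3.2e9            # example: a 3.2 GHz clock
instructions_per_cycle = 1        # simplifying assumption
print(clock_speed_hz * instructions_per_cycle, "instructions per second")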
How does the number of cores affect CPU performance?
- Multicore processors have more than one processing unit on a processor chip.
- Each core can independently process instructions at the same time, i.e. in parallel.
- Most modern processors are multi-core.
- This dramatically improves performance while keeping the physical CPU unit small so it fits in a single socket (thus also single cooling, single power source, less latency because the cores can communicate more quickly as they're all on the same chip).
- Although an operating system can distribute processing across the cores, only software written for multiple cores can use all cores at the same time. Thus doubling the number of cores seldom doubles the processing speed, because writing good multi-threaded code is difficult.
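A hedged sketch of the point about software needing to be written for multiple cores: Python's standard-library ProcessPoolExecutor spreads independent, CPU-bound tasks across cores. The prime-counting workload and chunk sizes are arbitrary examples.

from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound work: count primes below limit (deliberately naive)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [20_000, 20_000, 20_000, 20_000]   # independent tasks, ideally one per core
    with ProcessPoolExecutor() as pool:          # worker count defaults to the core count
        results = list(pool.map(count_primes, chunks))
    print(results)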
How does cache memory improve CPU performance?
- Modern CPUs contain multiple levels of cache memory, normally at least three. By anticipating the data and instructions that are likely to be regularly accessed and keeping these in cache memory, the overall speed at which the processor operates can be increased, as accessing cache is faster than accessing RAM.
- The lower levels of cache are duplicated for each processor core, whereas higher-level cache is often shared.
- Levels 1, 2 and 3 are usually now on the CPU chip itself.
- Level 1 cache is the smallest and therefore the quickest.
- Cache uses SRAM (typically faster than DRAM since it doesn't have refresh cycles).
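As a software analogy only (cache is hardware, not Python), a tiny least-recently-used cache model shows the principle: keeping recently accessed items close by turns repeat accesses into fast hits. The capacity and access pattern are made up.

from collections import OrderedDict

class TinyCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = OrderedDict()   # address -> value, kept in recency order
        self.hits = self.misses = 0

    def read(self, address, ram):
        if address in self.lines:
            self.hits += 1
            self.lines.move_to_end(address)       # mark as most recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)    # evict the least recently used line
            self.lines[address] = ram[address]    # slow fetch from "RAM"
        return self.lines[address]

ram = list(range(100))
cache = TinyCache()
for addr in [1, 2, 1, 1, 3, 2, 4, 5, 1]:
    cache.read(addr, ram)
print(cache.hits, cache.misses)   # repeated addresses hit the cache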
What is the use of pipelining?
- Pipelining attempts to keep every part of the processor busy by dividing each instruction into a series of sequential stages, so that progress is made on more than one instruction simultaneously. The result of one stage feeds into the next.
- It is an example of instruction-level parallelism, common in classic RISC processors, where ideally one instruction completes every clock cycle once the pipeline is full.
-While one instruction is being fetched, another is being decoded, another executed.
Benefits - increases throughput (the number of instructions that complete in a span of time) by making more efficient use of parts of the CPU
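A small illustrative sketch of that overlap, printing which instruction occupies each stage of a three-stage fetch/decode/execute pipeline in each clock cycle (stage names and instruction labels are arbitrary):

instructions = ["I1", "I2", "I3", "I4"]
stages = ["Fetch", "Decode", "Execute"]

cycles = len(instructions) + len(stages) - 1
for cycle in range(cycles):
    row = []
    for s, stage in enumerate(stages):
        i = cycle - s   # which instruction (if any) is in this stage this cycle
        row.append(f"{stage}:{instructions[i] if 0 <= i < len(instructions) else '-'}")
    print(f"cycle {cycle + 1}: " + "  ".join(row))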
Problems with pipelining
Bubbles - A pipelined processor may deal with delays by stalling, resulting in one or more cycles in which nothing useful happens, known as a bubble.
Flushing - Occurs when a branch instruction (like BRZ, BRA or BRP in LMC) jumps to a new memory location; the instructions already part-way through the pipeline are no longer needed, so the pipeline must be cleared (flushed).
Advantages of Harvard architecture
- Separate memory for instructions and data, preventing data and instruction overlap.
- Separate buses allow simultaneous fetching of data and instructions, improving performance.
- Higher performance due to parallel data and instruction access.
- Higher memory bandwidth with separate paths for instructions and data.
- Lower risk of data corruption since instructions and data are stored separately.
- Ideal for specialized systems where performance is crucial (e.g. embedded systems and digital signal processors, DSPs).