Midterm Flashcards
(109 cards)
Draw Von Neumann architecture of a computer.
Make drawing.
A von Neumann architecture consists of five basic components:
- A central processing unit (CPU)
- A control unit
- A main memory system
- Input devices
- Output devices
The key feature of von Neumann machines is that they can carry out sequential instructions.
What are the two main characteristics of an OS?
Efficiency and convenience.
The two main characteristics of an OS are that it strives to make the computer efficient and convenient. This means that the OS executes user programs and makes solving user problems easier; it makes the computer system convenient for the user, and it uses the hardware in an efficient manner.
An OS can either focus on one or both of these characteristics. E.g., a mobile device is very convenient, but not very efficient (compared to mainframes, which are very efficient, but perhaps not very convenient).
For networked distributed computing, the networks are categorized into four groups in terms of the distance between members. What are these four types of networks?
WAN - Wide Area Network
LAN - Local Area Network
MAN - Metropolitan Area Network
PAN - Personal Area Network
Computing environments, in terms of the role of connected members of a network, are divided into two types. What are these two types of networks?
Peer-to-peer
Client-server
What is the difference between emulation and virtualization?
Short answer: Emulation is used when the source CPU (physically present) is different from the target CPU type (the CPU that the program is compiled for). E.g., Apple desktops switched from IBM CPU to Intel CPU and old software used Rosetta to run on an emulated IBM CPU. Virtualization allows a guest OS to run as an application on a host OS.
Virtualization is a technique that allows operating systems to run as applications within other operating systems, but on the same CPU. This method works rather efficiently because the applications were compiled for the same instruction set as the target system uses. But, if you have an application or operating system that needs to run on a different CPU, then it will be necessary to translate all of the source CPU’s instructions so that they are turned into equivalent instructions for the target CPU. Such an environment is no longer virtualized but rather is fully emulated.
What is the difference between “full virtualization” and “para-virtualization”?
Short answer: In full virtualization, the guest is an original OS and wants to manage the memory and perform protection, etc. In para-virtualization, guest OS is designed to run as a guest in a virtual environment and is aware of other operating systems and knows its limitations.
Para-virtualization is a technique in which the guest operating system is modified to work in cooperation with the VMM to optimize performance. In contrast, in full virtualization, the guest OS is unaware that it is in fact being run on a VMM.
What are the three types of cloud computing?
Cloud computing delivers computing, storage, and apps as a service across a network. Cloud computing is a logical extension of virtualization because it uses virtualization as the base for its functionality. There are three types of cloud computing:
Public cloud: The public cloud is available via internet to anyone who’s willing to pay.
Private cloud: Run by a company for the company's own use.
Hybrid cloud: Includes both public and private components.
What are the three advantages of a multiprocessor system over a single processor?
Short answer:
1. Increased throughput; 2. Lower cost than using a collection of single processors; 3. Reliability is higher, and the system is more fault tolerant.
Multiprocessor systems are growing in use and importance. They are also known as parallel systems or tightly-coupled systems. Advantages over a single-processor system include:
INCREASED THROUGHPUT. By increasing the number of processors, we expect to get more work done in less time. The speed-up ratio with N processors is not N, however, because with additional processors comes the overhead of getting them to work together correctly.
ECONOMY OF SCALE (multiprocessor systems can cost less than equivalent multiple single-processor systems)
INCREASED RELIABILITY. If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. This increased reliability is crucial in many applications. The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer a failure of any single component and still continue operation.
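The sub-linear speed-up mentioned under increased throughput can be sketched with a toy model. The overhead fraction below is an illustrative assumption, not a measured value; the point is only that coordination overhead keeps the speed-up with N processors below N:

```python
# Illustrative sketch: speed-up with N processors is less than N
# because extra processors add coordination overhead.
# The 5% overhead-per-extra-processor figure is a made-up assumption.

def speedup(n, overhead_fraction=0.05):
    """Return the realized speed-up for n processors under a simple
    overhead model: ideal parallel time 1/n, plus overhead that grows
    with the number of processors."""
    ideal_time = 1.0 / n                    # perfectly parallel work
    overhead = overhead_fraction * (n - 1)  # coordination cost
    return 1.0 / (ideal_time + overhead)

for n in (1, 2, 4, 8):
    print(n, round(speedup(n), 2))
```

With these toy numbers the speed-up peaks and then declines, which mirrors the card's point that the speed-up ratio with N processors is not N.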
What is the difference between “symmetric” and “asymmetric” multiprocessors?
Short answer: Asymmetric multiprocessing is a CPU scheduling method in which all scheduling decisions, I/O processing, and other system activities are handled by a single processor - the master processor. The other processors execute only user code. This is a simple method because only one core accesses the system data structures, reducing the need for data sharing. The drawback of this approach is that the master processor becomes a bottleneck, so overall system performance may be reduced. Symmetric multiprocessing is the standard approach for supporting multiprocessors, where each processor is self-scheduling.
Asymmetric multiprocessing is when each processor is assigned a specific task. A “boss” processor controls the system; the other processors either look to the boss for instruction or have predefined tasks. The boss processor schedules and allocates work to the worker processors.
Symmetric multiprocessing is when each processor performs all tasks within the operating system. In this system, all processors are peers. Each processor has its own set of registers, as well as a private, or local, cache. However, all processors share physical memory. I/O must be carefully controlled to ensure the right data reaches the right processor.
What are the five activities of process management?
The five activities of process management include:
- Creating and deleting both user and system processes
- Suspending and resuming processes
- Providing mechanisms for process synchronization
- Providing mechanisms for process communication
- Providing mechanisms for deadlock handling
A process is a program in execution. A process needs certain resources, including CPU time, memory, files and I/O devices, to accomplish its task. These resources are either given to the process when it is created or allocated to it while it’s running.
What is the difference between program and process?
A program is a passive entity, such as the contents of a file stored on a disk (or simply a collection of instructions). A program in execution is a process. A process is active.
What is a memory unit exposed to?
1. A stream of addresses + read requests
2. A stream of addresses + data and write requests
How long does one memory access take?
It takes many CPU cycles. AMAT = cache-hit-time + miss rate * miss penalty.
Static RAM (SRAM): 0.5-2.5 ns; $2000-5000 per GB
Dynamic RAM (DRAM): 50-70 ns; $20-75 per GB
Magnetic disk: 5-20 ms; $0.2-2 per GB
Our ideal memory would have the access time of SRAM but with the capacity and cost/GB of the magnetic disk.
How long does one register access take?
It takes one clock cycle (or less).
Registers that are built into the CPU are generally accessible within one cycle of the CPU clock. Most CPUs can decode instructions and perform simple operations on register contents at the rate of one or more operations per clock tick.
What does memory management mean?
Memory management is the part of the OS that determines what is in memory and when. It optimizes CPU utilization and the computer's overall response to users.
What does memory management do?
The operating system is responsible for the following activities in connection with memory management:
- Keeping track of which parts of memory are currently being used and who is using them.
- Deciding which processes (or parts of processes) and data to move into and out of memory
- Allocating and deallocating memory space as needed
What is memory hierarchy?
Short answer: Creating a pyramid with slow, cheap and large memory at the bottom and placing fast, expensive and small memories at the top.
A memory hierarchy consists of a pyramid of different kinds of memories. At the top, closest to the CPU, is the cache. This is a relatively small but very fast memory. Under the cache (there can be multiple levels of cache) is the main memory (DRAM). Cache and DRAM are both volatile storage, which means that they do not keep their data when they lose power. Beneath the DRAM is where the secondary, non-volatile type of storage begins. The first layer is often the solid-state disk, then the magnetic disk, then the optical disk.
What is locality of reference?
Locality of reference is referring to the fact that when we access a memory address, we often want to access the same address soon again, or an address close to the current address.
What is the effect of “low” locality of reference?
Low locality causes high miss rate, which forces the memory management to refer to slower parts of the hierarchy.
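Locality can be made concrete with traversal order over a 2D array. The sketch below is illustrative: row-major traversal visits adjacent addresses one after another (high spatial locality), while column-major traversal jumps a full row between accesses (low locality), which on real hardware translates into more cache misses:

```python
# Sketch: traversal order changes locality of reference.
# Row-major order touches consecutive addresses; column-major
# order strides by a full row between accesses.

N = 4
# A 4x4 matrix whose values equal their (row-major) linear address.
matrix = [[r * N + c for c in range(N)] for r in range(N)]

row_major = [matrix[r][c] for r in range(N) for c in range(N)]
col_major = [matrix[r][c] for c in range(N) for r in range(N)]

print(row_major)  # consecutive addresses: 0, 1, 2, 3, ...
print(col_major)  # strides of N: 0, 4, 8, 12, ...
```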
Suppose reading one word from the main memory takes 50 clks and reading one block of words from memory takes 100 clks. Also, assume that reading one word from cache takes 2 clks. What should be the maximum cache “miss rate” for this memory system to be worth having the cache rather than directly accessing the main memory?
AMAT = hit time + miss rate * miss penalty
50 = 2 + x * 100
48 / 100 = x
x = 48%
A 48% miss rate is the maximum miss rate for which the cache is worth having rather than directly accessing the main memory.
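The break-even point can be checked in a few lines, solving AMAT = hit time + miss rate * miss penalty for the miss rate at which the cache costs the same as going straight to memory:

```python
# Worked problem: find the break-even cache miss rate.
hit_time = 2        # clks to read one word from the cache
miss_penalty = 100  # clks to fetch one block from memory on a miss
direct_access = 50  # clks to read one word from memory, no cache

# Solve direct_access = hit_time + x * miss_penalty for x:
max_miss_rate = (direct_access - hit_time) / miss_penalty
print(max_miss_rate)  # 0.48, i.e., 48%
```

Any miss rate below 48% makes the cache's average access time beat the 50-clk direct access.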
What are the primary and secondary memory layers?
The primary volatile storage includes the registers, cache and main memory. The secondary non-volatile storage includes storage that keeps the data even if the power is turned off, e.g., solid state disk, magnetic disk, optical disks, etc.
What are the four categories of memories in terms of “access type?”
RAM - Random Access Memory
SAM - Sequential Access Memory
DASD - Direct Access Storage Device
CAM - Content-Addressable Memory
What is the “inclusion principle” in the context of the memory hierarchy?
The inclusion principle in the context of the memory hierarchy is that everything in level i is included in level i + 1.
What are the four policies or questions in the management of a memory hierarchy?
- Where can a block be placed? (block placement)
- How is a block found if it is in the upper level? (block identification)
- Which block should be replaced on a miss? (replacement strategy)
- What happens on a write? (write strategy)
Thus, the four policies are: block placement, block identification, replacement strategy and write/update strategy.
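The four policies can be made concrete with a toy direct-mapped, write-through cache. This is an illustrative sketch, not a model of any real hardware; the class and constants are invented for the example:

```python
# Hypothetical sketch: a tiny direct-mapped, write-through cache
# illustrating the four management policies.

NUM_LINES = 4

class TinyCache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = [None] * NUM_LINES  # each line holds (tag, value)

    def read(self, addr):
        index = addr % NUM_LINES   # block placement: each address maps to one line
        tag = addr // NUM_LINES
        line = self.lines[index]
        if line is not None and line[0] == tag:  # block identification: tag compare
            return line[1]                       # hit
        value = self.memory[addr]                # miss: fetch from the lower level
        self.lines[index] = (tag, value)         # replacement: direct-mapped, so the
        return value                             # old occupant is simply evicted

    def write(self, addr, value):
        index = addr % NUM_LINES
        tag = addr // NUM_LINES
        self.lines[index] = (tag, value)  # update the cache line
        self.memory[addr] = value         # write strategy: write-through

memory = {a: 0 for a in range(16)}
cache = TinyCache(memory)
cache.write(3, 42)
print(cache.read(3))  # served from the cache
print(memory[3])      # write-through kept memory consistent
```

In a direct-mapped cache the replacement question is trivial (there is only one candidate line); set-associative designs are where replacement strategies such as LRU become interesting.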