Unit 5 Flashcards

1
Q

Cluster Configuration:

A

Cluster Configuration:
a) Standby server with no shared disk
b) Shared disk

A cluster is a group of computers that are connected together.
Each computer in a cluster is known as a node.
Each node consists of processors, memory, I/O and a link to the network that connects all nodes together.

a) Standby server with no shared disk:
It is a two-node cluster.
The interconnection between the two nodes is a high-speed message link.
The link is used for message exchange to coordinate cluster activity.
Here each computer is a multiprocessor.
This arrangement provides high performance and high availability.

b) Shared disk cluster:
There is still a high-speed message link between the nodes.
In addition, there is a disk subsystem that is directly linked to multiple computers within the cluster.
Here the common disk subsystem is a RAID (Redundant Array of Inexpensive Disks) system.
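
A minimal sketch (plain Python with hypothetical names and an assumed 3-second timeout) of the standby-server idea: the active node sends heartbeat messages over the message link, and the standby node takes over if they stop arriving.

import time

HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before failover (assumed value)

class StandbyNode:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def receive_heartbeat(self):
        # Called whenever a coordination message arrives over the high-speed message link.
        self.last_heartbeat = time.monotonic()

    def check_peer(self):
        # If the active node has gone silent, the standby node takes over its work.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = True
            print("Peer missed heartbeats; standby node is now active.")

node = StandbyNode()
node.receive_heartbeat()   # heartbeat received from the active node
node.check_peer()          # still within the timeout, so the standby stays passive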

2
Q

NUMA

A

NUMA organization:
A NUMA (Non-Uniform Memory Access) system is a shared-memory multiprocessor in which the time to access a word depends on which node's memory holds it.

CC-NUMA (Cache-Coherent NUMA) organization:
There are multiple independent nodes.
Each node has multiple processors.
Each processor has its own L1 and L2 caches.
Each node also has its own main memory.
The nodes are interconnected by an interconnection network.
When a processor initiates a memory access and the requested memory location is not in that processor's cache, the L2 cache initiates a fetch operation.
If the desired line is in the local portion of main memory, the line is fetched across the local bus.
If the desired line is in a remote portion of main memory, an automatic request is sent out to fetch the line across the interconnection network, deliver it to the local bus, and then deliver it to the requesting cache on that bus.
All of this activity is automatic and transparent to the processor and its cache.
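
A minimal sketch (hypothetical Python model, with an assumed line size and local address range) of the CC-NUMA access path just described: the cache is checked first, then the node's local memory, and only remote lines trigger a request across the interconnection network.

LINE_SIZE = 64                     # assumed cache line size in bytes
LOCAL_RANGE = range(0, 1 << 20)    # addresses held in this node's portion of main memory
cache = {}                         # the local cache modelled as line address -> data

def fetch_local(line):
    return f"line {line} from local memory"

def fetch_remote(line):
    return f"line {line} from a remote node"

def read(address):
    line = address - (address % LINE_SIZE)
    if line in cache:                      # hit in the local cache hierarchy
        return cache[line], "cache"
    if address in LOCAL_RANGE:             # fetch across the local bus
        data, source = fetch_local(line), "local memory"
    else:                                  # automatic request across the interconnect
        data, source = fetch_remote(line), "remote memory"
    cache[line] = data                     # line delivered to the requesting cache
    return data, source

print(read(100))        # local access; the line fills the cache
print(read(100))        # now a cache hit
print(read(5 << 20))    # remote access over the interconnection network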

3
Q

Vector computation

A

Pipelined ALU:

Utilizes input registers and a pipelined ALU for floating-point operations.

Vector elements stream through stages such as compare exponents, shift, add, and normalize, so several elements occupy different stages at the same time.

Parallel ALUs:

Uses multiple ALUs within a single processor, controlled by one control unit.

Data is routed to the ALUs in parallel, and pipelining can be applied within each ALU.

Parallel Processors:

Involves multiple processors working in parallel on different parts of a task.

Effective coordination between software and hardware is required for this approach.
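
A minimal sketch (hypothetical Python model) of the pipelined-ALU approach above: element pairs of a vector add stream through the four floating-point stages, so up to four additions are in flight at once.

STAGES = ["compare", "shift", "add", "normalize"]

def pipelined_add(A, B):
    """Model C = A + B flowing through a 4-stage pipeline; returns results and cycle count."""
    n = len(A)
    results = [None] * n
    in_flight = [None] * len(STAGES)   # index of the element pair currently in each stage
    issued = 0
    cycles = 0
    while issued < n or any(slot is not None for slot in in_flight):
        finished = in_flight[-1]
        if finished is not None:
            results[finished] = A[finished] + B[finished]   # pair leaves the last stage
        in_flight = [None] + in_flight[:-1]                 # every pair advances one stage
        if issued < n:
            in_flight[0] = issued                           # issue one new pair per cycle
            issued += 1
        cycles += 1
    return results, cycles

A = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
B = [0.5] * 8
C, cycles = pipelined_add(A, B)
print(C, "in", cycles, "cycles")   # 8 results in 8 + 4 cycles, not 8 * 4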

4
Q

Multicore Organization:

A

Dedicated L1 Cache:

Each core has its own dedicated L1 cache, divided into instruction and data caches; this L1 is the only on-chip cache.

Example: ARM11 MPCore.

Dedicated L2 Cache:

Each core has its own dedicated L1 and L2 caches.

L1 cache is divided into instruction and data caches, and no on-chip cache sharing occurs.

Example: AMD Opteron.

Shared L2 Cache:

Each core has its own dedicated L1 cache, but all cores share an L2 cache.

L1 cache is divided into instruction and data caches.

Example: Intel Core Duo.

Shared L3 Cache:

Each core has its own dedicated L1 and L2 caches, and all cores share an L3 cache.

L1 cache is divided into instruction and data caches.

Example: Intel Core i7.
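
A minimal sketch (hypothetical Python, using the example chips named above) tabulating the four organizations: which cache levels sit on chip and whether each level is private to one core or shared by all cores.

ORGANIZATIONS = {
    "Dedicated L1 (ARM11 MPCore)": {"L1": "private", "L2": None,      "L3": None},
    "Dedicated L2 (AMD Opteron)":  {"L1": "private", "L2": "private", "L3": None},
    "Shared L2 (Intel Core Duo)":  {"L1": "private", "L2": "shared",  "L3": None},
    "Shared L3 (Intel Core i7)":   {"L1": "private", "L2": "private", "L3": "shared"},
}

def first_shared_level(org):
    # Return the first on-chip cache level that two different cores have in common.
    for level in ("L1", "L2", "L3"):
        if ORGANIZATIONS[org][level] == "shared":
            return level
    return "no on-chip cache (cores meet only beyond the chip's caches)"

for org in ORGANIZATIONS:
    print(f"{org}: cores share {first_shared_level(org)}")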

5
Q

What is multithreading, and what are the key terms related to it?

A

Multithreading is the practice of dividing the instruction stream into multiple smaller streams, known as threads, which can be executed in parallel, achieving a high degree of parallelism without a large increase in circuit complexity.

Key terms related to multithreading:

Process:

An instance of a program running on a computer. It owns resources like memory, I/O devices, and files.

Resource Ownership: The process has a virtual address space and can own resources like memory, I/O devices, etc.

Scheduling/Execution: A process has an execution path and priority, scheduled by the OS.

Process Switch:

The act of switching the processor from one process to another by saving and replacing process control data and registers.

Thread:

A dispatchable unit of work within a process. A thread executes sequentially and is interruptible, allowing the processor to switch to another thread.

Thread Switch:

The act of switching processor control from one thread to another within the same process. Multiple threads share the same resources within a process.
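
A minimal sketch (standard-library Python, hypothetical names) of the terms above: one process whose threads all share the process-owned resources (here a counter and a list), while the scheduler switches the processor between them.

import threading

shared_log = []            # resources owned by the process, visible to every thread
counter = 0
lock = threading.Lock()    # needed because a thread switch can happen between any two steps

def worker(name, steps):
    global counter
    for _ in range(steps):
        with lock:                      # guard updates to the shared resources
            counter += 1
            shared_log.append(name)

# Each thread is a dispatchable unit of work within this single process.
threads = [threading.Thread(target=worker, args=(f"thread-{i}", 1000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)             # 4000: every thread updated the same process-owned counter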

6
Q

Implicit and Explicit multithreading

A

Implicit Multithreading:

Concurrent execution of multiple threads extracted from a single sequential program.

The threads are defined by the compiler (statically) or by the hardware (dynamically).

Explicit Multithreading:

Concurrent execution of instructions from different explicit threads.

Can be on shared pipelines or parallel pipelines.

User-level threads: Visible to the application program.

Kernel-level threads: Visible only to the Operating System.
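
A minimal sketch (hypothetical Python) of the explicit side of this distinction: the programmer creates the threads and partitions the work. The implicit case has no code to show, because there the threads would be extracted from an ordinary sequential loop by the compiler or the hardware.

import concurrent.futures

data = list(range(1_000))

# Implicit multithreading: the source stays sequential; a parallelizing compiler or the
# hardware would extract threads from this loop on its own.
sequential_total = sum(x * x for x in data)

# Explicit multithreading: the programmer names the threads and splits the data.
def partial_sum(chunk):
    return sum(x * x for x in chunk)

chunks = [data[i::4] for i in range(4)]
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    explicit_total = sum(pool.map(partial_sum, chunks))

assert explicit_total == sequential_total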
