L2 - Base Technologies Flashcards

1
Q

What are process technologies?

A

The processes used to build the chips. Silicon wafers are used to replicate chips.

2
Q

What is 10nm?

A

The width of the fins of the transistors on the chip.

3
Q

What is the aim concerning computer chips?

A

Put as many transistors as possible on it.

4
Q

What did the increase in transistors lead to?

A

The increase in clock speed stopped because higher clock speeds generate too much heat –> single-thread performance is limited by clock speed.

–> need for performance increases and energy reduction by other means

5
Q

Why are more cores used in CPUs?

A

Because cores are like small CPUs that can work independently –> increase in performance

6
Q

Trends concerning CPU

A
  • multi-core processors
  • SIMD support
  • combination of core-private and shared caches
  • heterogeneity
  • hardware support for energy control
7
Q

What is SIMD?

A

Single Instruction Multiple Data. SIMD describes computers with multiple processing elements that simultaneously perform the same operation on multiple data points. –> data parallelism
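As a conceptual illustration of data parallelism (not real vector hardware), one instruction is applied to all data elements in lockstep. A minimal Python sketch, where `simd_add` is a hypothetical name:

```python
def simd_add(lane_a, lane_b):
    # A real SIMD unit would add all "lanes" with a single instruction;
    # the comprehension merely models the idea of one operation
    # being applied to multiple data points simultaneously.
    return [a + b for a, b in zip(lane_a, lane_b)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```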

8
Q

What is the combination of core-private and shared caches about?

A

This keeps data close to the CPU cores.

9
Q

What is heterogeneity about when talking about CPU?

A

Processors may have different kinds of cores. Specializing the hardware for the task at hand –> less energy consumption.

10
Q

What is hardware support for energy control about?

A

You can adapt the processor to the needs of the application, e.g. by switching off unused cores or lowering the clock frequency.

11
Q

What is a challenge when it comes to CPU?

A

Feeding the processor: the memory hierarchy
–> use the memory hierarchy efficiently to keep data as close to the CPU as possible

12
Q

What is special about the Intel Kaby Lake Processor?

A
  • system agent manages CPU energy control
  • you have a ring interconnection network that goes around the cores and enables them to communicate
  • the level 3 cache is close to the cores
  • a GPU is already integrated into the processor
13
Q

What is special about the Skylake XP Socket?

A
  • a multi-dimensional mesh network serves as the communication fabric between the cores
14
Q

What is special about ARM processor designs?

A

These processors are integrated into Systems on a Chip (SoC).

15
Q

SoC (from the internet)

A

A System-on-a-Chip (SoC) is an integrated circuit (IC) that integrates all the components of a computer or electronic system into a single chip. It is a type of microcontroller that includes a microprocessor, memory, and other components such as input/output interfaces, power management, and communication interfaces all integrated into a single piece of silicon.

16
Q

What is ARM big.LITTLE?

A
  • you have clusters of processors
  • with cluster switching you can pick one cluster at a time (use either the high-performance cluster or the low-performance cluster)
17
Q

What is an alternative to cluster switching?

A

Global task scheduling. Here you distribute computation more flexibly among big and little processors.

18
Q

What is a GPGPU?

A

General-Purpose computing on Graphics Processing Units (GPGPU) is the use of a graphics processing unit to perform computation in applications traditionally handled by the CPU.

19
Q

What is an accelerator? (From the internet)

A

An accelerator for a CPU is a device or system that is designed to offload certain computational tasks from the central processing unit (CPU) in order to improve performance. These tasks can include things like data compression, encryption, machine learning inferencing, and image processing. Examples of accelerators include graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs).

20
Q

What is the aim of accelerator programming?

A

Increasing the computational speed.

21
Q

What is the motivation for accelerators?

A
  • increase computational speed
  • reduce energy consumption
22
Q

3 ways to increase computational speed and reduce energy consumption.

A

Through specialization in:
- operations
- on-chip communication
- memory access

23
Q

3 types of accelerators

A
  • GPGPUs
  • Many standard cores
  • FPGAs
24
Q

Are costs for FPGA decreasing? What about ASIC?

A

Yes. ASIC costs increase.

25
Q

What can FPGA be used for?

A

FPGAs can be used for big data scenarios.

26
Q

What is an FPGA?

A

Field Programmable Gate Array: Designed to be configured by a customer or a designer after manufacturing.
- an array of logic gates used to implement special functions programmed in hardware (you can program algorithms into the hardware)

27
Q

3 system integration designs for accelerators

A
  • nodes with attached accelerators (CPU and accelerator on the same board)
  • accelerator only design (no standard core (CPU) anymore)
  • accelerator booster (dynamically allocate a fraction of the booster for certain computations)
28
Q

Challenges connected to GPGPUs

A
  • programming a GPGPU
  • coordinating the scheduling of computation on the system processor (CPU) and the GPU
  • managing the transfer of data between system memory (RAM) and GPU memory
29
Q

How to manage the transfer of data between system memory (RAM) and GPU memory

A

NVIDIA developed the CUDA programming model for this.

30
Q

What is multithreading?

A

The ability of the CPU to provide multiple threads of execution concurrently, supported by the OS. Threads share the resources of a single core or of multiple cores.
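A minimal Python sketch of multithreading; the shared `counter` and the lock (both illustrative names) show that threads run within the same memory space:

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:          # threads share the same memory space,
            counter += 1    # so concurrent updates must be synchronized

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: all four threads updated the same variable
```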

31
Q

MIMD

A

Multiple Instruction, Multiple Data is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently.

32
Q

SIMD

A

Single instruction, multiple data is a type of parallel processing. SIMD describes computers with multiple processing elements that simultaneously perform the same operation on multiple data points.

33
Q

Difference Multithreading and MIMD

A

Multithreading:

  • multiple threads within a single process
  • threads share the same memory space (direct communication)
  • single-processor (multicore) systems

MIMD:
- threads can run different processes
- each thread has its own memory space and runs independently
- multiple processor systems connected by a network
- used in high-performance computing

34
Q

ILP

A

Instruction-Level Parallelism (ILP) is a family of processor and compiler design techniques that speed up execution by causing individual machine operations to execute in parallel.

35
Q

HBM

A

High Bandwidth Memory (180 GB/s)

36
Q

Multi-instance GPU (MIG)

A
  • with MIG you can partition the GPU in hardware
  • the Dynamic Random Access Memory (DRAM), caches etc. are also sliced
  • so each virtual machine can get one partition
    → MIG provides data and performance isolation
  • in cloud computing, when you ask for a server you get a virtual server running on a physical server
  • multiple virtual servers run on one physical server
  • the question is which of the virtual servers gets the actual GPU attached to the physical server
37
Q

NUMA

A

Non-Uniform Memory Access

multiple CPUs with multiple cores
- share the same memory
- can communicate with each other by reading and writing data in the memory
- single physical address space
- memory access time depends on where the data lies relative to the accessing core (local memory is faster than remote memory)

38
Q

What is a distributed memory system?

A

A distributed memory system is a type of computer system where multiple processors are connected by a network, each with its own local memory. The processors in a distributed memory system can communicate and coordinate with each other, but they do not share a common memory. This is in contrast to a shared memory system, where all processors have access to a common memory space, and can directly read and write memory locations.

39
Q

Difference of distributed memory systems or clusters compared to NUMA

A

Distributed memory systems:

  • multiple processors are connected by a network, each with its own local memory
  • often used in high-performance computing applications
  • programming is more complex because data and computation are distributed across the processors, which must communicate explicitly
  • allows for a larger number of processors and the ability to scale the system up by adding more processors

NUMA:
- all processors have access to a common memory space, and can directly read and write memory locations

40
Q

PUE

A

Power Usage Effectiveness (PUE)
- measure of how efficiently a computer data center uses its power
PUE = total facility power / IT equipment power
–> ideally a ratio of 1
–> can be reduced e.g. by locating data centers in cold regions such as Iceland (less energy spent on cooling)

41
Q

ERE

A

Energy Reuse Effectiveness
- measure of how efficiently a data center reuses the power dissipated by the computers
ERE = (total facility power − reused energy) / IT equipment power
–> ideally a ratio of 0
–> e.g. use the heat generated by the computers to warm offices
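The PUE and ERE formulas can be checked with a small sketch (the power figures are made-up example values):

```python
def pue(total_facility_power, it_equipment_power):
    # Power Usage Effectiveness: ideal value is 1.0
    return total_facility_power / it_equipment_power

def ere(total_facility_power, reused_energy, it_equipment_power):
    # Energy Reuse Effectiveness: ideal value is 0.0
    return (total_facility_power - reused_energy) / it_equipment_power

# Hypothetical data center: 1.5 MW total, 1.0 MW IT load, 0.5 MW heat reused
print(pue(1.5, 1.0))       # 1.5
print(ere(1.5, 0.5, 1.0))  # 1.0
```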
