1.6 Hardware for AI-Based Systems Flashcards
(25 cards)
A variety of hardware
is used for ML model training and implementation.
An ML model that performs speech recognition may run on
a low-end smartphone.
Access to the power of cloud computing may be needed to
train the ML model.
A common approach in ML model creation
is to train the model in the cloud and then deploy it to the host device.
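This cloud-train / device-deploy split can be sketched in miniature. The example below is illustrative only (a toy linear model with made-up helper names, not any real pipeline): the "cloud" step fits the model, the parameters are serialized for shipping, and the "device" step only loads them and runs inference.

```python
import json

# "Cloud" step: fit a tiny linear model y = w*x + b by gradient descent.
# (Toy stand-in for the compute-heavy training phase.)
def train(data, epochs=200, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return {"w": w, "b": b}

# Deployment: serialize only the trained parameters for the device.
def export_model(params):
    return json.dumps(params)

# "Device" step: load the shipped parameters and run inference only --
# no training capability is needed on the host device.
def predict(model_blob, x):
    p = json.loads(model_blob)
    return p["w"] * x + p["b"]

data = [(x, 2 * x + 1) for x in range(5)]   # toy data: y = 2x + 1
blob = export_model(train(data))
print(round(predict(blob, 10), 2))          # close to 21
```

In practice the serialized artifact would be a trained model file (e.g. a quantized model format) rather than raw JSON, but the division of labor is the same.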
ML hardware supports
- Low-precision arithmetic
- The ability to work with large data structures
- Massively parallel (concurrent) processing.
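Low-precision arithmetic usually means storing and computing with reduced-width numbers. The sketch below shows one common scheme, symmetric int8 quantization; the scheme and constants are illustrative, not taken from any specific library.

```python
# Symmetric int8 quantization: map floats into [-127, 127] with a
# single scale factor, then recover approximate floats by rescaling.
def quantize(values, bits=8):
    qmax = 2 ** (bits - 1) - 1            # 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.005, 0.9]
q, scale = quantize(weights)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
print(q, round(max_err, 4))   # small rounding error, bounded by scale/2
```

The hardware benefit is that int8 values take a quarter of the memory of float32 and integer multiply-accumulate units are cheaper and faster, at the cost of a bounded rounding error.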
General-purpose CPUs
provide only a few cores and support complex operations that are not typically required for ML applications, making them less efficient than GPUs for training and running ML models.
GPUs
have thousands of cores and are designed to perform the massively parallel but relatively simple processing of images.
typically outperform CPUs for ML applications
For small-scale ML work, GPUs generally offer
the best option.
Some hardware is specially intended for
AI
AI-specific hardware solutions have features such as
multiple cores, special data management, and the ability to perform in-memory processing, and are most suitable for edge computing.
Neuromorphic processors
do not use the traditional von Neumann architecture
but instead use an architecture that loosely mimics brain neurons.
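The neuron-mimicking idea can be illustrated with a toy leaky integrate-and-fire model, the kind of spiking behavior neuromorphic hardware implements natively. All constants here are illustrative, not taken from any real chip.

```python
# Toy leaky integrate-and-fire neuron: the membrane potential leaks
# over time, accumulates input current, and emits a spike (then
# resets) when it crosses a threshold.
def lif_run(inputs, threshold=1.0, leak=0.9):
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = v * leak + current      # leak, then integrate the input
        if v >= threshold:          # fire when the threshold is crossed
            spikes.append(1)
            v = 0.0                 # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
```

Note the event-driven character: the neuron only "outputs" when enough input has accumulated, which is quite unlike the clocked fetch-execute cycle of a von Neumann machine.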
Examples of AI hardware providers and their processors include (as of April 2021):
- NVIDIA
- Google
- Intel
- Mobileye
- Apple
- Huawei
NVIDIA
provides a range of GPUs and AI-specific processors for both training and inferencing.
Google TPUs
are application-specific integrated circuits developed by Google for both training and inferencing; they can be accessed by users on the Google Cloud.
The Edge TPU
is a purpose-built ASIC designed to run AI on individual devices.
Intel
provides Nervana neural network processors for deep learning and Movidius Myriad vision processing units for inferencing in computer vision and neural network applications.
Mobileye
produces the EyeQ family of SoC devices to support complex and computationally intensive vision processing.
Mobileye devices
have low power consumption for use in vehicles.
Apple
produces the Bionic chip for on-device AI in iPhones.
Huawei
produces the Kirin 970 chip for smartphones, with built-in neural network processing for AI.
ASIC
Application-Specific Integrated Circuits (ASICs) are chips designed for a particular purpose rather than general-purpose use; AI-specific ASICs are specially intended for AI workloads.
ASIC solutions
have features such as multiple cores, special data management, and the ability to perform in-memory processing, and are most suitable for edge computing.
Edge computing is best served by
AI-specific solutions with features such as
multiple cores,
special data management, and
the ability to perform in-memory processing.