Neural Networks and Deep Learning Foundations Flashcards
What is the main difference between Traditional Machine Learning and Deep Learning?
Traditional Machine Learning involves manually selecting features, while Deep Learning learns features automatically from raw data.
What is the purpose of Gradient Descent in machine learning?
To iteratively adjust the model's parameters in the direction that minimizes the loss function, reducing prediction error and improving accuracy.
What are the steps involved in the Backpropagation process?
- Forward Pass
- Compute Loss
- Backpropagation
- Gradient Descent
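The four steps above can be sketched for a single linear neuron (y_hat = w·x + b) with a squared-error loss; the values and learning rate here are illustrative, not from a specific library.

```python
# Minimal sketch of repeated training steps: forward pass, loss,
# backpropagation (manual gradients), and a gradient-descent update.
x, y_true = 2.0, 10.0   # one toy training example
w, b = 0.5, 0.0         # initial weight and bias
lr = 0.05               # learning rate (an assumption)

for step in range(100):
    # 1. Forward Pass: compute the prediction
    y_hat = w * x + b
    # 2. Compute Loss: squared error between prediction and target
    loss = (y_hat - y_true) ** 2
    # 3. Backpropagation: gradients of the loss w.r.t. w and b
    grad_w = 2 * (y_hat - y_true) * x
    grad_b = 2 * (y_hat - y_true)
    # 4. Gradient Descent: step the parameters against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

# After training, w * x + b is very close to the target 10.0
```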
True or False: Neural Networks are inspired by the structure of the human brain.
True
What are the three types of layers in a Neural Network?
- Input Layer
- Hidden Layers
- Output Layer
Fill in the blank: The equation for a prediction in a neural network is Prediction = Input × Weight → _______.
[Activation Function]
What are the two types of Loss Functions mentioned?
- Binary Cross-Entropy
- Categorical Cross-Entropy
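Both losses can be computed by hand with NumPy; this is a sketch of their standard formulas, with the clipping epsilon and toy predictions as assumptions.

```python
import numpy as np

eps = 1e-12  # avoid log(0)

def binary_cross_entropy(y_true, y_pred):
    # For two-class problems: y_true is 0/1, y_pred is P(class = 1)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def categorical_cross_entropy(y_true, y_pred):
    # For multi-class problems: y_true is one-hot, each y_pred row sums to 1
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

bce = binary_cross_entropy(np.array([1, 0]), np.array([0.9, 0.2]))
cce = categorical_cross_entropy(np.array([[0, 1, 0]]),
                                np.array([[0.1, 0.8, 0.1]]))
```

Both losses shrink toward 0 as the predicted probabilities approach the true labels.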
What is the role of an Optimizer in deep learning?
To adjust the model's weights efficiently during training; optimizers such as Adam adapt learning rates and update strategies so the network learns faster.
What dataset is used in the example of Handwritten Digit Recognition?
MNIST Dataset
What is the first step in building a simple neural network using Keras?
Import Libraries
What is the purpose of normalizing pixel values in the MNIST dataset?
To improve training efficiency by scaling values from 0-255 to 0-1.
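The normalization step is a one-liner; the toy pixel array below stands in for the real MNIST images.

```python
import numpy as np

# MNIST pixels arrive as 8-bit integers in 0-255;
# dividing by 255 rescales them to floats in 0-1.
images = np.array([[0, 128, 255]], dtype=np.uint8)  # toy pixel data
normalized = images.astype("float32") / 255.0
```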
What is One-Hot Encoding used for in the context of neural networks?
To convert integer class labels into binary vectors that match the network's output layer (one position per class).
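One-hot encoding can be sketched without a framework (Keras provides `keras.utils.to_categorical` for the same job); the labels below are illustrative.

```python
import numpy as np

# Each digit label 0-9 becomes a length-10 vector
# with a single 1 at the label's index.
labels = np.array([3, 0, 9])
one_hot = np.eye(10)[labels]
```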
What activation function is used in the first layer of the example neural network?
ReLU (Rectified Linear Unit)
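A minimal Keras sketch of such a network, assuming TensorFlow is installed; the layer sizes (128 hidden units) are assumptions, not taken from the original example.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hedged sketch: ReLU in the first (hidden) layer,
# softmax over the 10 digit classes in the output layer.
model = keras.Sequential([
    layers.Input(shape=(784,)),              # flattened 28x28 MNIST image
    layers.Dense(128, activation="relu"),    # first layer uses ReLU
    layers.Dense(10, activation="softmax"),  # probabilities over digits 0-9
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```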
What metric is used to evaluate the performance of the model on test data?
Accuracy
True or False: Keras simplifies the process of building neural networks.
True
What is the goal of training a neural network on the MNIST dataset?
To correctly predict handwritten digits (0-9).
What is the goal of Backpropagation and Gradient Descent?
To minimize the error between the predicted and actual outputs.
What do activation functions prevent in neural networks?
They prevent neural networks from behaving like linear regression models and allow them to learn complex relationships.
What is the equation without activation functions?
output = dot(W, input) + b
What is the equation with activation functions?
output = ReLU(dot(W, input) + b)
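The two equations can be compared directly on toy values; the weights, input, and bias below are assumptions for illustration.

```python
import numpy as np

W = np.array([[1.0, -2.0]])
x = np.array([0.5, 1.0])
b = np.array([0.5])

# Without an activation function: purely linear
linear_out = np.dot(W, x) + b
# With ReLU: negative pre-activations are clipped to zero,
# which introduces the non-linearity
relu_out = np.maximum(0, np.dot(W, x) + b)
```

Here the pre-activation is negative, so the linear output is negative while the ReLU output is 0.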
What is a Linear Activation Function?
Output = Input (Straight line)
What is the main issue with the Sigmoid activation function?
Vanishing Gradient – When values go beyond ±3, the gradient becomes tiny, and learning slows down.
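The vanishing-gradient effect is easy to verify numerically, since the sigmoid's derivative is s(x)·(1 − s(x)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of sigmoid: largest at x = 0, shrinks rapidly beyond +/-3
    s = sigmoid(x)
    return s * (1 - s)

# sigmoid_grad(0) is 0.25 (the maximum); sigmoid_grad(5) is already
# below 0.01, so weight updates driven by it are tiny.
```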
What does the Softmax activation function do?
Converts values into probabilities that sum to 1.
What is an example of Softmax output?
- Cat: 70%
- Dog: 20%
- Bird: 10%
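A softmax like the one producing those percentages can be sketched in a few lines; the raw scores (logits) below are illustrative, not the ones behind the Cat/Dog/Bird example.

```python
import numpy as np

def softmax(z):
    # Subtracting the max is a standard trick for numerical stability;
    # it does not change the resulting probabilities.
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.5]))
# probs sums to 1, with the largest score getting the largest probability
```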