IFN580 Week 9 Deep Learning Flashcards
(13 cards)
Why is stacking non-linearities on non-linearities important in Deep Learning?
Because it allows the layers to learn more complex, higher-level functions; without non-linearities, a stack of linear layers collapses into a single linear transformation.
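A minimal numpy sketch (not part of the card) showing why the non-linearity matters: two stacked linear layers are equivalent to one linear layer, and inserting a ReLU breaks that equivalence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # batch of 4 inputs, 3 features
W1 = rng.normal(size=(3, 5))       # first "layer" weights
W2 = rng.normal(size=(5, 2))       # second "layer" weights

# Without a non-linearity, two stacked linear layers collapse
# into a single linear layer with weight matrix W1 @ W2.
linear_stack = (x @ W1) @ W2
collapsed = x @ (W1 @ W2)
assert np.allclose(linear_stack, collapsed)

# A ReLU between the layers breaks this equivalence, letting
# the network represent non-linear functions.
relu = lambda z: np.maximum(z, 0)
nonlinear_stack = relu(x @ W1) @ W2
assert not np.allclose(nonlinear_stack, collapsed)
```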
The “deep” in “deep learning” refers to
the number of layers in the neural network
Is feature engineering always necessary with deep learning models, and if not, why?
No; deep learning models are designed to automatically learn relevant features from raw data.
How can overfitting be prevented in deep learning?
Regularisation, dropout, and early stopping
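A sketch of one of those techniques, inverted dropout, in plain numpy (the scaling scheme used by common frameworks; assumed here, not stated on the card): randomly zero units during training and rescale survivors so the expected activation is unchanged.

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop
    during training, then rescale so the mean stays the same."""
    if not training or p_drop == 0.0:
        return activations
    keep = 1.0 - p_drop
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

rng = np.random.default_rng(0)
h = np.ones((1000, 64))                 # pretend hidden activations
out = dropout(h, p_drop=0.5, rng=rng)
# Roughly half the units are zeroed; survivors are scaled to 2.0,
# so the overall mean stays near 1.0.
```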
When designing a deep neural network, do all layers have to contain the same
number of parameters?
No, different layers can contain a different number of neurons and even be of
completely different types
What is a loss function?
A function that measures how wrong the model's predictions are; training tries to minimise it
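A concrete example (mean squared error, one common choice; the card doesn't name a specific loss):

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean squared error: average squared difference between
    predictions and targets. Lower means less wrong."""
    return np.mean((y_pred - y_true) ** 2)

y_true = np.array([1.0, 2.0, 3.0])
perfect = mse_loss(y_true, y_true)         # 0.0 — no error
off_by_one = mse_loss(y_true + 1, y_true)  # 1.0 — every prediction off by 1
```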
What is an autoencoder?
a type of neural network that compresses data into a lower-dimensional representation (the latent space) and reconstructs it
What is the latent space?
It's the compressed representation of the data; it captures the most important features needed to reconstruct the original input
How does an autoencoder work?
ENCODER: takes the input and compresses it into the latent space
DECODER: takes the compressed representation and reconstructs the input
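A toy, untrained autoencoder forward pass in numpy (dimensions and weights are illustrative assumptions; a real autoencoder would be trained to minimise reconstruction loss):

```python
import numpy as np

rng = np.random.default_rng(0)

# 8-dim input -> 2-dim latent space -> 8-dim reconstruction.
W_enc = rng.normal(size=(8, 2))   # encoder weights
W_dec = rng.normal(size=(2, 8))   # decoder weights

def encode(x):
    return np.tanh(x @ W_enc)     # compress into the latent space

def decode(z):
    return z @ W_dec              # reconstruct from the latent code

x = rng.normal(size=(1, 8))
z = encode(x)                     # latent representation, shape (1, 2)
x_hat = decode(z)                 # reconstruction, shape (1, 8)
```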
What is a Recurrent Neural Network (RNN)?
An NN that processes data sequentially; each hidden state depends on the previous hidden state
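A single-layer RNN step sketched in numpy (weights and sizes are illustrative), making the recurrence explicit: the new hidden state depends on both the current input and the previous hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 4)) * 0.1   # input -> hidden weights
W_hh = rng.normal(size=(4, 4)) * 0.1   # hidden -> hidden (the recurrence)

def rnn_step(x_t, h_prev):
    """One RNN step: new hidden state mixes the current input
    with the previous hidden state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh)

h = np.zeros((1, 4))                   # initial hidden state
sequence = rng.normal(size=(5, 1, 3))  # 5 time steps of 3-dim inputs
for x_t in sequence:
    h = rnn_step(x_t, h)               # hidden state carried forward
```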
What is an LSTM (Long Short-Term Memory)?
A type of RNN designed to learn long-term dependencies. Also sequential, but more sophisticated than a plain RNN.
Why does an LSTM use activation functions?
It uses activation functions (sigmoid and tanh) as gates to control the flow of information
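A sketch of one LSTM step in numpy showing the standard gating equations (weights and sizes are illustrative assumptions): sigmoids squash gate values into (0, 1), so they act as soft switches on what the cell forgets, writes, and outputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
# One weight matrix per gate, acting on [input, previous hidden].
W_f, W_i, W_o, W_c = (rng.normal(size=(n_in + n_hid, n_hid)) * 0.1
                      for _ in range(4))

def lstm_step(x_t, h_prev, c_prev):
    xh = np.concatenate([x_t, h_prev], axis=1)
    f = sigmoid(xh @ W_f)          # forget gate: what to erase from the cell
    i = sigmoid(xh @ W_i)          # input gate: what new info to write
    o = sigmoid(xh @ W_o)          # output gate: what to expose as h
    c_tilde = np.tanh(xh @ W_c)    # candidate cell update
    c = f * c_prev + i * c_tilde   # gated cell state
    h = o * np.tanh(c)             # gated hidden state
    return h, c

h = np.zeros((1, n_hid))
c = np.zeros((1, n_hid))
x_t = rng.normal(size=(1, n_in))
h, c = lstm_step(x_t, h, c)
```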
What is a Transformer model?
A type of NN that uses self-attention mechanisms to process data in parallel
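A minimal numpy sketch of scaled dot-product self-attention, the core Transformer operation (single head, illustrative sizes): every position attends to every other position in one matrix multiply, with no sequential recurrence.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over all positions at once."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                # 5 tokens, 8-dim embeddings
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)     # shape (5, 8)
```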