More Optimisation and Deep Learning Frameworks Flashcards
(12 cards)
What are key hyperparameters in deep learning optimisation?
Learning rate, momentum, batch size, optimiser type, learning rate schedule.
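A minimal sketch, assuming PyTorch, of where each of these hyperparameters appears in a typical setup; the specific values are illustrative only:

```python
import torch

model = torch.nn.Linear(10, 1)

batch_size = 32                               # hyperparameter: batch size
optimiser = torch.optim.SGD(                  # hyperparameter: optimiser type
    model.parameters(),
    lr=0.01,                                  # hyperparameter: learning rate
    momentum=0.9,                             # hyperparameter: momentum
)
# hyperparameter: learning rate schedule (decay lr 10x every 30 epochs)
scheduler = torch.optim.lr_scheduler.StepLR(optimiser, step_size=30, gamma=0.1)
```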
What is the benefit of using deep learning frameworks?
They automate gradient computation, provide reusable training components (layers, losses, optimisers), and handle GPU acceleration.
What does PyTorch offer as a DL framework?
Dynamic computation graphs and Pythonic design for flexibility.
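A small illustrative sketch of what "dynamic" means: the graph is rebuilt on every call, so ordinary Python control flow can depend on the data. DynamicNet is a hypothetical example, not a standard class:

```python
import torch

class DynamicNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 4)

    def forward(self, x):
        # Plain Python loop: the number of layer applications can
        # depend on the input, because the graph is traced per call.
        for _ in range(int(x.abs().sum()) % 3 + 1):
            x = torch.relu(self.layer(x))
        return x

net = DynamicNet()
out = net(torch.randn(2, 4))   # graph built dynamically during this call
```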
What is the purpose of PyTorch’s nn.Module class?
To define and organise model architecture and parameters.
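A minimal sketch of an nn.Module subclass; MLP and its dimensions are hypothetical:

```python
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim=784, hidden=128, out_dim=10):
        super().__init__()
        # Layers assigned as attributes are registered automatically,
        # so their weights show up in model.parameters().
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.fc2(nn.functional.relu(self.fc1(x)))

model = MLP()
print(sum(p.numel() for p in model.parameters()))  # parameters tracked by the module
```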
What is the role of .backward() in PyTorch?
It computes gradients via backpropagation.
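A tiny worked example of .backward() filling in gradients:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
loss = x ** 2        # forward pass builds the computation graph
loss.backward()      # backpropagation: populates x.grad with d(loss)/dx
print(x.grad)        # tensor(6.) since d(x^2)/dx = 2x = 6 at x = 3
```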
How does TensorFlow compute gradients?
Using tf.GradientTape to record and compute derivatives automatically.
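The same toy gradient computed with tf.GradientTape:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2                # operations on x are recorded on the tape
dy_dx = tape.gradient(y, x)  # 2x = 6.0
print(dy_dx)
```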
What is the role of an optimiser in a DL framework?
It updates model parameters using computed gradients.
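A sketch of one optimisation step, assuming PyTorch; the toy batch (x, y) is illustrative:

```python
import torch

model = torch.nn.Linear(4, 1)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)   # toy batch for illustration

optimiser.zero_grad()                          # clear gradients from the previous step
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()                                # compute gradients
optimiser.step()                               # update parameters using those gradients
```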
What kind of programming style does JAX encourage?
Functional programming with pure functions and immutable parameters.
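A sketch of the functional style, with a hypothetical predict function: parameters are passed in explicitly and never mutated in place:

```python
import jax.numpy as jnp

def predict(params, x):
    # Pure function: output depends only on its inputs; updates
    # would return a new params pytree rather than mutate this one.
    w, b = params
    return jnp.dot(x, w) + b

params = (jnp.ones((4, 1)), jnp.zeros(1))
y = predict(params, jnp.ones((2, 4)))
```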
What is JAX’s equivalent of automatic differentiation?
The grad() function, which returns a function computing gradients.
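A minimal grad() example; note that it transforms a function into another function:

```python
import jax

def loss(w):
    return w ** 2

grad_loss = jax.grad(loss)   # returns a new function computing d(loss)/dw
print(grad_loss(3.0))        # 6.0, since d(w^2)/dw = 2w
```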
What is the forward pass in a neural network?
The computation that produces a prediction from input data.
What is the backward pass in training?
The computation of gradients of the loss with respect to the model parameters, using backpropagation.
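A worked example, assuming PyTorch, tying the two passes together; the chain-rule result is checked against autograd:

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
x, target = torch.tensor(3.0), torch.tensor(5.0)

# Forward pass: input -> prediction -> loss
pred = w * x
loss = (pred - target) ** 2

# Backward pass: backpropagation computes d(loss)/dw
loss.backward()
# Chain rule by hand: d(loss)/dw = 2 * (pred - target) * x = 2 * 1 * 3 = 6
print(w.grad)   # tensor(6.)
```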
What kind of tasks is Adam especially useful for?
Training deep networks on noisy, sparse, or large datasets.
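A minimal NumPy sketch of the Adam update rule, which shows why it copes with noisy or sparse gradients: each parameter gets its own step size, damped by the running second-moment estimate. adam_step and the toy loop are illustrative, not a library implementation:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m/v are running moment estimates, t the step count."""
    m = beta1 * m + (1 - beta1) * g          # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * g ** 2     # second moment (uncentred variance)
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    # Adaptive step: large or noisy gradients are damped by sqrt(v_hat)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimise f(w) = w^2 starting from w = 5
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    g = 2 * w                                # gradient of w^2
    w, m, v = adam_step(w, g, m, v, t)
print(w)                                     # close to the minimum at 0
```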