Week 2 - ChatGPT Flashcards
(11 cards)
What is the main assumption behind the Perceptron learning algorithm?
The data must be linearly separable; otherwise the algorithm never converges (the updates cycle indefinitely).
What is the update rule for the Sequential (Online) Perceptron algorithm?
a ← a + η * ω_k * y_k, applied only when sample y_k is misclassified (ω_k ∈ {+1, −1} is the class label, y_k the augmented input vector).
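A minimal NumPy sketch of the sequential rule under the deck's notation (the function name, learning-rate default, and stopping criterion are illustrative assumptions, not part of the cards):

```python
import numpy as np

def sequential_perceptron(Y, omega, eta=1.0, max_epochs=100):
    """Y: (n, d) augmented feature vectors; omega: (n,) class labels in {+1, -1}."""
    a = np.zeros(Y.shape[1])
    for _ in range(max_epochs):          # max_epochs is an illustrative cap (assumption)
        errors = 0
        for y_k, w_k in zip(Y, omega):
            if w_k * (a @ y_k) <= 0:     # misclassified: signed response has the wrong sign
                a += eta * w_k * y_k     # a <- a + eta * omega_k * y_k
                errors += 1
        if errors == 0:                  # a full error-free pass means convergence
            break
    return a
```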
What is the update rule for the Batch Perceptron algorithm?
a ← a + η * Σ ω_k * y_k, where the sum runs over all misclassified samples.
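The batch variant accumulates the same correction over all misclassified samples before touching the weights; a sketch under the same assumptions as above:

```python
import numpy as np

def batch_perceptron(Y, omega, eta=1.0, max_epochs=100):
    a = np.zeros(Y.shape[1])
    for _ in range(max_epochs):
        mis = omega * (Y @ a) <= 0                # boolean mask of misclassified samples
        if not mis.any():                         # no mistakes left: converged
            break
        # a <- a + eta * sum over misclassified of omega_k * y_k
        a += eta * (omega[mis][:, None] * Y[mis]).sum(axis=0)
    return a
```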
What is the objective of the Minimum Squared Error (MSE) learning procedure?
To minimise the sum of squared differences between predicted outputs and target values, i.e. the criterion J(a) = ||Y a − b||².
What is the Batch MSE update rule?
a ← a − η * Yᵀ(Y a − b)
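This update is one gradient-descent step on J(a) = ||Y a − b||². A minimal sketch (the margin vector b, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

def batch_mse(Y, b, eta=0.01, n_iters=1000):
    a = np.zeros(Y.shape[1])
    for _ in range(n_iters):
        grad = Y.T @ (Y @ a - b)   # gradient of J(a) = ||Y a - b||^2 (up to a factor of 2)
        a -= eta * grad            # a <- a - eta * Y^T (Y a - b)
    return a
```

For reference, the same minimiser is available in closed form through the pseudo-inverse: a = np.linalg.pinv(Y) @ b.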
What is the Sequential (Online) MSE update rule (Widrow-Hoff Rule)?
a ← a + η * (b_k − aᵀ y_k) * y_k
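A per-sample sketch of the Widrow-Hoff rule (the epoch count and step size are illustrative assumptions):

```python
import numpy as np

def sequential_lms(Y, b, eta=0.01, n_epochs=50):
    a = np.zeros(Y.shape[1])
    for _ in range(n_epochs):
        for y_k, b_k in zip(Y, b):
            err = b_k - a @ y_k    # prediction error for this sample
            a += eta * err * y_k   # a <- a + eta * (b_k - a^T y_k) * y_k
    return a
```

Unlike the Perceptron sketch above, every sample contributes an update, whether or not it is correctly classified.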
When should you use MSE/LMS instead of Perceptron?
When the data is not linearly separable, or when you want to drive outputs toward their target values (minimising squared error) even for correctly classified points.
What distinguishes Perceptron updates from LMS updates?
The Perceptron updates only on misclassified samples; LMS updates on every sample, in proportion to the prediction error.
Which learning rule forms the basis for gradient descent in neural networks?
The LMS (Widrow-Hoff) rule.
What is a mnemonic for remembering the Perceptron update rule?
Wrong? Push! – Update if misclassified using the class label and input vector.
What is a mnemonic for remembering the LMS update rule?
Error times input – Always adjust weights by the prediction error multiplied by the input.