week 2 - chatgpt Flashcards

(11 cards)

1
Q

What is the main assumption behind the Perceptron learning algorithm?

A

The data must be linearly separable; otherwise the algorithm never converges and keeps updating forever.

2
Q

What is the update rule for the Sequential (Online) Perceptron algorithm?

A

a ← a + η ω_k y_k, applied only when sample k is misclassified (η is the learning rate, ω_k the class label, y_k the augmented feature vector).
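The rule can be sketched in NumPy (a minimal illustration; the function name, toy data, and augmented-vector convention are assumptions, not from the cards):

```python
import numpy as np

def sequential_perceptron(X, labels, eta=1.0, epochs=100):
    """Online Perceptron: a <- a + eta * w_k * y_k on each misclassified sample."""
    Y = np.hstack([np.ones((X.shape[0], 1)), X])  # augment inputs with a bias term
    a = np.zeros(Y.shape[1])
    for _ in range(epochs):
        updated = False
        for y_k, w_k in zip(Y, labels):
            if w_k * a.dot(y_k) <= 0:       # misclassified: wrong side of boundary
                a = a + eta * w_k * y_k     # push the boundary toward the sample
                updated = True
        if not updated:                     # a full pass with no errors: converged
            break
    return a

# Toy linearly separable data: the sign of x decides the class.
X = np.array([[2.0], [1.0], [-1.0], [-2.0]])
labels = np.array([1, 1, -1, -1])
a = sequential_perceptron(X, labels)
```

On separable data like this, the loop terminates once every sample satisfies ω_k aᵀy_k > 0.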

3
Q

What is the update rule for the Batch Perceptron algorithm?

A

a ← a + η Σₖ ω_k y_k, where the sum runs over all currently misclassified samples; one update per pass through the data.
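The batch variant accumulates the same correction over every misclassified sample before updating (a sketch under the same assumed notation, with illustrative toy data):

```python
import numpy as np

def batch_perceptron(X, labels, eta=1.0, epochs=100):
    """Batch Perceptron: a <- a + eta * sum of w_k * y_k over misclassified samples."""
    Y = np.hstack([np.ones((X.shape[0], 1)), X])  # augment inputs with a bias term
    a = np.zeros(Y.shape[1])
    for _ in range(epochs):
        mis = labels * (Y @ a) <= 0                # mask of misclassified samples
        if not mis.any():                          # no errors left: converged
            break
        a = a + eta * (labels[mis][:, None] * Y[mis]).sum(axis=0)
    return a

X = np.array([[2.0], [1.0], [-1.0], [-2.0]])
labels = np.array([1, 1, -1, -1])
a = batch_perceptron(X, labels)
```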

4
Q

What is the objective of the Minimum Squared Error (MSE) learning procedure?

A

To minimise the squared difference between predicted outputs and target values.

5
Q

What is the Batch MSE update rule?

A

a ← a − η * Yᵀ(Y a − b)
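This is plain gradient descent on ‖Ya − b‖², which can be sketched as follows (illustrative code; Y, b, and the step size are made-up toy values):

```python
import numpy as np

def batch_mse(Y, b, eta=0.01, iters=500):
    """Batch MSE: repeat a <- a - eta * Y^T (Y a - b)."""
    a = np.zeros(Y.shape[1])
    for _ in range(iters):
        a = a - eta * Y.T @ (Y @ a - b)  # step along the negative gradient
    return a

# Augmented samples (bias column first) with target margins b.
Y = np.array([[1.0, 2.0], [1.0, 1.0], [1.0, -1.0], [1.0, -2.0]])
b = np.array([1.0, 1.0, -1.0, -1.0])
a = batch_mse(Y, b)
```

For a small enough η this converges to the pseudoinverse solution a = Y⁺b, which minimises the criterion directly.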

6
Q

What is the Sequential (Online) MSE update rule (Widrow-Hoff Rule)?

A

a ← a + η * (b_k − aᵀ y_k) * y_k
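The same kind of toy data can illustrate the sequential rule (a sketch; the sample order, step size, and data are arbitrary choices, not from the cards):

```python
import numpy as np

def widrow_hoff(Y, b, eta=0.01, epochs=1000):
    """Sequential LMS: a <- a + eta * (b_k - a.y_k) * y_k on every sample."""
    a = np.zeros(Y.shape[1])
    for _ in range(epochs):
        for y_k, b_k in zip(Y, b):
            a = a + eta * (b_k - a.dot(y_k)) * y_k  # error times input, every sample
    return a

# Augmented samples (bias column first) with target margins b.
Y = np.array([[1.0, 2.0], [1.0, 1.0], [1.0, -1.0], [1.0, -2.0]])
b = np.array([1.0, 1.0, -1.0, -1.0])
a = widrow_hoff(Y, b)
```

With a small fixed η the weights settle close to the batch MSE solution; note the update fires on every sample, not only on mistakes.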

7
Q

When should you use MSE/LMS instead of Perceptron?

A

When the data is not linearly separable or when you want to minimise errors even for correctly classified points.

8
Q

What distinguishes Perceptron updates from LMS updates?

A

Perceptron only updates on misclassified samples; LMS always updates based on prediction error.

9
Q

Which learning rule forms the basis for gradient descent in neural networks?

A

The LMS (Widrow-Hoff) rule.

10
Q

What is a mnemonic for remembering the Perceptron update rule?

A

Wrong? Push! – Update if misclassified using the class label and input vector.

11
Q

What is a mnemonic for remembering the LMS update rule?

A

Error times input – Always adjust weights by the prediction error multiplied by the input.
