Week 10 Flashcards

1
Q

What is the Loss Function?

A

loss(hw) = sum_j (yj - hw(xj))^2

(the sum of squared errors between each target yj and the prediction hw(xj))
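A minimal sketch of this squared-error loss, assuming the hypothesis hw is a plain Python function and the data are parallel lists of inputs and targets:

```python
# Squared-error loss: sum of (y_j - hw(x_j))^2 over all examples.
def squared_error_loss(hw, xs, ys):
    return sum((y - hw(x)) ** 2 for x, y in zip(xs, ys))

# Example: hypothesis hw(x) = 2x against targets [2, 4, 7].
hw = lambda x: 2 * x
print(squared_error_loss(hw, [1, 2, 3], [2, 4, 7]))  # (2-2)^2 + (4-4)^2 + (7-6)^2 = 1
```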

2
Q

What is the Perceptron Learning Rule?

A

wi = wi + α xi (y - hw(x)), where α is the learning rate.
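A sketch of one update under this rule, assuming a hard-threshold hypothesis hw(x) = 1 if w·x >= 0 else 0, weights and inputs as plain lists, and an integer learning rate alpha for exact arithmetic:

```python
# Hard-threshold hypothesis: hw(x) = 1 if w.x >= 0 else 0.
def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

# One Perceptron Learning Rule update: wi <- wi + alpha * (y - hw(x)) * xi.
def perceptron_update(w, x, y, alpha=1):
    error = y - predict(w, x)
    return [wi + alpha * error * xi for wi, xi in zip(w, x)]

print(perceptron_update([-1, 0], [1, 1], 1))  # hw(x) = 0 but y = 1, so -> [0, 1]
```

When the prediction is already correct the error term is zero, so the weights are unchanged.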

3
Q

How are the weights changed for the Perceptron Learning Rule?

A

If y = 1 but hw(x) = 0:
Make wᵀx larger so that hw(x) outputs 1:
increase wi when xi is positive, decrease wi when xi is negative.

If y = 0 but hw(x) = 1, do the opposite.

4
Q

What is an epoch?

A

One complete pass through all of the training examples.

With the Perceptron Learning Rule, the weights are updated after each example, so one epoch performs one update per example; training typically runs for many epochs.
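A sketch of epoch-based training, assuming the hard-threshold perceptron from earlier cards: each outer iteration is one epoch (a full pass over the data), with a weight update after every example. Training an AND gate with a bias input is a hypothetical illustration:

```python
# Hard-threshold hypothesis: hw(x) = 1 if w.x >= 0 else 0.
def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

def train(w, data, alpha=1, epochs=10):
    for _ in range(epochs):          # each outer pass is one epoch
        for x, y in data:            # weights updated after each example
            error = y - predict(w, x)
            w = [wi + alpha * error * xi for wi, xi in zip(w, x)]
    return w

# AND gate; the leading 1 in each input is a bias term.
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = train([0, 0, 0], data)
print([predict(w, x) for x, _ in data])  # -> [0, 0, 0, 1]
```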

5
Q

What are some issues in Linear Classification?

A

The hard threshold makes the hypothesis non-differentiable, so gradient-based methods cannot be applied directly and weight learning can be unpredictable.

6
Q

What is Under/Overfitting?

A

Underfitting: the model performs poorly even on the training (seen) data.

Overfitting: the model fits the training data too specifically, so it generalises poorly to unseen data.

7
Q

What is Regularisation?

A

A technique for achieving better generalisation in ML by penalising model complexity.

Requires a loss function and a regularisation parameter λ (lambda): Cost(h) = Loss(h) + λ · Complexity(h).

8
Q

What is Dropout?

A

Encourages the network to adapt, not memorise.

Randomly drops a fraction of units (and their connections) during each training pass.
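A sketch of dropout applied to one layer's activations, assuming "inverted dropout": each unit is zeroed with probability drop_prob, and survivors are scaled by 1/(1 - drop_prob) so the expected activation is unchanged:

```python
import random

# Inverted dropout: zero each activation with probability drop_prob,
# scale the survivors by 1 / keep so the expected value is preserved.
def dropout(activations, drop_prob=0.5):
    keep = 1.0 - drop_prob
    return [a / keep if random.random() < keep else 0.0 for a in activations]
```

At test time dropout is simply switched off; the inverted scaling above means no extra correction is needed then.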

9
Q

What are the benefits of Stochastic gradient descent?

A

Quicker to converge in practice.

The noise from updating on single examples (rather than the full batch) helps to avoid overfitting.
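A sketch of stochastic gradient descent for a hypothetical 1-D model y ≈ w·x with squared-error loss: the weight is updated after every single example rather than once per full pass:

```python
# One SGD step on loss (y - w*x)^2; its gradient w.r.t. w is -2*(y - w*x)*x,
# so gradient descent moves w by +alpha * 2 * (y - w*x) * x.
def sgd_step(w, x, y, alpha=0.01):
    return w + alpha * 2 * (y - w * x) * x

def sgd(data, w=0.0, alpha=0.01, epochs=100):
    for _ in range(epochs):
        for x, y in data:        # one update per example, not per pass
            w = sgd_step(w, x, y, alpha)
    return w

# Data generated from y = 3x; w converges towards 3.
print(sgd([(1, 3), (2, 6), (3, 9)]))
```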
