Preventing Overfitting Flashcards

(25 cards)

1
Q

What is the goal of generalisation in machine learning?

A

To perform well on unseen data, not just the training set.

2
Q

What is early stopping?

A

A method that stops training when validation loss begins to increase.

3
Q

Why is early stopping useful?

A

It prevents overfitting and reduces unnecessary computation.
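
A minimal, hedged sketch of early stopping on a toy regression task (the synthetic data, learning rate, and `patience` value are illustrative assumptions, not from the cards):

```python
# Early-stopping sketch: linear regression trained by gradient descent,
# halted when validation loss stops improving for `patience` epochs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=200)  # noisy targets

# Separate validation set, monitored during training (see card 22).
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

w = np.zeros(10)
best_w, best_val, patience, bad_epochs = w.copy(), np.inf, 5, 0

for epoch in range(1000):
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= 0.01 * grad
    val_loss = np.mean((X_val @ w - y_val) ** 2)
    if val_loss < best_val:            # validation improved: remember weights
        best_val, best_w, bad_epochs = val_loss, w.copy(), 0
    else:                              # validation got worse
        bad_epochs += 1
        if bad_epochs >= patience:     # stop before overfitting sets in
            print(f"early stop at epoch {epoch}, best val loss {best_val:.4f}")
            break

w = best_w  # roll back to the best weights seen
```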

4
Q

What does data augmentation do?

A

Expands the training dataset by modifying existing samples.

5
Q

What are common image augmentation techniques?

A

Rotation, flipping, noise addition, and brightness variation.
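
A small sketch applying these four transforms to a dummy image with plain NumPy (the image shape and transform magnitudes are illustrative assumptions):

```python
# The four augmentations from this card, applied to a dummy grayscale image.
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, size=(32, 32))  # stand-in for a real image

rotated = np.rot90(image)                          # rotation (90 degrees)
flipped = np.fliplr(image)                         # horizontal flip
noisy = image + rng.normal(0, 0.05, image.shape)   # additive Gaussian noise
brighter = np.clip(image * 1.2, 0, 1)              # brightness variation

augmented_batch = [rotated, flipped, noisy, brighter]  # 4 extra samples from 1
```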

6
Q

What is a common audio augmentation technique?

A

Pitch shifting or adding background noise.

7
Q

What is a text augmentation method?

A

Synonym replacement or sentence shuffling.
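
A toy sketch of synonym replacement (the tiny synonym table is a made-up assumption; real pipelines draw on a thesaurus or a language model):

```python
# Toy synonym-replacement augmentation for text.
import random

SYNONYMS = {"quick": ["fast", "rapid"], "happy": ["glad", "cheerful"]}

def synonym_replace(sentence: str, rng: random.Random) -> str:
    # Swap each known word for a randomly chosen synonym; keep the rest.
    return " ".join(rng.choice(SYNONYMS[w]) if w in SYNONYMS else w
                    for w in sentence.split())

print(synonym_replace("the quick dog looked happy", random.Random(0)))
```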

8
Q

What is a drawback of data augmentation?

A

It can introduce irrelevant distortions or noise.

9
Q

What does dropout do in neural networks?

A

Randomly disables neurons during training to reduce overfitting.

10
Q

Why does dropout help generalisation?

A

It prevents reliance on specific neurons and encourages redundancy.

11
Q

What happens to dropout during inference?

A

It is disabled: all neurons are active, and activations are scaled so their expected magnitude matches training (inverted dropout applies this scaling during training instead).
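
A sketch of inverted dropout in NumPy, showing why the layer can simply pass activations through unchanged at inference (the dropout rate and shapes are illustrative assumptions):

```python
# Inverted-dropout sketch. Scaling by 1/(1-p) during training means
# inference uses the activations as-is (cards 9 and 11).
import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    if not training:               # inference: dropout disabled, full connectivity
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p  # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)      # rescale to match expected value

h = np.ones((4, 3))
print(dropout(h, p=0.5, training=True, rng=np.random.default_rng(0)))
print(dropout(h, training=False))  # unchanged at inference
```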

12
Q

What is a potential downside of dropout?

A

Slower training convergence and reduced run-to-run reproducibility due to the added randomness.

13
Q

What is explicit regularisation?

A

Adding a penalty to the loss function to control model complexity.

14
Q

What does L1 regularisation encourage?

A

Sparsity by driving some weights to zero.

15
Q

What is the formula for L1 regularisation penalty?

A

λ × Σ |wᵢ|, i.e. lambda times the sum of the absolute values of the weights.

16
Q

What does L2 regularisation encourage?

A

Shrinking all weights smoothly toward zero, without forcing any to exactly zero.

17
Q

What is the formula for L2 regularisation penalty?

A

λ × Σ wᵢ², i.e. lambda times the sum of the squared weights.
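
Both penalties from cards 15 and 17, computed directly in NumPy (the weight vector and λ value are illustrative assumptions):

```python
# The L1 and L2 penalties computed for a small weight vector.
import numpy as np

w = np.array([0.5, -1.2, 0.0, 3.0])
lam = 0.01

l1_penalty = lam * np.sum(np.abs(w))  # λ × Σ|w_i|  -> encourages sparsity
l2_penalty = lam * np.sum(w ** 2)     # λ × Σ w_i²  -> shrinks all weights

# During training, one of these is added to the data loss:
# total_loss = data_loss + l1_penalty   (L1 / lasso)
# total_loss = data_loss + l2_penalty   (L2 / ridge / weight decay)
```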

18
Q

What is the elastic net?

A

A regularisation method combining L1 and L2 penalties.

19
Q

What does the alpha parameter control in elastic net?

A

The mix between L1 and L2 regularisation.
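
A short sketch with scikit-learn's ElasticNet, assuming scikit-learn is available. Note the naming mismatch: sklearn calls the L1/L2 mix `l1_ratio`, while its `alpha` is the overall penalty strength (the λ of cards 15 and 17):

```python
# Elastic net fit on synthetic data; the L1 part can zero out weights.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] * 3 - X[:, 1] * 2 + rng.normal(size=100)  # only 2 useful features

model = ElasticNet(alpha=0.1, l1_ratio=0.5)  # 50/50 mix of L1 and L2
model.fit(X, y)
print(np.sum(model.coef_ == 0), "weights driven exactly to zero by the L1 part")
```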

20
Q

What is one benefit of L1 regularisation?

A

It performs feature selection by zeroing out irrelevant weights.

21
Q

What is one benefit of L2 regularisation?

A

It encourages small, distributed weights to prevent overfitting.

22
Q

What type of validation is needed for early stopping?

A

A separate validation set monitored during training.

23
Q

What kind of models benefit most from dropout?

A

Deep neural networks with many parameters.

24
Q

How does regularisation affect model weights?

A

It discourages overly large or unnecessary weights.

25

Q

Why is overfitting a problem in machine learning?

A

Because the model learns noise in the training data instead of general patterns.