Regularization Flashcards

1
Q

Regularization

A

Regularization is a technique used in machine learning to prevent overfitting by adding a penalty term to the loss function. The penalty limits the complexity of the model, improving its ability to generalize to unseen data. It is a powerful technique, but it requires careful choice of the penalty term and tuning of its strength.

2
Q
Definition
A

Regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting. It works by adding a penalty term to the loss function that the model seeks to minimize.
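
The penalized objective can be sketched directly. This is a minimal numpy illustration, assuming a mean-squared-error data loss; the names `regularized_loss`, `lam`, and `penalty` are illustrative, not from the text:

```python
import numpy as np

def mse(w, X, y):
    # ordinary data loss: mean squared error of the predictions X @ w
    return np.mean((X @ w - y) ** 2)

def regularized_loss(w, X, y, lam, penalty):
    # the model minimizes the data loss plus a weighted penalty on w
    return mse(w, X, y) + lam * penalty(w)
```

With `lam = 0` this reduces to the unpenalized loss; larger values of `lam` trade training fit for simpler coefficient vectors.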

3
Q

a. L1 Regularization (Lasso)

A

L1 regularization adds a penalty equivalent to the absolute value of the magnitude of the coefficients. This can result in some coefficients being shrunk to zero, effectively performing feature selection.
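
The shrink-to-exactly-zero behaviour comes from the shape of the L1 penalty: its proximal step is soft-thresholding, which zeroes out small coefficients. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def l1_penalty(w):
    # L1 penalty: sum of absolute values of the coefficients
    return np.sum(np.abs(w))

def soft_threshold(w, lam):
    # proximal step for the L1 penalty: shrinks every coefficient
    # toward zero and sets those smaller than lam exactly to zero
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# e.g. soft_threshold(np.array([0.3, -1.5, 0.05]), 0.1) -> [0.2, -1.4, 0.0]
```

Note how the third coefficient becomes exactly 0.0 — this is the feature-selection effect of lasso.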

4
Q

b. L2 Regularization (Ridge)

A

L2 regularization adds a penalty equivalent to the square of the magnitude of the coefficients. This shrinks all coefficients toward zero without setting any exactly to zero, spreading weight more evenly across correlated features.
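
For linear regression the L2-penalized objective has a closed-form solution, which makes the shrinkage easy to see. A sketch assuming the standard ridge normal equations (no intercept handling):

```python
import numpy as np

def ridge_fit(X, y, lam):
    # closed-form ridge estimate: (X^T X + lam * I)^{-1} X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
```

With `lam = 0` this is ordinary least squares; increasing `lam` shrinks every coefficient toward zero but never exactly to zero.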

5
Q

c. Elastic Net Regularization

A

Elastic net combines the L1 and L2 penalties of lasso and ridge, allowing both feature selection (from the L1 term) and even shrinkage across correlated features (from the L2 term).
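
The combined penalty can be sketched directly; here `alpha` mixes the two terms. This mirrors the common parameterization, though libraries differ in exact scaling:

```python
import numpy as np

def elastic_net_penalty(w, lam, alpha):
    # alpha = 1 gives pure L1 (lasso), alpha = 0 gives pure L2 (ridge)
    l1 = np.sum(np.abs(w))
    l2 = np.sum(w ** 2)
    return lam * (alpha * l1 + (1 - alpha) * l2)
```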

6
Q
Usage in Machine Learning Models
A

Regularization is used in many machine learning algorithms, including linear regression, logistic regression, support vector machines, and neural networks. It’s a key part of the training process, and its goal is to prevent overfitting and improve model generalization.

7
Q
Advantages
A

Regularization can help prevent overfitting by constraining the model’s complexity. It can also help with feature selection, especially in the case of L1 regularization.

8
Q
Limitations
A

The choice of regularization term and its strength (often denoted by lambda or alpha) is crucial and can greatly influence the model’s performance. These hyperparameters often need to be tuned using techniques like cross-validation. Too much regularization can also lead to underfitting, where the model is too simple to capture the underlying patterns in the data.

9
Q
Parameter Tuning
A

The strength of the regularization is controlled by a hyperparameter. This hyperparameter needs to be carefully tuned to find the right level of regularization. Techniques such as grid search, random search, or Bayesian optimization can be used to find the optimal value.
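
A minimal grid-search sketch over the regularization strength, scoring each candidate on a held-out validation set. The ridge fit and names here are illustrative:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # closed-form ridge estimate used as the model being tuned
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def best_lambda(X_train, y_train, X_val, y_val, lambdas):
    # evaluate each candidate strength on held-out data, keep the best
    def val_mse(lam):
        w = ridge_fit(X_train, y_train, lam)
        return np.mean((X_val @ w - y_val) ** 2)
    return min(lambdas, key=val_mse)
```

In practice the same idea is usually run inside k-fold cross-validation rather than a single train/validation split.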
