Regularization Flashcards

1
Q

What happens to our linear regression model if we have three columns in our data: x, y, z  —  and z is a sum of x and y?

A

We would not be able to fit the regression with the usual closed-form (normal-equation) solution. Because z is linearly dependent on x and y, the matrix XᵀX would be singular (not invertible).
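
A minimal sketch (using NumPy, which is not part of the original card) of why the matrix becomes rank-deficient:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = rng.normal(size=100)
    z = x + y                              # z is an exact linear combination of x and y

    X = np.column_stack([x, y, z])
    print(np.linalg.matrix_rank(X.T @ X))  # 2 instead of 3: X^T X is singular
    print(np.linalg.cond(X.T @ X))         # enormous condition number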

2
Q

What happens to our linear regression model if the column z in the data is a sum of columns x and y and some random noise?

A

This is a problem because predictor variables should be independent. If the degree of correlation between them is high, it can cause problems when training the model and interpreting the results.

A key goal of regression analysis is to isolate the relationship between each independent variable and the dependent variable. The interpretation of a regression coefficient is that it represents the mean change in the dependent variable for each 1 unit change in an independent variable when you hold all of the other independent variables constant. That last portion is crucial for our discussion about multicollinearity.

The idea is that you can change the value of one independent variable and not the others. However, when independent variables are correlated, it indicates that changes in one variable are associated with shifts in another variable. The stronger the correlation, the more difficult it is to change one variable without changing another. It becomes difficult for the model to estimate the relationship between each independent variable and the dependent variable independently because the independent variables tend to change in unison.
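
An illustrative sketch (NumPy assumed) of how a near-collinear column makes the problem ill-conditioned, so the coefficient estimates become very sensitive to small changes in the data:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = rng.normal(size=200)
    z = x + y + rng.normal(scale=0.01, size=200)  # almost a sum of x and y

    X = np.column_stack([x, y, z])
    print(np.linalg.cond(X.T @ X))  # huge condition number -> unstable coefficient estimates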

3
Q

What is regularization? Why do we need it?

A

Regularization is used to reduce overfitting in machine learning models. It helps models generalize better and makes them more robust to outliers and noise in the data.

4
Q

Which regularization techniques do you know?

A

There are two main types of regularization:

L1 Regularization (Lasso regularization) - Adds the sum of absolute values of the coefficients to the cost function.

L2 Regularization (Ridge regularization) - Adds the sum of squares of coefficients to the cost function.
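
A quick sketch of both penalties with scikit-learn (the alpha values here are arbitrary):

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([1.0, 0.0, 2.0, 0.0, -1.0]) + rng.normal(scale=0.1, size=100)

    lasso = Lasso(alpha=0.1).fit(X, y)  # L1 penalty: alpha * sum of |coefficients|
    ridge = Ridge(alpha=1.0).fit(X, y)  # L2 penalty: alpha * sum of squared coefficients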

5
Q

What kind of regularization techniques are applicable to linear models?

A

AIC/BIC, Ridge regression, Lasso, Elastic Net, Basis pursuit denoising, Rudin–Osher–Fatemi model (TV), Potts model, RLAD, Dantzig Selector, SLOPE

6
Q

What does L2 regularization look like in a linear model?

A

L2 regularization adds a penalty term to our cost function equal to the sum of squares of the model's coefficients multiplied by a lambda hyperparameter. This technique shrinks the coefficients toward zero and is widely used when we have a lot of features that might correlate with each other.
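
A rough sketch (NumPy assumed; intercept handling omitted) of the resulting closed-form solution, where lam is the lambda hyperparameter:

    import numpy as np

    def ridge_weights(X, y, lam):
        """Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y."""
        n_features = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)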

7
Q

How do we select the right regularization parameters?

A

Regularization parameters can be chosen using a grid search. For example, the scikit-learn documentation

https://scikit-learn.org/stable/modules/linear_model.html

gives the regularized cost formula; the alpha in that formula can be found by running a random search or a grid search over a set of candidate values and selecting the alpha that gives the lowest cross-validation (or validation) error.
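
One possible way to tune alpha with scikit-learn (the grid of values below is arbitrary):

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X[:, 0] + rng.normal(scale=0.1, size=100)

    search = GridSearchCV(Ridge(), {"alpha": np.logspace(-3, 3, 13)}, cv=5)
    search.fit(X, y)
    print(search.best_params_["alpha"])

scikit-learn also provides RidgeCV and LassoCV, which perform this kind of alpha selection internally.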

8
Q

What’s the effect of L2 regularization on the weights of a linear model?

A

L2 regularization penalizes larger weights more severely (due to the squared penalty term), which encourages weight values to decay toward zero.
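
A small sketch of this effect: ridge coefficients shrink toward zero as alpha grows (the data and alpha values are made up for illustration):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([3.0, -2.0, 1.0]) + rng.normal(scale=0.5, size=100)

    for alpha in [0.01, 1.0, 100.0]:
        print(alpha, Ridge(alpha=alpha).fit(X, y).coef_)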

9
Q

What does L1 regularization look like in a linear model?

A

L1 regularization adds a penalty term to our cost function equal to the sum of the absolute values of the model's coefficients multiplied by a lambda hyperparameter.
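
A minimal sketch of the resulting cost function (NumPy assumed; lam is the lambda hyperparameter):

    import numpy as np

    def lasso_cost(X, y, w, lam):
        """Squared-error loss plus an L1 penalty: lam * sum of |w_i|."""
        residuals = y - X @ w
        return np.sum(residuals ** 2) + lam * np.sum(np.abs(w))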

10
Q

What’s the difference between L2 and L1 regularization?

A

Penalty terms: L1 regularization uses the sum of the absolute values of the weights, while L2 regularization uses the sum of the weights squared.
Feature selection: L1 performs feature selection by reducing the coefficients of some predictors to 0, while L2 does not.
Computational efficiency: L2 has an analytical solution, while L1 does not.
Multicollinearity: L2 addresses multicollinearity by constraining the coefficient norm.
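
The feature-selection difference can be seen in a small illustrative sketch (scikit-learn assumed; only the first two features actually matter here):

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

    print(np.sum(Lasso(alpha=0.5).fit(X, y).coef_ == 0))  # several coefficients exactly zero
    print(np.sum(Ridge(alpha=0.5).fit(X, y).coef_ == 0))  # typically none exactly zero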

11
Q

Can we have both L1 and L2 regularization components in a linear model?

A

Yes, elastic net regularization combines L1 and L2 regularization.
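
A sketch with scikit-learn's ElasticNet (l1_ratio balances the L1 and L2 parts; the values below are arbitrary):

    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=100)

    model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
    print(model.coef_)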

12
Q

What’s the interpretation of the bias term in linear models?

A

Bias is simply the difference between the predicted value and the actual (true) value. It can be interpreted as the distance between the average prediction and the true value, i.e. true value minus mean(predictions). But don't confuse accuracy with bias.

13
Q

How do we interpret weights in linear models?

A

Without normalizing weights or variables, if you increase the corresponding predictor by one unit, the coefficient represents on average how much the output changes. By the way, this interpretation still works for logistic regression - if you increase the corresponding predictor by one unit, the weight represents the change in the log of the odds.

If the variables are normalized, we can interpret the weights of a linear model as the relative importance of each variable for the predicted result.
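
A small sketch of the log-odds interpretation for logistic regression (scikit-learn assumed): increasing a predictor by one unit multiplies the odds by exp(weight).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    print(np.exp(model.coef_))  # odds ratios for a one-unit increase in each feature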

14
Q

If a weight for one variable is higher than for another  —  can we say that this variable is more important?

A

Yes - if your predictor variables are normalized.

Without normalization, the weight represents the change in the output per unit change in the predictor. If you have a predictor with a huge range and scale that is used to predict an output with a very small range - for example, using each nation’s GDP to predict maternal mortality rates - your coefficient should be very small. That does not necessarily mean that this predictor variable is not important compared to the others.
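
An illustrative sketch of the scale effect (the GDP-style numbers are made up): rescaling a predictor changes its weight but not its importance.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    x_small = rng.normal(size=300)          # small-range predictor
    x_large = 1e9 * rng.normal(size=300)    # huge-range predictor (GDP-sized)
    y = 2 * x_small + 3e-9 * x_large + rng.normal(scale=0.1, size=300)

    model = LinearRegression().fit(np.column_stack([x_small, x_large]), y)
    print(model.coef_)  # roughly [2, 3e-9]: the tiny weight is not "unimportant"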

15
Q

When do we need to perform feature normalization for linear models? When it’s okay not to do it?

A

Feature normalization is necessary for L1 and L2 regularizations. The idea of both methods is to penalize all the features relatively equally. This can’t be done effectively if every feature is scaled differently.

Linear regression without regularization can be used without feature normalization. Regularization also helps make the analytical solution more stable, since the regularization term is added to XᵀX before inverting it.
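
A common pattern (one option among several) is to standardize features before fitting a regularized model, so the penalty treats all coefficients on the same scale:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = np.column_stack([rng.normal(size=100), 1e6 * rng.normal(size=100)])  # very different scales
    y = X[:, 0] + 1e-6 * X[:, 1] + rng.normal(scale=0.1, size=100)

    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)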
