Chapter 4: Training Linear Models Flashcards

1
Q

What Linear Regression training algorithm can you use if you have a training set with millions of features?

A

If you have a training set with millions of features, you can use Stochastic Gradient Descent or Mini-batch Gradient Descent, and perhaps Batch Gradient Descent if the training set fits in memory. But you cannot use the Normal Equation because the computational complexity grows quickly (more than quadratically) with the number of features.
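A minimal sketch of this with scikit-learn's SGDRegressor, using synthetic data purely for illustration (array sizes and hyperparameters are arbitrary choices, not from the card):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Synthetic stand-in for a very wide training set (sizes are arbitrary).
rng = np.random.default_rng(42)
X = rng.standard_normal((1_000, 10_000))   # many features: the Normal Equation would be costly here
y = 3.0 * X[:, 0] + rng.standard_normal(1_000)

# Stochastic Gradient Descent: each step touches a small batch of instances,
# and the cost per step grows only linearly with the number of features.
sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, random_state=42)
sgd_reg.fit(X, y)
print(sgd_reg.coef_[:5])   # first few learned weights
```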

2
Q

Suppose the features in your training set have very different scales. What algorithms might suffer from this, and how? What can you do about it?

A

If the features in your training set have very different scales, the cost function will have the shape of an elongated bowl, so the Gradient Descent algorithms will take a long time to converge. To solve this, you should scale the data before training the model. Note that the Normal Equation will work just fine without scaling.
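One way to sketch the fix with scikit-learn (the pipeline step names and toy data are my own, just for illustration) is to put a StandardScaler in front of the Gradient-Descent-based estimator:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor

# Toy data with wildly different feature scales (illustrative only).
rng = np.random.default_rng(42)
X = np.c_[rng.uniform(0, 1, 200), rng.uniform(0, 100_000, 200)]
y = 2 * X[:, 0] + 0.001 * X[:, 1] + rng.standard_normal(200)

# StandardScaler gives each feature zero mean and unit variance,
# turning the elongated-bowl cost surface back into a rounder one.
model = Pipeline([
    ("scaler", StandardScaler()),
    ("sgd_reg", SGDRegressor(max_iter=1000, tol=1e-3, random_state=42)),
])
model.fit(X, y)
```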

3
Q

Can Gradient Descent get stuck in a local minimum when training a Logistic Regression model?

A

Gradient Descent cannot get stuck in a local minimum when training a Logistic Regression model because the cost function is convex (if you draw a straight line between any two points on the curve, the line never crosses the curve).
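For reference, the cost function in question is the log loss, which is convex in the parameter vector; this is the standard formulation:

```latex
J(\boldsymbol{\theta}) = -\frac{1}{m} \sum_{i=1}^{m}
  \Bigl[\, y^{(i)} \log\bigl(\hat{p}^{(i)}\bigr)
       + \bigl(1 - y^{(i)}\bigr) \log\bigl(1 - \hat{p}^{(i)}\bigr) \Bigr],
\qquad
\hat{p}^{(i)} = \sigma\bigl(\boldsymbol{\theta}^{\mathsf{T}} \mathbf{x}^{(i)}\bigr)
```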

4
Q

Do all Gradient Descent algorithms lead to the same model provided you let them run long enough?

A

If the optimization problem is convex (such as Linear Regression or Logistic Regression), and assuming the learning rate is not too high, then all Gradient Descent algorithms will approach the global optimum and end up producing fairly similar models. However, unless you gradually reduce the learning rate, Stochastic GD and Mini-batch GD will never truly converge; instead, they will keep jumping back and forth around the global optimum. This means that even if you let them run for a very long time, these Gradient Descent algorithms will produce slightly different models.

5
Q

Suppose you use Batch Gradient Descent and you plot the validation error at every epoch. If you notice that the validation error consistently goes up, what is likely going on? How can you fix this?

A

If the validation error consistently goes up after every epoch, then one possibility is that the learning rate is too high and the algorithm is diverging. If the training error also goes up, then this is clearly the problem and you should reduce the learning rate. However, if the training error is not going up, then your model is overfitting the training set and you should stop training.

6
Q

Is it a good idea to stop Mini-batch Gradient Descent immediately when the validation error goes up?

A

Due to their random nature, neither Stochastic Gradient Descent nor Mini-batch Gradient Descent is guaranteed to make progress at every single training iteration. So if you immediately stop training when the validation error goes up, you may stop much too early, before the optimum is reached. A better option is to save the model at regular intervals, and when it has not improved for a long time (meaning it will probably never beat the record), you can revert to the best saved model.
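A rough sketch of that save-and-revert idea, assuming an SGDRegressor trained one epoch at a time via warm_start (the toy data, patience value, and variable names are all illustrative):

```python
import copy
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Made-up data standing in for a real training set.
rng = np.random.default_rng(42)
X = rng.standard_normal((500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.standard_normal(500)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

# warm_start=True: each call to fit() continues from the previous weights,
# so one call corresponds to one extra epoch of training.
sgd_reg = SGDRegressor(max_iter=1, tol=None, warm_start=True,
                       learning_rate="constant", eta0=0.005, random_state=42)

best_val_error = float("inf")
best_model = None
epochs_without_improvement = 0

for epoch in range(1000):
    sgd_reg.fit(X_train, y_train)
    val_error = mean_squared_error(y_val, sgd_reg.predict(X_val))
    if val_error < best_val_error:
        best_val_error = val_error
        best_model = copy.deepcopy(sgd_reg)   # remember the best model so far
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= 50:      # patience: don't stop at the first uptick
        break
# best_model now holds the weights from the epoch with the lowest validation error
```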

7
Q

Which Gradient Descent algorithm (among those we discussed) will reach the vicinity of the optimal solution the fastest? Which will actually converge? How can you make the others converge as well?

A

Stochastic Gradient Descent has the fastest training iteration since it considers only one training instance at a time, so it is generally the first to reach the vicinity of the global optimum (or Mini-batch GD with a very small mini-batch size). However, only Batch Gradient Descent will actually converge, given enough training time. As mentioned, Stochastic GD and Mini-batch GD will bounce around the optimum, unless you gradually reduce the learning rate.
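A sketch of the "gradually reduce the learning rate" remedy: a hand-rolled Stochastic Gradient Descent loop with a simple learning schedule (the t0/t1 values and toy data are arbitrary):

```python
import numpy as np

# Toy one-feature linear regression data (illustrative only).
rng = np.random.default_rng(42)
m = 100
X = 2 * rng.random((m, 1))
y = 4 + 3 * X[:, 0] + rng.standard_normal(m)
X_b = np.c_[np.ones((m, 1)), X]           # add x0 = 1 for the bias term

t0, t1 = 5, 50                            # schedule hyperparameters (arbitrary choice)

def learning_schedule(t):
    # The learning rate shrinks over time so SGD can settle at the optimum
    # instead of bouncing around it forever.
    return t0 / (t + t1)

theta = rng.standard_normal(2)            # random initialization
n_epochs = 50
for epoch in range(n_epochs):
    for i in range(m):
        random_index = rng.integers(m)    # pick one instance at random (that's the "stochastic" part)
        xi = X_b[random_index:random_index + 1]
        yi = y[random_index:random_index + 1]
        gradients = 2 * xi.T @ (xi @ theta - yi)
        eta = learning_schedule(epoch * m + i)
        theta = theta - eta * gradients
# theta should end up close to [4, 3], the true bias and slope
```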

8
Q

Suppose you are using Polynomial Regression. You plot the learning curves and you notice that there is a large gap between the training error and the validation error. What is happening? What are three ways to solve this?

A

If the validation error is much higher than the training error, this is likely because your model is overfitting the training set. One way to try to fix this is to reduce the polynomial degree: a model with fewer degrees of freedom is less likely to overfit. Another thing you can try is to regularize the model, for example by adding an ℓ2 penalty (Ridge) or an ℓ1 penalty (Lasso) to the cost function; this will also reduce the degrees of freedom of the model. Lastly, you can try to increase the size of the training set.
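As an illustration of the second remedy, here is a hedged sketch that regularizes a high-degree Polynomial Regression model with Ridge (step names, degree, and alpha are arbitrary choices):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge

# Noisy quadratic data (illustrative only).
rng = np.random.default_rng(42)
X = 6 * rng.random((100, 1)) - 3
y = 0.5 * X[:, 0] ** 2 + X[:, 0] + 2 + rng.standard_normal(100)

# A degree-10 polynomial would happily overfit 100 points;
# the Ridge (l2) penalty reins in the high-degree coefficients.
model = Pipeline([
    ("poly", PolynomialFeatures(degree=10, include_bias=False)),
    ("scaler", StandardScaler()),
    ("ridge", Ridge(alpha=1.0)),
])
model.fit(X, y)
```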

9
Q

Suppose you are using Ridge Regression and you notice that the training error and the validation error are almost equal and fairly high. Would you say that the model suffers from high bias or high variance? Should you increase the regularization hyperparameter α or reduce it?

A

If both the training error and the validation error are almost equal and fairly high, the model is likely underfitting the training set, which means it has a high bias. You should try reducing the regularization hyperparameter α.
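A quick way to sketch this check, using made-up data and two arbitrary α values; if the model is underfitting, the smaller α should yield the lower validation error:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Toy data (illustrative only); in practice use your own training set.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.5, -2.0, 0.7, 0.0, 3.0]) + rng.standard_normal(200)

# Compare a strongly and a weakly regularized Ridge model: when the model
# is underfitting, loosening the regularization should lower the error.
for alpha in (100.0, 0.1):                 # arbitrary example values
    scores = cross_val_score(Ridge(alpha=alpha), X, y,
                             scoring="neg_mean_squared_error", cv=5)
    print(f"alpha={alpha}: mean validation MSE = {-scores.mean():.3f}")
```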

10
Q

Why would you want to use Ridge Regression instead of Linear Regression?

A

A model with some regularization typically performs better than a model without any regularization, so you should generally prefer Ridge Regression over plain Linear Regression. (Moreover, the Normal Equation requires computing the inverse of a matrix, but that matrix is not always invertible. In contrast, the matrix for Ridge Regression is always invertible.)
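On the parenthetical point, the Ridge closed-form solution adds α times an almost-identity matrix inside the term being inverted, which makes that term invertible for any α > 0:

```latex
\hat{\boldsymbol{\theta}} =
  \bigl( \mathbf{X}^{\mathsf{T}} \mathbf{X} + \alpha \mathbf{A} \bigr)^{-1}
  \mathbf{X}^{\mathsf{T}} \mathbf{y}
```

Here A is the identity matrix, except with a 0 in the top-left cell so that the bias term is not regularized.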

11
Q

Why would you want to use Lasso instead of Ridge Regression?

A

Lasso Regression uses an ℓ1 penalty, which tends to push the weights down to exactly zero. This leads to sparse models, where all weights are zero except for the most important ones. This is a way to perform feature selection automatically, which is good if you suspect that only a few features actually matter. When you are not sure, you should prefer Ridge Regression.
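A small sketch of the sparsity effect, using made-up data where only two of twenty features matter (the alpha value is arbitrary):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data: only features 0 and 5 actually influence the target.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 20))
y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.standard_normal(200)

lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
print("non-zero coefficients:", np.flatnonzero(lasso_reg.coef_))
# Expect roughly the informative features (0 and 5) to survive; most others go to exactly 0.
```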

12
Q

Why would you want to use Elastic Net instead of Lasso?

A

Elastic Net is generally preferred over Lasso, since Lasso may behave erratically in some cases (when several features are strongly correlated or when there are more features than training instances). However, it does add an extra hyperparameter to tune. If you just want Lasso without the erratic behavior, you can use Elastic Net with an l1_ratio close to 1.
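A minimal sketch of that last suggestion, with arbitrary hyperparameter values and toy data containing two strongly correlated features:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Two strongly correlated features -- the kind of case where plain Lasso can get erratic.
rng = np.random.default_rng(42)
x1 = rng.standard_normal(200)
X = np.c_[x1, x1 + 0.01 * rng.standard_normal(200)]
y = 2.0 * x1 + rng.standard_normal(200)

# l1_ratio close to 1 keeps most of Lasso's sparsity, while the small l2
# component stabilizes the solution for correlated features.
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.95)
elastic_net.fit(X, y)
print(elastic_net.coef_)
```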

13
Q

Suppose you want to classify pictures as outdoor/indoor and daytime/nighttime. Should you implement two Logistic Regression classifiers or one Softmax Regression classifier?

A

If you want to classify pictures as outdoor/indoor and daytime/nighttime, since these are not exclusive classes (i.e., all four combinations are possible) you should train two Logistic Regression classifiers.
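A rough sketch of the two-classifier setup, with made-up features and labels; each Logistic Regression classifier answers one independent binary question:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fake image features plus two independent binary labels (illustrative only).
rng = np.random.default_rng(42)
X = rng.standard_normal((300, 8))
y_outdoor = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = outdoor, 0 = indoor
y_daytime = (X[:, 2] - X[:, 3] > 0).astype(int)         # 1 = daytime, 0 = nighttime

# One Logistic Regression classifier per question, since the two labels
# are not mutually exclusive (all four combinations can occur).
outdoor_clf = LogisticRegression().fit(X, y_outdoor)
daytime_clf = LogisticRegression().fit(X, y_daytime)

new_picture = X[:1]
print(outdoor_clf.predict(new_picture), daytime_clf.predict(new_picture))
```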
