Linear Regression Flashcards

1
Q

What is regression? Which models can you use to solve a regression problem? 👶

A

Regression is a part of supervised machine learning. Regression models investigate the relationship between a dependent variable (target) and one or more independent variables (predictors). Here are some common regression models (a short code sketch follows the list):

Linear Regression establishes a linear relationship between the target and the predictor(s). It predicts a numeric value, and its graph is a straight line.
Polynomial Regression has a regression equation with powers of the independent variable greater than 1. Its graph is a curve that fits the data points.
Ridge Regression helps when predictors are highly correlated (the multicollinearity problem). It penalizes the squares of the regression coefficients but doesn't allow the coefficients to reach exactly zero (uses L2 regularization).
Lasso Regression penalizes the absolute values of the regression coefficients and allows some of the coefficients to reach exactly zero (thereby allowing feature selection; uses L1 regularization).
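A minimal sketch of the four models above in scikit-learn; the synthetic dataset and the alpha values are illustrative assumptions, not tuned recommendations:

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=100)

models = {
    "linear": LinearRegression(),
    "polynomial (degree 2)": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "ridge (L2 penalty)": Ridge(alpha=1.0),
    "lasso (L1 penalty)": Lasso(alpha=0.1),
}
for name, model in models.items():
    model.fit(X, y)  # least-squares fit (penalized for ridge/lasso)
    print(name, "R^2:", round(model.score(X, y), 3))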

2
Q

What is linear regression? When do we use it? 👶

A

Linear regression is a model that assumes a linear relationship between the input variables (X) and the single output variable (y).

It can be written as a simple equation:

y = B0 + B1*x1 + … + Bn*xn

where the B values are the regression coefficients, the x values are the independent (explanatory) variables, and y is the dependent variable.

The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.

Simple linear regression:

y = B0 + B1*x1
Multiple linear regression:

y = B0 + B1*x1 + … + Bn*xn
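For concreteness, a minimal simple-linear-regression fit in scikit-learn; the data is a made-up example with known coefficients:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(50, 1))                      # one explanatory variable
y = 1.0 + 2.0 * x[:, 0] + rng.normal(scale=0.5, size=50)  # true B0 = 1, B1 = 2

model = LinearRegression().fit(x, y)
print("B0 (intercept):", model.intercept_)  # close to 1.0
print("B1 (slope):", model.coef_[0])        # close to 2.0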

3
Q

Major assumptions of linear regression

A

There are several assumptions of linear regression. If any of them is violated, model predictions and interpretation may be worthless or misleading.

Linear relationship between features and target variable.
Additivity means that the effect of a change in one feature on the target does not depend on the values of the other features. For example, suppose a model for predicting a company's revenue has two features: the number of items a sold and the number of items b sold. When the company sells more of item a, revenue increases, independently of the number of items b sold. But if customers who buy a stop buying b, the additivity assumption is violated.
Features are not correlated (no collinearity), since otherwise it is difficult to separate out the individual effects of collinear features on the target variable.
Errors are independently and identically normally distributed (yi = B0 + B1*x1i + … + errori):
No correlation between errors (e.g., between consecutive errors in the case of time series data).
Constant variance of errors (homoscedasticity). For example, in time series, seasonal patterns can increase errors in seasons with higher activity.
Errors are normally distributed; if the error distribution is significantly non-normal, confidence intervals may be too wide or too narrow. (A quick residual-diagnostics sketch follows this list.)
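A hedged sketch of two quick residual checks for these assumptions, using statsmodels and scipy on made-up data (any OLS fit that exposes residuals would do):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=200)

model = sm.OLS(y, sm.add_constant(X)).fit()
resid = model.resid

# Normality of errors: Shapiro-Wilk (a large p-value gives no evidence against normality)
print("Shapiro-Wilk p-value:", stats.shapiro(resid).pvalue)
# Independence of errors: Durbin-Watson (values near 2 suggest no autocorrelation)
print("Durbin-Watson:", durbin_watson(resid))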

4
Q

What is the normal distribution and why do we care about it?

A

The normal distribution is a continuous probability distribution whose probability density function takes the following formula:

f(x) = (1 / (σ * sqrt(2π))) * exp(−(x − μ)² / (2σ²))

where μ is the mean and σ is the standard deviation of the distribution.

The normal distribution derives its importance from the Central Limit Theorem, which states that the mean of a large number of independent samples follows a normal distribution regardless of the distribution they were drawn from, i.e., the distribution of the sample mean is approximately normal. It is important that the samples are independent of one another.

This is powerful because it helps us study processes whose population distribution is unknown to us.
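A small simulation of the CLT; the exponential population and the sample sizes are arbitrary illustrative choices:

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # heavily skewed, far from normal

# Means of 5000 independent samples of size 50 each
sample_means = np.array([rng.choice(population, size=50).mean() for _ in range(5000)])

print("population skewness:", skew(population))     # about 2 for an exponential
print("sample-mean skewness:", skew(sample_means))  # close to 0, i.e. roughly normal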

5
Q

How do we check if a variable follows the normal distribution? ‍⭐️

A

Plot a histogram of the sampled data. If a bell-shaped "normal" curve fits the histogram, then the hypothesis that the underlying random variable follows the normal distribution cannot be rejected.
Check the skewness and kurtosis of the sampled data. Skewness = 0 and kurtosis = 3 are typical for a normal distribution, so the farther these statistics are from those values, the more non-normal the distribution.
Use the Kolmogorov-Smirnov and/or Shapiro-Wilk tests for normality. These compare the whole sample distribution against a normal one, so they capture departures in skewness and kurtosis simultaneously.
Check a Quantile-Quantile (Q-Q) plot: a scatterplot created by plotting two sets of quantiles against one another. On a normal Q-Q plot, normally distributed data fall along a roughly straight line. (All four checks appear in the sketch below.)
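A hedged sketch of all four checks; the data here is a stand-in sample, and with real data you would pass in your own array:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=5, scale=2, size=500)  # stand-in for the sampled variable

# 1. Histogram: look for the bell shape
plt.hist(data, bins=30)

# 2. Skewness (~0 for normal) and kurtosis (~3 for normal, with fisher=False)
print("skewness:", stats.skew(data))
print("kurtosis:", stats.kurtosis(data, fisher=False))

# 3. Formal tests: large p-values do not reject normality
print("Shapiro-Wilk p:", stats.shapiro(data).pvalue)
print("Kolmogorov-Smirnov p:",
      stats.kstest(data, "norm", args=(data.mean(), data.std())).pvalue)

# 4. Q-Q plot: points near the straight line suggest normality
plt.figure()
stats.probplot(data, dist="norm", plot=plt)
plt.show()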

6
Q

What if we want to build a model for predicting prices? Are prices distributed normally? Do we need to do any pre-processing for prices? ‍⭐️

A

Real-world data is rarely normal. In particular, real-world or uncleaned datasets almost always show some skewness, and price prediction is no exception. The price of houses (or of anything else under consideration) depends on a number of factors, so there is a good chance of skewed values, i.e., outliers in data-science terms.

Yes, you may need to do pre-processing. Most probably, you will need to remove the outliers; and since prices are typically right-skewed, a log transform is also a common step to make the distribution near-normal.
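An illustrative sketch of the log-transform step; the lognormal data is a stand-in for real prices:

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
prices = rng.lognormal(mean=12, sigma=0.8, size=10_000)  # stand-in for house prices

log_prices = np.log1p(prices)  # the log compresses the long right tail
print("skewness before:", skew(prices))     # strongly positive
print("skewness after:", skew(log_prices))  # much closer to 0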

7
Q

What methods for solving linear regression do you know? ‍⭐️

A

To solve linear regression, you need to find the coefficients that minimize the sum of squared errors.

Matrix algebra method: let's say you have X, a matrix of features, and y, a vector with the values you want to predict. Working through the matrix algebra and the minimization problem, you get this closed-form solution: B = (X^T X)^(-1) X^T y.

But solving this requires finding a matrix inverse, which can be time-consuming, if not impossible (X^T X may be singular). Luckily, there are methods like Singular Value Decomposition (SVD) or QR decomposition that can reliably calculate this part (called the pseudo-inverse) without explicitly computing an inverse. The popular Python ML library scikit-learn (sklearn) uses SVD to solve least squares, as the sketch below illustrates.
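A minimal numpy sketch of both routes; the data is synthetic with known coefficients:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Direct closed-form solution (fine for small, well-conditioned problems)
beta_direct = np.linalg.inv(X.T @ X) @ X.T @ y

# SVD-based least squares, as robust implementations use (no explicit inverse)
beta_svd, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_direct)  # both are close to [1.5, -2.0, 0.5]
print(beta_svd)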

Alternative method: Gradient Descent. See explanation below.

8
Q

What is gradient descent and how does it work?

A

Gradient descent is an optimization algorithm that uses the calculus concept of the gradient to reach a local or global minimum of a function. It works by computing the gradient at the current point and repeatedly updating the point in the direction of the negative gradient, until it reaches a local or global minimum, where further iterations return values equal or very close to the current point. It is widely used in machine learning applications; a toy implementation follows.
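A toy gradient descent minimizing MSE for simple linear regression; the learning rate and iteration count are illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=200)
y = 3.0 + 2.0 * x + rng.normal(scale=0.1, size=200)  # true B0 = 3, B1 = 2

b0, b1, lr = 0.0, 0.0, 0.1  # start at zero; lr is the step size
for _ in range(2000):
    error = (b0 + b1 * x) - y
    # Gradients of MSE with respect to b0 and b1; step against them
    b0 -= lr * 2 * error.mean()
    b1 -= lr * 2 * (error * x).mean()

print(b0, b1)  # approaches 3.0 and 2.0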

9
Q

What is the normal equation?

A

Normal equations are equations obtained by setting the partial derivatives of the sum of squared errors to zero (least squares). For linear regression, minimizing SSE(B) = (y − XB)^T (y − XB) yields the normal equation X^T X B = X^T y, whose solution B = (X^T X)^(-1) X^T y (the closed-form formula above) estimates the parameters of a multiple linear regression.

10
Q

What is SGD and what is the difference between it and the usual gradient descent?

A

In both gradient descent (GD) and stochastic gradient descent (SGD), you update a set of parameters in an iterative manner to minimize an error function.

The difference lies in how the gradient of the loss function is estimated. In usual GD, you run through ALL the samples in your training set to estimate the gradient and do a single parameter update in a particular iteration. In SGD, on the other hand, you use ONLY ONE training sample (or a small SUBSET of samples) from your training set to estimate the gradient and update the parameters in a particular iteration. If you use a subset, it is called Minibatch Stochastic Gradient Descent. The sketch below contrasts the two.
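A toy contrast between the two on a one-parameter model; the learning rate and step counts are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=1000)
y = 4.0 * X + rng.normal(scale=0.1, size=1000)  # true weight: 4.0

w_gd, w_sgd, lr = 0.0, 0.0, 0.05

# GD: each update uses the gradient over ALL samples
for _ in range(100):
    w_gd -= lr * 2 * ((w_gd * X - y) * X).mean()

# SGD: each update uses ONE randomly chosen sample
for _ in range(100):
    i = rng.integers(len(y))
    w_sgd -= lr * 2 * (w_sgd * X[i] - y[i]) * X[i]

print(w_gd, w_sgd)  # both approach 4.0; the SGD path is noisier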

11
Q

Metrics for assessing regression models

A

Mean Squared Error (MSE)
Root Mean Squared Error (RMSE)
Mean Absolute Error (MAE)
R² or Coefficient of Determination
Adjusted R²
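Computing each listed metric with scikit-learn and numpy; the y_true/y_pred values and the feature count p are made up for illustration:

import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

y_true = np.array([3.0, 5.0, 7.5, 10.0])  # made-up targets
y_pred = np.array([2.8, 5.3, 7.0, 10.4])  # made-up predictions

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)  # same quantity in the target's units
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)

# Adjusted R² penalizes extra features: n = samples, p = features (assumed 2 here)
n, p = len(y_true), 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(mse, rmse, mae, r2, adj_r2)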

12
Q

What are MSE and RMSE?

A

MSE stands for Mean Squared Error and RMSE stands for Root Mean Squared Error. They are metrics for evaluating regression models: MSE = (1/n) * Σ(y_i − ŷ_i)² is the average of the squared differences between actual and predicted values, and RMSE = sqrt(MSE) expresses the same quantity in the units of the target variable.

13
Q

Bias-variance trade-off

A

Bias is the error introduced by approximating the true underlying function, which can be quite complex, by a simpler model. Variance is a model's sensitivity to changes in the training dataset.

The bias-variance trade-off is a relationship between the expected test error and the variance and bias of the model: both contribute to the level of the test error, and ideally both should be as small as possible:

ExpectedTestError = Variance + Bias² + IrreducibleError
But as model complexity increases, the bias decreases and the variance increases, which leads to overfitting. And vice versa: simplifying the model helps to decrease the variance but increases the bias, which leads to underfitting. The toy example below shows both regimes.
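A toy illustration of both regimes, using polynomial degree as the complexity knob (the degrees and noise level are arbitrary choices):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Degree 1 underfits (high bias); degree 15 overfits (high variance)
for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_tr, y_tr)
    print(f"degree {degree}: "
          f"train MSE {mean_squared_error(y_tr, model.predict(X_tr)):.3f}, "
          f"test MSE {mean_squared_error(y_te, model.predict(X_te)):.3f}")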
