3- Classical linear regression Flashcards

1
Q

What does the error term (u) capture in a regression?

A

All the influences on the dependent variable that are not captured by the explanatory variables

2
Q

What are the betas in a regression?

A

The coefficients (slopes) associated with each of the explanatory variables; they are the theoretical parameters to be estimated

3
Q

What is always the first independent variable in a regression?

A

A constant, whose coefficient β₀ represents the y-intercept, i.e. the value of the dependent variable when all explanatory variables are zero

4
Q

What is the population regression line?

A

The linear combination of the regressors that describes the relationship holding between x and y on average (i.e. without the error term): E(y|x) = β₀ + β₁x

5
Q

Once x and y values are collected, how is the line of best fit determined?

A

By choosing the betas that minimise the sum of the squared residuals (ordinary least squares)
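A minimal NumPy sketch of this least-squares fit (made-up data; the variable names are illustrative, not from the deck):

  import numpy as np

  rng = np.random.default_rng(0)
  n = 100
  x = rng.normal(size=n)
  u = rng.normal(size=n)                    # error term
  y = 2.0 + 0.5 * x + u                     # true betas: β₀ = 2, β₁ = 0.5
  X = np.column_stack([np.ones(n), x])      # first column is the constant
  beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # minimises Σû² via the normal equations
  print(beta_hat)                           # close to [2, 0.5]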

6
Q

What does the error term for each observation represent when plotted?

A

The vertical distance between the observed value of the dependent variable and the population regression line

7
Q

What do the independent variable subscripts denote (xᵢₖ)?

A

The first subscript (i) indexes the observation and the second subscript (k) indexes the explanatory variable, so xᵢₖ is the k-th regressor for the i-th observation

8
Q

What are the 6 assumptions of the linear regression model?

A

-Linearity
-Identification condition
-Exogeneity
-Homoskedasticity
-Normality
-X can be fixed or random

9
Q

Explain the linearity assumption

A

Linearity in parameters: each beta enters the model with power one and not inside any nonlinear function, so the model is linear in the betas (the x variables themselves may be transformed)

10
Q

Explain the identification condition

A

The number of observations must be at least as great as the number of parameters (n ≥ k), so that the betas can be identified

11
Q

Explain the exogeneity assumption

A

The expected value of each error conditional on X is zero, E(uᵢ|X) = 0, meaning that no observation of the regressors conveys any information about u

12
Q

Explain the homoskedasticity assumption

A

Variance of the error term is constant across observations

13
Q

Explain the normality assumption

A

Error terms are normally distributed

14
Q

What are the 3 main properties of the OLS estimator?

A

-Unbiased: E(β̂ₒₗₛ) = β
-Has a known variance-covariance matrix, var(β̂ₒₗₛ) = σ²(X'X)⁻¹
-“Best”, in that it has minimum variance among linear unbiased estimators

15
Q

What does it mean that the OLS estimator is unbiased?

A

The expected value of the estimated betas equals their true value: E(β̂ₒₗₛ) = β

16
Q

What are the 2 steps to show the OLS estimator is unbiased?

A
  1. Sub in y to the OLS estimator expression and expand
  2. Take expectations, and show only β remains
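A sketch of those two steps in the deck's notation, using y = Xβ + u and the exogeneity assumption E(u|X) = 0:

β̂ₒₗₛ = (X'X)⁻¹X'y = (X'X)⁻¹X'(Xβ + u) = β + (X'X)⁻¹X'u
E(β̂ₒₗₛ|X) = β + (X'X)⁻¹X'E(u|X) = β
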
17
Q

What is the variance-covariance matrix of the OLS estimator, var(β̂ₒₗₛ)?

A

var(β̂ₒₗₛ) = σ²(X'X)⁻¹

18
Q

When the errors are homoskedastic, what is E(uu')?

A

E(uu') = σ²I

19
Q

How can you prove that the variance of the OLS estimator is var(β̂ₒₗₛ) = σ²(X'X)⁻¹?

A
  1. Use the equation var(β̂ₒₗₛ) = E[(β̂ₒₗₛ − E(β̂ₒₗₛ))(β̂ₒₗₛ − E(β̂ₒₗₛ))']
  2. Substitute in the OLS estimator and its expectation, expand, and use E(uu') = σ²I
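A sketch of that expansion, using β̂ₒₗₛ − β = (X'X)⁻¹X'u from the unbiasedness proof and E(uu') = σ²I:

var(β̂ₒₗₛ) = E[(X'X)⁻¹X'uu'X(X'X)⁻¹] = (X'X)⁻¹X'E(uu')X(X'X)⁻¹ = σ²(X'X)⁻¹X'X(X'X)⁻¹ = σ²(X'X)⁻¹
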
20
Q

What is the OLS estimate of variance?

A

σ̂² = û'û/(n − k)
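A NumPy sketch of these quantities, assuming the X, y and beta_hat from the earlier least-squares sketch:

  u_hat = y - X @ beta_hat                             # residuals û
  n, k = X.shape
  sigma2_hat = (u_hat @ u_hat) / (n - k)               # σ̂² = û'û / (n − k)
  var_beta_hat = sigma2_hat * np.linalg.inv(X.T @ X)   # σ̂²(X'X)⁻¹
  std_errors = np.sqrt(np.diag(var_beta_hat))          # standard errors of the betas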

21
Q

What is the Gauss-Markov theorem?

A

Among all linear unbiased estimators, the OLS estimator is the “best” in the sense that it has the minimum variance (it is BLUE)

22
Q

What are the 3 conditions for the Gauss-Markov theorem?

A
  1. y=Xβ+u
  2. var(u) = σ²I
  3. X is full rank
23
Q

What is the identity P?

A

X(X’X)⁻¹X’

24
Q

What is the identity M?

A

M = I − X(X'X)⁻¹X' = I − P

25

Q

What are the 3 main properties of M?

A

-Square
-Symmetric
-Idempotent

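A quick NumPy check of these properties on made-up data (P and M as defined in the previous two cards):

  import numpy as np

  rng = np.random.default_rng(1)
  X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
  P = X @ np.linalg.inv(X.T @ X) @ X.T    # P = X(X'X)⁻¹X'
  M = np.eye(50) - P                      # M = I − P (square by construction)
  print(np.allclose(M, M.T))              # symmetric
  print(np.allclose(M @ M, M))            # idempotent
  print(np.allclose(M @ X, 0))            # MX = 0, so û = My = Mu
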
26

Q

How can you show X is uncorrelated with u?

A

Show that E[(X'X)⁻¹X'u] = 0, which follows from the exogeneity assumption E(u|X) = 0

27

Q

What is the variance of a given error term, var(uᵢ)?

A

var(uᵢ) = E(uᵢ²) = σ²

28

Q

If u has a multivariate normal distribution with zero mean, what is the distribution of uᵢ?

A

A univariate normal distribution with mean zero

29

Q

What happens to the estimated coefficients (β̂) when X is scaled up or down?

A

They are scaled in the opposite direction to compensate (e.g. multiplying a regressor by c divides its coefficient by c), so the fitted values are unchanged

30

Q

What does the beta variance-covariance matrix look like?

A

The variances of the betas on the principal diagonal and the covariances between betas in the off-diagonal elements

31

Q

What is a trick to find n?

A

When the first regressor is the constant (a column of ones), n is the first element of X'X

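A small worked illustration for the simple model, where the first column of X is a column of ones and the second is x:

X'X = [ n     Σxᵢ  ]
      [ Σxᵢ   Σxᵢ² ]

so the first (top-left) element is the sum of the ones, i.e. n.
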
32

Q

What expression always equals the error term estimate û?

A

û = (I − X(X'X)⁻¹X')u = Mu

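A sketch of why this holds, using P = X(X'X)⁻¹X', M = I − P and MX = 0:

û = y − Xβ̂ₒₗₛ = y − Py = My = M(Xβ + u) = MXβ + Mu = Mu
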
33

Q

What is tr(E(uu'))?

A

nσ², since E(uu') = σ²Iₙ and tr(Iₙ) = n

34

Q

How can var(Xβ + u) be simplified?

A

To var(u), since Xβ is non-stochastic (for fixed X) and so contributes nothing to the variance

35

Q

How do the betas change with operations on y?

A

Multiplying y by a constant multiplies all the betas by that constant; adding a constant to y changes only the intercept (by that constant)

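A short NumPy sketch illustrating this card and the earlier scaling card (made-up data; names are illustrative):

  import numpy as np

  rng = np.random.default_rng(2)
  x = rng.normal(size=200)
  y = 1.0 + 3.0 * x + rng.normal(size=200)
  X = np.column_stack([np.ones(200), x])

  def ols(X, y):
      return np.linalg.solve(X.T @ X, X.T @ y)

  b = ols(X, y)
  b_scaled_x = ols(np.column_stack([np.ones(200), 10 * x]), y)  # slope divided by 10
  b_scaled_y = ols(X, 5 * y)                                    # all betas multiplied by 5
  b_shift_y = ols(X, y + 7)                                     # only the intercept rises by 7
  print(b, b_scaled_x, b_scaled_y, b_shift_y)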