5- Heteroskedasticity Flashcards

1
Q

What is the result of OLS under heteroskedasticity?

A

The OLS estimator is still unbiased (E(^β) = β), but OLS is no longer efficient (no longer BLUE)

2
Q

What are the 4 main consequences of Heteroskedasticity?

A

1. t-statistics computed with the usual standard errors are not valid
2. Regression estimates can't be used for confidence intervals or inference
3. t and F statistics are no longer reliable for hypothesis testing
4. The null hypothesis is rejected too often

3
Q

What is the informal way of detecting Heteroskedasticity?

A

Plot the residuals from the regression against the fitted values of the dependent variable and check whether the spread of the residuals appears to depend on those values

4
Q

What is the formal way of detecting Heteroskedasticity?

A

Regressing the squared residuals (û²) on the fitted values (ŷ) or on the explanatory variables

5
Q

How does the White test differ from the Breusch-Pagan test?

A

In the White test, the auxiliary residual regression also includes the squares and cross-products of all pairs of independent variables

6
Q

What are the 5 steps of the Breusch-Pagan test?

A

1. Estimate the model by OLS and obtain the residuals ûᵢ
2. Regress the squared residuals ûᵢ² on all independent variables
3. Formulate the null hypothesis that all slope coefficients = 0 (homoskedasticity)
4. Compute LM = nR² from the auxiliary regression
5. Compare LM with the χ² critical value; if LM > χ², reject the null
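The five steps can be sketched directly in numpy; the synthetic data and the function name below are illustrative, not from the cards:

```python
import numpy as np

def breusch_pagan_lm(y, X):
    """LM = n * R^2 from regressing squared OLS residuals on X (X includes a constant)."""
    n = X.shape[0]
    # Step 1: OLS on the original model, keep squared residuals
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u2 = (y - X @ beta) ** 2
    # Step 2: auxiliary regression of u^2 on all independent variables
    g = np.linalg.lstsq(X, u2, rcond=None)[0]
    fitted = X @ g
    # R^2 of the auxiliary regression
    r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    # Step 4: the LM statistic; compare to a chi-squared critical value
    return n * r2

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y_hom = 1.0 + 2.0 * x + rng.normal(size=n)              # homoskedastic errors
y_het = 1.0 + 2.0 * x + np.exp(x / 2) * rng.normal(size=n)  # variance depends on x

lm_hom = breusch_pagan_lm(y_hom, X)
lm_het = breusch_pagan_lm(y_het, X)
```

Under the null, LM is approximately χ² with degrees of freedom equal to the number of slope regressors, so the heteroskedastic sample should produce a much larger statistic.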

7
Q

What is the point of Generalized Least Squares (GLS)?

A

Transform the observation matrix [y X] so that the error variance in the transformed model is I

8
Q

How can you transform a regression function to homoskedastic when Ω is known?

A

Divide all the variables by σi, because you’re dividing each observation by something proportional to the error standard deviation for the observation
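A minimal numpy sketch of this transformation on synthetic data (the data-generating process below is illustrative; σᵢ is treated as known):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1, 5, size=n)
sigma = 0.2 + 0.3 * x                      # known error s.d. for each observation
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
# Divide every variable, including the constant column, by sigma_i
y_star = y / sigma
X_star = X / sigma[:, None]

# OLS on the transformed (homoskedastic) model gives the GLS estimate
beta_gls = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
```

After the division, each transformed error uᵢ/σᵢ has unit variance, so plain OLS on the starred variables is efficient.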

9
Q

What is the P matrix?

A

Matrix with 1/σᵢ along the principal diagonal and zeros elsewhere, used to transform the variables, i.e. Py = y*

10
Q

What is the Cholesky root?

A

The matrix P such that Ω⁻¹ = P’P
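For the diagonal case on the previous cards this identity is easy to verify numerically (the σᵢ values below are made up):

```python
import numpy as np

sigma = np.array([0.5, 1.0, 2.0])      # hypothetical error standard deviations
Omega = np.diag(sigma ** 2)            # Var(u): sigma_i^2 on the diagonal
P = np.diag(1.0 / sigma)               # transformation matrix from the P-matrix card

# P'P should reproduce the inverse of Omega
ok = np.allclose(P.T @ P, np.linalg.inv(Omega))
```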

11
Q

How do you find the inverse of a diagonal nxn matrix?

A

Just take the reciprocal of each principal diagonal element

12
Q

What is the transpose of a diagonal nxn matrix?

A

Itself i.e. P=P’

13
Q

In the transformed model y*, what is the expected value of u*?

A

E(u*) = PE(u) = 0

14
Q

In the transformed model y*, what is the variance of u*?

A

Var(u*) = Var(Pu) = PΩP’ = I

15
Q

How do you find ^βGLS?

A

Same process as OLS but with the starred variables, then substitute in the P-transformed values, i.e. y* = Py, giving ^βGLS = (X’Ω⁻¹X)⁻¹X’Ω⁻¹y
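A quick numpy check on synthetic data (illustrative data-generating process) that the closed form and the starred-variable OLS agree:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(1, 4, size=n)
sigma = 0.2 + 0.4 * x                  # known error s.d. per observation
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + sigma * rng.normal(size=n)

# Closed form: beta_GLS = (X' Omega^-1 X)^-1 X' Omega^-1 y
Omega_inv = np.diag(1.0 / sigma ** 2)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

# Same estimator via OLS on the starred variables y* = Py, X* = PX
P = np.diag(1.0 / sigma)
beta_star = np.linalg.lstsq(P @ X, P @ y, rcond=None)[0]
agree = np.allclose(beta_gls, beta_star)
```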

16
Q

How can you prove the ^βGLS of the transformed model is unbiased?

A

E(^βGLS) = β
Take the expectation of the function, sub in for y and expand out

17
Q

How do you find the variance of ^βGLS?

A

var(^βGLS)=(X’Ω⁻¹X)⁻¹
Sub in for y and expand out

18
Q

How can you show ^βOLS is less efficient than ^βGLS?

A

Derive both their variances and show that the difference Var(^βOLS) − Var(^βGLS) is positive semi-definite

19
Q

When is GLS infeasible?

A

When Ω is unknown: it has n(n+1)/2 distinct elements but only n observations, so it is impossible to estimate

20
Q

What is the Feasible GLS technique and how does it work?

A

When Ω is unknown, we can construct an estimated version ^Ω from the OLS residuals and run GLS with ^Ω in place of Ω

21
Q

What are the 3 steps of Feasible GLS?

A

1. Estimate the model by OLS to obtain the residuals ûᵢ
2. Construct 2 groups of variance estimates
3. Proceed with the GLS procedure using ^Ω
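The three steps can be sketched with two illustrative groups of synthetic data (e.g. two countries with different error variances; all numbers below are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
group = np.repeat([0, 1], n // 2)                 # two groups, e.g. two countries
sigma_true = np.where(group == 0, 0.5, 2.0)       # unknown to the researcher
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + sigma_true * rng.normal(size=n)

# Step 1: OLS residuals
u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
# Step 2: one variance estimate per group
s2_0 = np.mean(u[group == 0] ** 2)
s2_1 = np.mean(u[group == 1] ** 2)
s_hat = np.sqrt(np.where(group == 0, s2_0, s2_1))
# Step 3: GLS with the estimated Omega — divide by the estimated s.d. and run OLS
beta_fgls = np.linalg.lstsq(X / s_hat[:, None], y / s_hat, rcond=None)[0]
```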

22
Q

How is Feasible GLS un/biased?

A

Feasible GLS is biased in finite samples, but it is consistent: as n grows large it converges to the true value

23
Q

What are the 4 steps of GLS when you can’t split variance groups?

A

1. Get the OLS residuals ûᵢ from the original regression
2. Run an auxiliary regression on the squared residuals to get an estimate of γ
3. Use this to estimate ^V = V(γ)
4. Apply FGLS using ^V instead of Ω

24
Q

What is gamma (γ) in the context of GLS?

A

The coefficient on the known variable in the assumed variance function, estimated from the auxiliary regression of the squared residuals

25
Q

What are the 2 Least squares methods when heteroskedasticity is suspected but the variance matrix is unknown?

A

-Feasible GLS if variance matrix structure is known
-OLS using White standard errors if variance matrix structure is not known

26
Q

In GLS, what do you use instead of Ω if the variance has a scale coefficient, e.g. var(u) = σ²V?

A

Use V in place of Ω (the scalar σ² cancels out of the GLS estimator)

27
Q

What is the definition of a positive definite matrix?

A

A matrix A is positive definite if b’Ab > 0 for every non-zero vector b (if only b’Ab ≥ 0, it is positive semi-definite)

28
Q

What is the result if you sum the variances of a homoscedastic error term from 1 to N?

A

Nσ²
Because variance for the error term of each observation is the same

29
Q

What is the formula for the variance of the error terms for a subset of observations (e.g. a certain country)?

A

The variance of the sum of the error terms, divided by the number of observations in the subset (N)

30
Q

What does homoscedasticity imply for the variance coefficient?

A

Ω must be equal to I, i.e. Var(u) = σ²I

31
Q

How can you show a model is unbiased?

A

When the expected value of beta is equal to the true value of beta

32
Q

What are the 3 characteristics of the GLS estimator?

A

Efficient, consistent, and unbiased, regardless of whether the data are heteroskedastic or autocorrelated

33
Q

Describe the 3 steps of GLS when the variance matrix is known

A
1. Divide each y and x observation by σᵢ
2. Run OLS on the transformed data
3. Calculate the variance matrix using the transformed variables

34
Q

What 2 things does a normal distribution of the error terms, u ~ N(0, σ²I), tell us?

A

-Error terms are homoskedastic
-Error terms are independent

35
Q

How can you use the rule x’Ax ~ χ²(rank(A)) to prove that Σxᵢ² ~ χ²(n) when x ~ N(0, I)?

A

Set A = I, so that x’Ix = Σxᵢ² and rank(A) = n, and note that normal random variables are independent if they are uncorrelated
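Written out, the argument on this card is:

```latex
x \sim N(0, I_n) \;\Rightarrow\; x'Ax \sim \chi^2(\operatorname{rank}(A)).
\text{With } A = I_n:\quad
x'I_n x = x'x = \sum_{i=1}^{n} x_i^2 \sim \chi^2(\operatorname{rank}(I_n)) = \chi^2(n),
```

where the components xᵢ are uncorrelated (hence, being jointly normal, independent) standard normals.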