Week 2 Flashcards

(27 cards)

1
Q

What is autocorrelation? Verbal and mathematical

A

Correlation between the error term and its own lags
E(εiεj) = σij ≠ 0 for i ≠ j
E(εε’) = Ω = matrix of all the different σij -> symmetric and positive definite

Ω can also allow for heteroskedasticity (unequal diagonal elements)

Can be seen in a plot of the residuals against their lag: the points are correlated
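A quick way to produce that plot (a minimal sketch with simulated AR(1) errors; all names and numbers are illustrative, not from the course):

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):                      # AR(1) errors: eps(i) = 0.8*eps(i-1) + v(i)
    eps[i] = 0.8 * eps[i - 1] + rng.normal()
y = 1.0 + 2.0 * x + eps

e = sm.OLS(y, sm.add_constant(x)).fit().resid
plt.scatter(e[:-1], e[1:])                 # lagged residual vs residual
plt.xlabel("e(i-1)")
plt.ylabel("e(i)")
plt.show()
```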

2
Q

What are the properties of OLS under autocorrelation?

A
  • Unbiased
  • Consistent
  • Inefficient
  • Incorrect SE
3
Q

What estimators should we use if autocorrelation?

A

The Newey-West estimator of the covariance matrix: keep the OLS coefficients, replace the conventional SEs with Newey-West (HAC) SEs
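In statsmodels this amounts to fitting by OLS and requesting HAC standard errors; a minimal sketch with simulated data (the maxlags choice is an arbitrary assumption):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):                      # AR(1) errors
    eps[i] = 0.7 * eps[i - 1] + rng.normal()
y = 1.0 + 2.0 * x + eps
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                                          # conventional SEs
nw = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})    # Newey-West (HAC) SEs
print(ols.params, nw.params)   # identical coefficients
print(ols.bse, nw.bse)         # different standard errors
```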

4
Q

Where do the weights in the Newey-West estimator come from?

A

From a kernel function that downweights higher-order autocovariances; in the standard Newey-West case this is the Bartlett kernel, w(k) = 1 - k/(L+1) for lags k = 1, ..., L
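A tiny illustration of the Bartlett weights (L = 4 is an arbitrary choice of truncation lag):

```python
# Bartlett kernel weights used by the Newey-West estimator
L = 4                                                # truncation lag / bandwidth (user's choice)
weights = [1 - k / (L + 1) for k in range(1, L + 1)]
print(weights)                                       # [0.8, 0.6, 0.4, 0.2]
```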

5
Q

What are the properties of Newey-West SE?

A

HAC: heteroskedasticity- and autocorrelation-consistent

6
Q

What is the idea of the Cochrane-Orcutt procedure?

A

Assume the errors follow an autoregressive model of order 1: ε(i) = ρε(i-1) + η(i)
Combine the original regression, its lagged version, and the regression of the error term on its lag
Estimate the quasi-differenced model for y(i) - ρy(i-1)
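A minimal sketch of one Cochrane-Orcutt iteration on simulated data (variable names are mine; a full implementation would iterate until the estimate of ρ converges):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):                      # AR(1) errors with rho = 0.6
    eps[i] = 0.6 * eps[i - 1] + rng.normal()
y = 1.0 + 2.0 * x + eps

# 1) OLS on the original model, keep the residuals
e = sm.OLS(y, sm.add_constant(x)).fit().resid
# 2) regress e(i) on e(i-1) to estimate rho
rho = sm.OLS(e[1:], e[:-1]).fit().params[0]
# 3) OLS on the quasi-differenced data (the first observation is dropped)
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]
co = sm.OLS(y_star, sm.add_constant(x_star)).fit()
print(rho, co.params)                      # intercept of the transformed model is beta1*(1 - rho)
```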

7
Q

What are the alternatives to Cochrane-Orcutt procedure?

A

NLS for y(i) = ρy(i-1) + β1(1-ρ) + β2(x(i) - ρx(i-1)) + η(i)

In EViews: add an AR(1) term to the equation
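In statsmodels the closest analogue is, to my knowledge, GLSAR, which alternates between estimating ρ and re-fitting β; a hedged sketch on simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + rng.normal()
y = 1.0 + 2.0 * x + eps

# Feasible GLS with AR(1) errors: alternate between estimating rho and beta
model = sm.GLSAR(y, sm.add_constant(x), rho=1)   # rho=1 -> one autoregressive lag
res = model.iterative_fit(maxiter=10)
print(res.params, model.rho)                     # beta estimates and the estimated rho
```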

8
Q

What is the idea of GLS?

A

Transform data s.t. the conditions for efficient OLS hold

9
Q

What is the Choleski decomposition?

A

PP’ = Ω, with P lower triangular

Transformed data: y* = P^(-1)y ; X* = P^(-1)X ; ε* = P^(-1)ε
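A numeric sketch with numpy (the AR(1) form of Ω below is an assumption made for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho = 100, 0.6
idx = np.arange(n)
Omega = rho ** np.abs(np.subtract.outer(idx, idx))   # assumed AR(1) structure: Omega[i, j] = rho**|i-j|

X = np.column_stack([np.ones(n), rng.normal(size=n)])
P = np.linalg.cholesky(Omega)                # PP' = Omega, P lower triangular
y = X @ np.array([1.0, 2.0]) + P @ rng.normal(size=n)

y_star = np.linalg.solve(P, y)               # y* = P^(-1) y
X_star = np.linalg.solve(P, X)               # X* = P^(-1) X
b_gls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)   # OLS on the transformed data = GLS
print(b_gls)
```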

10
Q

Properties of GLS disturbances

A

Homoskedastic and no autocorrelation

=> OLS on the transformed model is an efficient estimator for β

11
Q

Compare GLS to Cochrane-Orcutt

A
  • In GLS the 1st observation is included
  • In GLS there is a scaling factor 1/σ(n)

12
Q

GLS estimator + expected value and variance

A
b = (X'Ω^(-1)X)^(-1)X'Ω^(-1)y
E(b) = β
Var(b) = (X'Ω^(-1)X)^(-1)
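A direct numeric check of these formulas (the AR(1) form of Ω is an assumed example; sm.GLS is statsmodels' built-in GLS):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, rho = 100, 0.6
idx = np.arange(n)
Omega = rho ** np.abs(np.subtract.outer(idx, idx))    # assumed AR(1) Omega
X = sm.add_constant(rng.normal(size=n))
y = X @ np.array([1.0, 2.0]) + np.linalg.cholesky(Omega) @ rng.normal(size=n)

Oinv = np.linalg.inv(Omega)
b = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)   # (X'Ω^-1 X)^-1 X'Ω^-1 y
V = np.linalg.inv(X.T @ Oinv @ X)                     # Var(b)

print(b)
print(sm.GLS(y, X, sigma=Omega).fit().params)         # same coefficients via statsmodels
```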
13
Q

Why do we need feasible GLS?

A

In practice Ω is often unknown and has to be estimated

14
Q

What are the steps of FGLS?

A

1) Estimate the covariance matrix
a) Apply OLS in y = Xβ + ε -> b consistent
b) Estimate Ω using the residuals e = y - Xb (based on ee’, under an assumed structure for the autocorrelation) -> Ω̂
2) Apply OLS on the transformed data
a) Use Ω̂ to determine P̂
b) Transform the data with P̂^(-1): y* = P̂^(-1)y and X* = P̂^(-1)X
c) Estimate β with OLS in the model for the transformed data: y* = X*β + ε* -> b(FGLS)
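A minimal FGLS sketch under an assumed AR(1) structure for the errors (illustrative; the course's estimator of Ω may differ):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + rng.normal()
y = 1.0 + 2.0 * x + eps
X = sm.add_constant(x)

# 1) OLS, then estimate Omega from the residuals (here via an AR(1) assumption)
e = sm.OLS(y, X).fit().resid
rho_hat = sm.OLS(e[1:], e[:-1]).fit().params[0]
idx = np.arange(n)
Omega_hat = rho_hat ** np.abs(np.subtract.outer(idx, idx))

# 2) transform with the Cholesky factor of Omega_hat and apply OLS
P_hat = np.linalg.cholesky(Omega_hat)
y_star = np.linalg.solve(P_hat, y)
X_star = np.linalg.solve(P_hat, X)
b_fgls = sm.OLS(y_star, X_star).fit().params
print(b_fgls)
```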

15
Q

What is the null hypothesis of the autocorrelation tests?

A

No autocorrelation

16
Q

What is the equation for the autocorrelation of residuals?

A

r(k) = Σ(i=k+1->n) e(i)e(i-k) / Σ(i=1->n) e(i)^2
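The same quantity in a few lines of numpy (a sketch; e is any residual vector):

```python
import numpy as np

def resid_autocorr(e, k):
    """r(k) = sum_{i=k+1..n} e(i) e(i-k) / sum_{i=1..n} e(i)^2"""
    e = np.asarray(e)
    return np.sum(e[k:] * e[:-k]) / np.sum(e ** 2)

# Example: first-order residual autocorrelation of an arbitrary residual series
print(resid_autocorr([0.5, -0.2, 0.4, 0.1, -0.3], 1))
```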

17
Q

Durbin-Watson test

A

DW = Σ(i=2->n) (e(i) - e(i-1))^2 / Σ(i=1->n) e(i)^2 ≈ 2(1 - r(1))
DW ≈ 0 when r(1) = 1 (perfect positive correlation)
DW ≈ 4 when r(1) = -1 (perfect negative correlation)
Under H0 (no autocorrelation) the value should be around 2
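statsmodels ships the statistic directly; a small sketch comparing it with the 2(1 - r(1)) approximation on white-noise "residuals":

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(7)
e = rng.normal(size=100)                                     # residuals with no autocorrelation
dw = durbin_watson(e)
approx = 2 * (1 - np.sum(e[1:] * e[:-1]) / np.sum(e ** 2))   # 2(1 - r(1))
print(dw, approx)                                            # both close to 2 under H0
```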

18
Q

What are the disadvantages of the Durbin-Watson test?

A
  • Distribution under H0 depends on the properties of regressors
  • Not applicable when lagged dependent variables are included as regressors
19
Q

Box-Pierce Test

A

H0: No autocorrelation

BP = nΣ(k=1 -> p) r(k)^2 ≈ χ2(p)

20
Q

Ljung-Box Test

A

LB = n(n+2) Σ(k=1 -> p) r(k)^2/(n-k) ≈ χ2(p)
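Both the Ljung-Box and Box-Pierce statistics are available from statsmodels' acorr_ljungbox (the output format varies across versions, so treat this as a sketch):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(8)
e = rng.normal(size=200)                              # residuals with no autocorrelation
print(acorr_ljungbox(e, lags=[5], boxpierce=True))    # LB and BP statistics with p-values
```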

21
Q

What type of test is a Breusch-Godfrey test?

A

Lagrange Multiplier (LM) test

22
Q

Which is the procedure for the Breusch-Godfrey test?

A

1) OLS on y(i) = x(i)’β + ε(i), keep the residuals e(i)
2) Run the auxiliary regression of e(i) on x(i) and the lagged residuals e(i-1), ..., e(i-p)
3) Under H0 (no autocorrelation): nR^2 of the auxiliary regression ≈ χ2(p)
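In statsmodels the whole procedure is wrapped in acorr_breusch_godfrey, which takes a fitted OLS result and the number of residual lags p (sketch with simulated data):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(9)
n = 200
x = rng.normal(size=n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + rng.normal()
y = 1.0 + 2.0 * x + eps

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=2)
print(lm_stat, lm_pval)                     # LM statistic = n*R^2 from the auxiliary regression
```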

23
Q

What is the main difference between the Box-Pierce and Ljung-Box test?

A

The Ljung-Box statistic adds the finite-sample correction factor (n+2)/(n-k) to the Box-Pierce statistic, giving it better small-sample properties; asymptotically the two are equivalent

24
Q

What happens to the significance of the parameter of the independent variable with NW SE?

A

The significance may change

25
Q

What happens to the marginal effect of the independent variables with NW SE?

A

It doesn't change: the coefficient estimates are the same, only the standard errors are recomputed
26
Q

Do NW SE automatically correct for possible heteroskedasticity?

A

Yes
27
Q

Are NW SE harmless, i.e. should they always be used?

A

No. They remain valid even when Ω = σ^2*I, but estimating the extra covariance terms adds uncertainty, so they are less efficient than the conventional SEs when no correction is needed