L13 - Testing for serial correlation Flashcards

1
Q

How do you derive the Durbin Watson Test statistic for serial correlation?

A
  • The statistic is DW = Σ(et − et-1)² / Σet², where et are the OLS residuals. Expanding the numerator gives DW ≈ 2(1 − ρ̂), where ρ̂ is the 1st-order autocorrelation coefficient of the residuals.
  • Under the null hypothesis that there is no autocorrelation (ρ = 0) we have E(DW) ≈ 2.
  • If there is positive autocorrelation then E(DW) < 2. The critical bounds dL and dU are as given in the tables.
  • If there is negative autocorrelation then E(DW) > 2. The critical bounds are calculated as 4 − dU and 4 − dL.
  • Note that DW is bounded between 0 and 4.
  • The values given in the tables are for a one-tailed test. For a two-sided alternative we must double the significance level, e.g. what appears as 5% in the tables will actually be a 10% significance level.
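As a sketch, the statistic is easy to compute directly from a vector of residuals (the white-noise residuals below are an arbitrary illustration, not data from the lecture):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared first differences of the residuals divided by
    the residual sum of squares; approximately 2 * (1 - rho_hat)."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(0)
e = rng.standard_normal(500)   # stand-in for OLS residuals with no autocorrelation
dw = durbin_watson(e)          # close to 2 when residuals are uncorrelated
```

statsmodels ships the same computation as `statsmodels.stats.stattools.durbin_watson` if you prefer not to roll your own.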
2
Q

When do you accept/reject the null of a Durbin Watson Test statistic?

A
  • e.g. H0: ρ = 0 and H1: ρ > 0

we reject H0 if DW is less than the lower bound dL

we do not reject H0 if DW is greater than the upper bound dU

if DW is between the bounds we cannot make a decision; this is called the region of uncertainty/indeterminacy

–> this inconclusive region is the drawback of the Durbin Watson test
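The one-sided decision rule can be written down directly. This is a sketch: the bounds passed in below (1.46 and 1.63) are placeholder values, and in practice you would look up dL and dU in a DW table for your sample size and number of regressors:

```python
def dw_decision(dw, dL, dU):
    """Decision rule for H0: rho = 0 vs H1: rho > 0 (positive autocorrelation)."""
    if dw < dL:
        return "reject H0 (evidence of positive autocorrelation)"
    if dw > dU:
        return "do not reject H0"
    return "inconclusive (region of indeterminacy)"

# Hypothetical bounds, for illustration only -- check a DW table for your n and k.
verdict = dw_decision(1.20, dL=1.46, dU=1.63)
```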

3
Q

What is Durbin’s h test?

A
  • for both the Durbin Watson and Durbin's h test we are only looking at the 1st-order autocorrelation coefficient
  • Durbin's h is used when the regressors include a lagged dependent variable, a case in which the ordinary Durbin Watson statistic is biased towards 2 and therefore unreliable

In this test ρ̂ = the 1st-order autocorrelation coefficient of the residuals
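A minimal sketch of the computation, assuming the usual form of the statistic, h = ρ̂ · √(n / (1 − n·v)), where v is the estimated variance of the coefficient on the lagged dependent variable and ρ̂ is taken as 1 − DW/2 (the example numbers are hypothetical):

```python
import math

def durbins_h(dw, n, var_lagged_coef):
    """Durbin's h statistic for first-order autocorrelation when the model
    contains a lagged dependent variable.

    dw              : Durbin-Watson statistic from the OLS fit
    n               : sample size
    var_lagged_coef : estimated variance of the coefficient on Y_{t-1}

    Under H0, h is asymptotically N(0, 1). The statistic is undefined
    when n * var_lagged_coef >= 1.
    """
    rho_hat = 1.0 - dw / 2.0
    inside = n / (1.0 - n * var_lagged_coef)
    if inside <= 0:
        raise ValueError("h is undefined: n * Var(coef) >= 1")
    return rho_hat * math.sqrt(inside)

# e.g. DW = 1.80, n = 100, Var(coef on Y_{t-1}) = 0.004 (made-up numbers)
h = durbins_h(1.80, 100, 0.004)
```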

4
Q

What is the Breusch-Godfrey test?

A
  • used when we want to test for autocorrelation of order higher than one (AR(p) errors); it is an LM test based on regressing the OLS residuals on the original regressors plus p lags of the residuals, and unlike Durbin Watson it remains valid when the regressors include a lagged dependent variable
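A numpy-only sketch of the LM version of the test. The auxiliary-regression recipe is the standard one; the function name and the choice to zero-pad the initial lags are my own:

```python
import numpy as np

def breusch_godfrey(resid, X, p):
    """LM form of the Breusch-Godfrey test: regress the OLS residuals on
    the original regressors plus p lags of the residuals. LM = n * R^2 is
    asymptotically chi-squared with p degrees of freedom under H0 of no
    autocorrelation up to order p."""
    resid = np.asarray(resid, dtype=float)
    n = len(resid)
    # Lagged-residual columns, padding the first j observations with zeros.
    lags = np.column_stack([
        np.concatenate([np.zeros(j), resid[:-j]]) for j in range(1, p + 1)
    ])
    Z = np.column_stack([X, lags])
    beta, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    fitted = Z @ beta
    r2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
    return n * r2   # compare with the chi-squared(p) critical value
```

statsmodels provides the same test as `statsmodels.stats.diagnostic.acorr_breusch_godfrey` on a fitted OLS results object.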
5
Q

What is the Box-Ljung test?

A
  • also called the Q statistic:

Q = T(T + 2) Σ(j = 1 to m) ρ̂j² / (T − j)

where ρ̂j is the autocorrelation of the residuals at lag j; under the null of no autocorrelation up to lag m, Q is asymptotically χ² with m degrees of freedom
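A direct sketch of the formula (the function name is my own; statsmodels exposes the same test as `statsmodels.stats.diagnostic.acorr_ljungbox`):

```python
import numpy as np

def ljung_box_q(x, m):
    """Box-Ljung Q over lags 1..m:
        Q = T(T+2) * sum_{j=1}^{m} rho_hat_j^2 / (T - j),
    asymptotically chi-squared(m) under H0 of no autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    T = len(x)
    denom = np.sum(x ** 2)
    q = 0.0
    for j in range(1, m + 1):
        rho_j = np.sum(x[j:] * x[:-j]) / denom   # sample autocorrelation at lag j
        q += rho_j ** 2 / (T - j)
    return T * (T + 2) * q
```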

6
Q

What is another name for tests that test for serial correlation?

A

Tests of this type, in which the serial correlation can be of a very general form, are sometimes referred to as portmanteau tests

7
Q

What is the effect of autocorrelated errors on OLS estimation?

A

Example: OLS with AR(1) error

Yt = βXt + ut

ut = ρut-1 + εt

  • OLS is unbiased because the proof of unbiasedness does not depend on GM2.
  • However, the proof of the Gauss-Markov theorem does depend on GM2.
  • Therefore OLS is no longer BLUE: it may be possible to find more efficient estimators.

In short: autocorrelated errors do not make OLS biased, but because GM2 is not met OLS is no longer BLUE.
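The unbiasedness claim can be checked with a small Monte Carlo sketch. All the numbers here (ρ = 0.8, T = 200, 2000 replications) are arbitrary illustrations:

```python
import numpy as np

# Y_t = beta*X_t + u_t with AR(1) errors u_t = rho*u_{t-1} + eps_t.
# The point: OLS stays unbiased even though it is no longer BLUE.
rng = np.random.default_rng(0)
beta, rho, T, reps = 1.0, 0.8, 200, 2000

x = rng.standard_normal(T)               # regressor held fixed across replications
eps = rng.standard_normal((reps, T))
u = np.empty((reps, T))
u[:, 0] = eps[:, 0]
for t in range(1, T):                    # build the AR(1) error recursively
    u[:, t] = rho * u[:, t - 1] + eps[:, t]

y = beta * x + u                         # one replication per row
estimates = (y @ x) / (x @ x)            # OLS slope (no intercept), per replication

# The average of `estimates` sits very close to the true beta = 1.
```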
8
Q

What is the effect of autocorrelated errors on coefficient standard errors?

A

AR(1) –> first-order autocorrelated error. With autocorrelated errors the usual OLS variance formula is no longer valid, so the conventional standard errors are biased (typically downward when the errors and the regressors are positively autocorrelated) and the resulting t and F statistics are unreliable.
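A simulation sketch of the problem, for the worst case in which the regressor is autocorrelated as well as the error. All parameter values below are arbitrary illustrations:

```python
import numpy as np

# Both the regressor (coefficient phi) and the error (coefficient rho)
# follow AR(1) processes; this is where the textbook SE is most misleading.
rng = np.random.default_rng(0)
rho, phi, T, reps = 0.8, 0.8, 300, 2000

ex = rng.standard_normal((reps, T))
eu = rng.standard_normal((reps, T))
x = np.empty((reps, T))
u = np.empty((reps, T))
x[:, 0], u[:, 0] = ex[:, 0], eu[:, 0]
for t in range(1, T):
    x[:, t] = phi * x[:, t - 1] + ex[:, t]
    u[:, t] = rho * u[:, t - 1] + eu[:, t]

y = x + u                                      # true slope = 1
sxx = np.sum(x * x, axis=1)
b = np.sum(x * y, axis=1) / sxx                # OLS slope per replication
resid = y - b[:, None] * x
s2 = np.sum(resid ** 2, axis=1) / (T - 1)
textbook_se = np.sqrt(s2 / sxx)                # conventional OLS standard error

# The true sampling spread b.std() is roughly twice the average textbook SE,
# so conventional inference badly overstates precision here.
```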

9
Q

Why when Gauss Markov 2 doesnt hold do we get errors in the standard error?

A
  • If we expanded the variance of β̂ we would get cross-product terms like E(ut us) for t ≠ s; if GM2 holds, all these cross products are zero and we are left with the usual variance formula.
  • However, if GM2 does not hold then these cross-product terms are not zero.
  • This is what leads to bias in the OLS standard errors.
10
Q

What happens when the X values are autocorrelated?

A

Xt is autocorrelated with coefficient Φ on last period's value Xt-1, i.e. Xt = ΦXt-1 + vt

  • therefore the cross-product terms in the variance of β̂ no longer average out, and the bias in the conventional standard error is made worse

REVIEW SECOND TO LAST SLIDE
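For reference while reviewing that slide, the standard textbook result for this case is sketched below. The symbols match the cards above (ρ for the error autocorrelation, Φ for the regressor autocorrelation), but verify the exact expression against the slides:

```latex
u_t = \rho\, u_{t-1} + \varepsilon_t, \qquad X_t = \Phi\, X_{t-1} + v_t
\]
\[
\operatorname{Var}\!\left(\hat{\beta}_{OLS}\right)
  \;\approx\; \frac{\sigma_u^2}{\sum_t X_t^2}\cdot\frac{1+\rho\Phi}{1-\rho\Phi}
\]
The conventional formula reports only $\sigma_u^2 / \sum_t X_t^2$, so when
$\rho\Phi > 0$ the reported standard error understates the true sampling
variability by a factor of roughly $\sqrt{(1+\rho\Phi)/(1-\rho\Phi)}$.
```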
