Testing for Serial Correlation Flashcards
What is serial correlation?
Serial correlation (or autocorrelation) occurs when the error terms in a regression are correlated over time:
Cov(u_t, u_t-1) ≠ 0, meaning the errors in different periods are correlated.
Which OLS assumption does serial correlation violate?
This violates the classical OLS assumption of no autocorrelation, i.e. that the error terms are uncorrelated across observations. As a result, OLS remains unbiased, but the standard errors are incorrect, so hypothesis tests (t- and F-tests) are no longer valid.
OLS is no longer BLUE (it loses efficiency).
Procedure for testing Serial Correlation, H0, H1
1) Estimate the original model and obtain the residuals (û).
2) Regress the residuals (û) on the original regressors and the lagged residuals (û_t-1, …, û_t-p). This is the auxiliary regression.
3) Test the joint significance of the lagged-residual coefficients using the BG test (LM version) or its F-test version.
H0: a_1 = a_2 = … = a_p = 0 (no serial correlation)
H1: at least one a_j ≠ 0 (serial correlation present)
Auxiliary regression: û_t = d_0 + d_1·x_1t + … + d_k·x_kt + a_1·û_t-1 + … + a_p·û_t-p + v_t
The number of lags (p) should be chosen based on the suspected order of autocorrelation (a Python sketch of the full procedure follows below).
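A minimal sketch of this procedure, assuming statsmodels is installed and a hypothetical pandas DataFrame df with dependent variable y and regressors x1, x2 (names are illustrative only):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

p = 2  # suspected order of autocorrelation (number of lags to test)

# 1) Original model: obtain the residuals u_hat
X = sm.add_constant(df[["x1", "x2"]])
orig = sm.OLS(df["y"], X).fit()
u_hat = orig.resid

# 2) Auxiliary regression: residuals on original regressors and lagged residuals
aux_df = pd.DataFrame({"u": u_hat})
for j in range(1, p + 1):
    aux_df[f"u_lag{j}"] = u_hat.shift(j)
aux_df = pd.concat([aux_df, df[["x1", "x2"]]], axis=1).dropna()

Z = sm.add_constant(aux_df.drop(columns="u"))
aux = sm.OLS(aux_df["u"], Z).fit()

# 3) LM statistic: n * R^2(aux), compared against chi-squared with p d.o.f.
n = len(aux_df)
lm = n * aux.rsquared
p_value = stats.chi2.sf(lm, df=p)
print(f"LM = {lm:.3f}, p-value = {p_value:.3f}")
```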
What are the two common tests for serial correlation called? Give the test statistic for each.
Breusch-Godfrey (BG) and Durbin-Watson (DW)
BG test (LM version):
LM = n·R²(aux) ~ χ²(p)
BG test (F-test version):
F = [(R²(aux) - R²(orig)) / p] / [(1 - R²(aux)) / (n - k - p - 1)]
Durbin-Watson:
DW = Σ(û_t - û_t-1)² / Σ û_t², roughly 2(1 - ρ̂); values near 2 indicate no first-order serial correlation.
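A short sketch of both tests using statsmodels' built-in functions, assuming orig is the fitted OLS results object from the original model (as in the sketch above):

```python
from statsmodels.stats.diagnostic import acorr_breusch_godfrey
from statsmodels.stats.stattools import durbin_watson

# Breusch-Godfrey: returns (LM stat, LM p-value, F stat, F p-value)
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(orig, nlags=2)

# Durbin-Watson: values near 2 suggest no first-order serial correlation,
# values well below 2 suggest positive serial correlation
dw = durbin_watson(orig.resid)

print(lm_stat, lm_pval, f_stat, f_pval, dw)
```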
Issues and considerations when testing for serial correlation
Serial correlation is common in time series data, especially if variables are persistent.
Serial correlation (typically positive in economic time series) causes OLS standard errors to be too small, so t-statistics are too large, creating false positives of significance.
If serial correlation is detected, use Newey-West (HAC) robust standard errors.
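A sketch of the Newey-West correction in statsmodels via cov_type="HAC", again assuming the hypothetical df with columns y, x1, x2; the maxlags value here is an assumption (a common rule of thumb is roughly n^(1/4)):

```python
import statsmodels.api as sm

X = sm.add_constant(df[["x1", "x2"]])
# HAC = heteroskedasticity- and autocorrelation-consistent (Newey-West) SEs
nw = sm.OLS(df["y"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(nw.summary())  # coefficients unchanged, but SEs and t-stats are corrected
```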