2. Time Series Econometrics Flashcards

1
Q

What is a time series process?

A

A set of temporally ordered observations on a variable y taken at equally spaced discrete intervals in time

2
Q

What has to be true for a stochastic process, y, to be covariance stationary?

A

Each y_t has the same mean and variance, and the covariance between y_t and y_{t-s} depends only on the separation s, not on t

3
Q

What does strict stationarity require?

A

The joint PDF of y_{t-s}, y_{t-s+1}, …, y_t is identical to that of y_{t-s+k}, y_{t-s+k+1}, …, y_{t+k} for any shift k

4
Q

What is a disadvantage of the autocovariance function?

A

It isn’t scale invariant

5
Q

What is the Autoregressive Moving Average (ARMA) process?

A

A process that is a linear function of its own lags and of the current and lagged values of the white-noise error ε_t

6
Q

What are some properties of ε_t?

A

- E(ε_t) = 0
- V(ε_t) = σ²
- C(ε_t, ε_{t-s}) = 0 for s ≠ 0
Hence ε_t is itself a covariance stationary process, called white noise
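A minimal numpy sketch (illustrative, not from the course material) that simulates white noise and checks these three properties empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
eps = rng.normal(loc=0.0, scale=sigma, size=100_000)  # white-noise draws

print(eps.mean())                      # approximately E(eps_t) = 0
print(eps.var())                       # approximately V(eps_t) = sigma^2 = 2.25
print(np.corrcoef(eps[:-1], eps[1:]))  # off-diagonal approx 0: C(eps_t, eps_{t-1}) = 0
```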

7
Q

What do the mean, variance and autocovariance of the MA(q) process depend on?

A

They don’t depend on t, so the process is stationary for any value of θ

8
Q

When is the AR(p) process stationary?

A

When |φ| < 1 (in the AR(1) case), since the variances and autocovariances then don’t depend on t; more generally, when all roots of the AR lag polynomial lie outside the unit circle

9
Q

What happens to the non-zero autocorrelations in the AR(1) process as the lag s increases?

A

The autocorrelations ρ_s = φ^s decay geometrically to zero because |φ| < 1

10
Q

What does the lag operator do?

A

L x_t = x_{t-1}
It shifts the series back one period

11
Q

How can we use the lag operator?

A

As a simple way of moving between MA and AR representations of the ARMA process

12
Q

Which ARMA(p,q) processes can be written as MA(infinity) or AR(infinity)?

A

Any stationary and invertible ARMA(p,q)
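As a numerical illustration of the MA(infinity) representation (a sketch with an assumed φ = 0.7 and truncation point J, both chosen only for demonstration), the AR(1) process (1 - φL) y_t = ε_t is reproduced by the truncated sum y_t ≈ Σ_{j=0}^{J} φ^j ε_{t-j}:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, T, J = 0.7, 500, 50
eps = rng.normal(size=T)

# AR(1) recursion (1 - phi*L) y_t = eps_t, started at y_0 = 0
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

# Truncated MA(infinity) representation: y_t ~= sum_{j=0}^{J} phi^j * eps_{t-j}
y_ma = np.array([sum(phi**j * eps[t - j] for j in range(min(J, t) + 1))
                 for t in range(T)])

# The two agree once the omitted tail (order phi^(J+1)) is negligible
print(np.max(np.abs(y[J:] - y_ma[J:])))
```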

13
Q

What does white noise refer to?

A

A process whose autocorrelations are zero at all lags

14
Q

If an ARMA(2,1) process has identical autocorrelations to the AR(1) process, what can we say about the ARMA(2,1) process?

A

It is overparameterised

15
Q

Equation for random walk

A

y_t = y_{t-1} + ε_t
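A minimal numpy sketch (illustrative) that simulates this random walk as the cumulative sum of white-noise shocks:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1_000
eps = rng.normal(size=T)   # white-noise shocks eps_t
y = np.cumsum(eps)         # y_t = y_{t-1} + eps_t, with y_0 = 0
print(y[:5])
```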

16
Q

What is f(y_t | Y_{t-1})?

A

The density of yt given that we know everything up to time t-1

17
Q

What is ln f(Y_T) when y_1 isn’t fixed?

A

The prediction error decomposition form of the log likelihood function

18
Q

What is ln f(Y_T) when we fix y_1?

A

The conditional log likelihood function

19
Q

What is the CSS?

A

The conditional sum of squares. It squares then sums the deviation of each y_t from its conditional mean φ y_{t-1}
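A small sketch of computing the CSS for an AR(1) and minimising it over a grid of φ values (illustrative only; the grid search is just for demonstration):

```python
import numpy as np

def css(y, phi):
    """Conditional sum of squares for an AR(1): sum of (y_t - phi*y_{t-1})^2 over t = 2..T."""
    resid = y[1:] - phi * y[:-1]
    return np.sum(resid ** 2)

# Simulate an AR(1) with phi = 0.6 and recover it by grid search over the CSS
rng = np.random.default_rng(0)
T, phi_true = 2_000, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal()

grid = np.linspace(-0.99, 0.99, 199)
phi_hat = grid[np.argmin([css(y, p) for p in grid])]
print(phi_hat)  # close to 0.6
```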

20
Q

How can we make finding the MLE easier?

A

We can work with the simpler conditional log likelihood function, obtained by treating y_1 as fixed

21
Q

What trick do we use to find the MLE of an MA(1) process?

A

We invert it to an AR(infinity) representation, since the errors in the MA(1) are unobserved

22
Q

What can we say about the order of an underlying stochastic process if ρ_s (the autocorrelation function) is truncated after x lags?

A

It is an MA(q) process where q=x

23
Q

What can we say about the order of an underlying stochastic process if φ_ss (the partial autocorrelation function) is truncated after x lags?

A

It is an AR(p) process where p=x

24
Q

What can we say about the order of an underlying stochastic process if ρ_s and φ_ss both die out slowly?

A

It is an ARMA process, but we don’t know p and q; we should initially start with ARMA(1,1) and go from there
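A sketch of this identification step using the sample ACF and PACF (assuming statsmodels is available; the simulated MA(1) with θ = 0.8 is purely illustrative):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)
eps = rng.normal(size=5_000)
y = eps[1:] + 0.8 * eps[:-1]   # simulated MA(1) with theta = 0.8

print(acf(y, nlags=5))    # cuts off after lag 1, consistent with MA(1)
print(pacf(y, nlags=5))   # dies away slowly, consistent with MA(1)
```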

25
Q

What are the residuals for a pure AR model?

A

They are simply the usual OLS residuals

26
Q

If we have fitted the correct model, what will our residuals ε̂_t look like?

A

They will resemble the true errors ε_t and consequently behave like white noise

27
Q

If an AR(1) model is fitted to data generated by an MA(1) process, how will the modelled residuals ε̂_t behave?

A

They will behave like an MA(2) process

28
Q

What does the residual autocorrelation function do?

A

We can use it to indicate the direction of the model misspecification, although it isn’t a perfect measure

29
Q

What can we conclude if the residual autocorrelations are insignificant?

A

That there is no evidence of model misspecification

30
Q

What can we conclude if an AR(1) model is fitted to data and the residuals have two significant autocorrelations?

A

This would indicate that an MA(1) component has been omitted

31
Q

What can we conclude if an AR(1) model is fitted to data and the residuals have significant autocorrelations that die away slowly?

A

This could indicate that a second AR component has been omitted

32
Q

Why isn’t the residual autocorrelation function perfect?

A

Because the residual autocorrelation ρ̃ doesn’t share quite the same distributional properties as ρ̂

33
Q

Drawbacks of the Portmanteau statistic

A

r > p + q needs to be selected, and different choices for r can yield different inferences about the presence of model misspecification.
It also obscures possibly valuable information about the direction of misspecification contained in the individual ρ̃_s
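For reference, a Portmanteau-type check can be run with the Ljung-Box statistic in statsmodels (a sketch; `resid` here is simulated white noise standing in for fitted-model residuals, and the tabular output assumes a recent statsmodels version):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

# resid: residuals from a fitted ARMA(p, q); here white noise as a stand-in
rng = np.random.default_rng(0)
resid = rng.normal(size=500)

# Different choices of r (the number of lags tested) can lead to different conclusions
print(acorr_ljungbox(resid, lags=[5, 10, 20]))
```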

34
Q

When should info criteria be used?

A

To choose between competing models that appear to satisfy the null hypothesis of no model misspecification

35
Q

What are the two most common info criteria?

A

Akaike information criterion (AIC)
Schwarz Bayesian information criterion (SBIC)

36
Q

How do we judge AIC and SBIC?

A

The smallest value is best
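A sketch of comparing candidate models by AIC and BIC and picking the order with the smallest value (assumes statsmodels; the simulated MA(1) data and the candidate orders are illustrative only, and statsmodels labels the SBIC as `bic`):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
eps = rng.normal(size=1_000)
y = eps[1:] + 0.5 * eps[:-1]   # true process: MA(1)

for order in [(1, 0, 0), (0, 0, 1), (1, 0, 1)]:
    res = ARIMA(y, order=order).fit()
    print(order, res.aic, res.bic)   # choose the order with the smallest criterion
```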

37
Q

What is choosing the model with the fewest parameters called?

A

Principle of parsimony

38
Q

Which info criterion imposes a harsher penalty function?

A

SBIC

39
Q

Why are ARMA models inherently difficult to estimate and when is this particularly the case?

A

They are hard to estimate because of the non-linearity associated with ML estimation. This is particularly the case if the sample size T is small or the MA order is high

40
Q

In the case of forecasting, what is s?

A

The forecast horizon

41
Q

What does ŷ_{T+s|T} denote?

A

The predictor of y_{T+s} based on Y_T (i.e. it uses information only up to time T)

42
Q

In forecasting, what is the prediction error given by?

A

y_{T+s} - ŷ_{T+s|T}

43
Q

What is the general optimal (i.e. minimum forecast MSE) forecast of y_{T+s}?

A

The mean of y_{T+s} conditional on information at time T. This has a forecast MSE equal to V(y_{T+s}|T)

44
Q

In general, what are the optimal predictor ŷ_{T+s|T} and MSE(ŷ_{T+s|T}) for an AR model?

A

ŷ_{T+s|T} = φ^s y_T

MSE(ŷ_{T+s|T}) = σ² (1 - φ^(2s)) / (1 - φ²)
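A minimal numeric check of these formulas (a sketch with assumed values φ = 0.8, σ² = 1 and last observation y_T = 2):

```python
import numpy as np

phi, sigma2, y_T = 0.8, 1.0, 2.0

for s in range(1, 6):
    forecast = phi**s * y_T                              # y_hat_{T+s|T}
    mse = sigma2 * (1 - phi**(2 * s)) / (1 - phi**2)     # forecast MSE
    print(s, forecast, mse)

# As s grows, the forecast decays towards E(y_t) = 0 and
# the MSE rises towards V(y_t) = sigma2 / (1 - phi^2), approx 2.78 here
```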

45
Q

In general, what are the optimal predictor ŷ_{T+s|T} and MSE(ŷ_{T+s|T}) for an MA model?

A

ŷ_{T+s|T} = -θ ε_T when s = 1,
and 0 when s > 1

MSE(ŷ_{T+s|T}) = σ² when s = 1,
and σ² (1 + θ²) when s > 1

46
Q

In general, what are the optimal predictor ŷ_{T+s|T} and MSE(ŷ_{T+s|T}) for an ARMA model?

A

ŷ_{T+s|T} = φ y_T - θ ε_T when s = 1,
and φ^(s-1) ŷ_{T+1|T} when s > 1

MSE(ŷ_{T+s|T}) = σ² when s = 1

47
Q

What are optimal forecast functions similar to and why?

A

They are similar to ACFs. The forecast function concerns the relationship between y_{t+s} and y_t, whilst the ACF considers the relationship between y_t and y_{t-s}, which for a stationary process is the same thing

48
Q

As s goes to infinity, what happens to our prediction ŷ_{T+s|T}?

A

It goes to E(y_t) = 0, the unconditional mean

49
Q

As s goes to infinity, what happens to our prediction MSE(ŷ_{T+s|T})?

A

MSE(ŷ_{T+s|T}) goes to V(y_t), the unconditional variance

50
Q

When do we need to be aware of spurious regressions?

A

Whenever the variables involved in a fitted regression model might be suspected of being random walks, especially when T is large

51
Q

How is the autocorrelation function an improvement on the autocovariance function?

A

The autocorrelation function is scale invariant

52
Q

What is the equation for the autocorrelation function?

A

ρ_s = γ_s / γ_0
In words, the covariance of y_t and y_{t-s} divided by the variance of y_t
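A sketch of computing ρ_s = γ_s / γ_0 directly from this definition (illustrative only):

```python
import numpy as np

def sample_acf(y, s):
    """Sample autocorrelation at lag s: gamma_s / gamma_0, using the overall sample mean."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    gamma_0 = np.mean((y - ybar) ** 2)
    gamma_s = np.mean((y[s:] - ybar) * (y[:-s] - ybar)) if s > 0 else gamma_0
    return gamma_s / gamma_0

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
print([round(sample_acf(y, s), 3) for s in range(4)])  # roughly [1.0, 0, 0, 0] for white noise
```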