Chapter 3 - Basic concepts - Part B Flashcards

1
Q

AR(p) is given by:

A

y_t = φ_1 y_(t−1) + φ_2 y_(t−2) + ⋯ + φ_p y_(t−p) + ε_t,
where φ_1, φ_2, …, φ_p are unknown parameters and ε_t is a standard white noise process.

2
Q

MA(q) is given by:

A

y_t = ε_t + θ_1 ε_(t−1) + … + θ_q ε_(t−q),
where θ_1, θ_2, …, θ_q are unknown parameters.

3
Q

ARMAX(p, q) is given by:

A

y_t = α + φ_1 y_(t−1) + ⋯ + φ_p y_(t−p) + ε_t + θ_1 ε_(t−1) + … + θ_q ε_(t−q)
(+ β_1 x_(1,t) + β_2 x_(2,t) + ⋯ + β_k x_(k,t)),
where x_(i,t), i = 1, …, k, denote (exogenous) regressors.

4
Q

How to implement ARMAX models in practice?

A
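The answer is missing from the export. As a hedged sketch (not necessarily the approach on the course slides), an ARMAX(p, q) model can be estimated in Python with statsmodels, whose ARIMA class accepts exogenous regressors; all data and parameter values below are made up for illustration:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: y depends on two exogenous regressors in X
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = 0.5 + X @ np.array([1.0, -0.5]) + rng.normal(size=200)

    # ARMAX(1, 1): ARMA(1, 1) dynamics plus the exogenous regressors
    res = sm.tsa.ARIMA(y, exog=X, order=(1, 0, 1)).fit()
    print(res.summary())   # reports the intercept, phi_1, theta_1, beta_1, beta_2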
5
Q

We estimate AR(p) with

A

OLS
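A minimal numpy sketch of "AR(p) by OLS": regress y_t on its own p lags. The simulated AR(2) data and parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 500
    phi1, phi2 = 0.5, 0.3                      # true AR(2) parameters
    y = np.zeros(T)
    for t in range(2, T):                      # simulate the AR(2) process
        y[t] = phi1 * y[t-1] + phi2 * y[t-2] + rng.normal()

    # regress y_t on (y_{t-1}, y_{t-2}): b = (X'X)^{-1} X'y
    X = np.column_stack([y[1:-1], y[:-2]])     # lag-1 and lag-2 columns
    b_ols, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    print(b_ols)                               # close to (0.5, 0.3)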

6
Q

We estimate MA(q) with

A

nonlinear least squares or maximum likelihood.
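Because the lagged errors ε_(t−1), …, ε_(t−q) are unobserved, OLS is infeasible. A sketch using maximum likelihood via statsmodels, on simulated MA(1) data with illustrative values:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    eps = rng.normal(size=501)
    y = eps[1:] + 0.4 * eps[:-1]                   # MA(1) with theta_1 = 0.4

    res = sm.tsa.ARIMA(y, order=(0, 0, 1)).fit()   # estimated by maximum likelihood
    print(res.params)                              # const, ma.L1 (theta_1 hat), sigma2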

7
Q

Remember, b_OLS =

A

(X′X)⁻¹X′y (see slide 6).

8
Q

Derive the unbiasedness and the variance of b_OLS, and give the assumptions that were made.

A

E[b_OLS] = β (unbiasedness, requiring E[ε|X] = 0), and
V[b_OLS] = (X′X)⁻¹X′ΩX(X′X)⁻¹, where Ω = E[εε′].
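A numpy sketch of these formulas. Ω is unknown in practice, so the version below plugs squared residuals into its diagonal (White's heteroskedasticity-robust estimator, an assumption rather than necessarily what the slides do):

    import numpy as np

    def ols_sandwich(y, X):
        """OLS estimates with sandwich variance (X'X)^-1 X'ΩX (X'X)^-1."""
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ y                  # b_OLS = (X'X)^-1 X'y
        e = y - X @ b                          # residuals
        meat = X.T @ (e[:, None] ** 2 * X)     # X' diag(e_t^2) X, i.e. X' Omega-hat X
        V = XtX_inv @ meat @ XtX_inv           # sandwich variance
        return b, np.sqrt(np.diag(V))          # standard errors (see card 9)

    # usage: b, se = ols_sandwich(y, X) for arrays y of shape (T,) and X of shape (T, k)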

9
Q

Standard errors of b_OLS are the square roots of

A

the diagonal elements of
V[b_OLS] = (X′X)⁻¹X′ΩX(X′X)⁻¹

10
Q

If ε_t is homoskedastic (E[ε_t²] = σ² for all t) and uncorrelated (E[ε_t ε_s] = 0 for all t ≠ s), Ω = σ²I, such that

A

V[b_OLS] = σ²(X′X)⁻¹

11
Q

If ε_t is heteroskedastic (E[ε_t²] = σ_t²) and uncorrelated, we have Ω = diag(σ_1², σ_2², …, σ_T²), such that

A

V[b_OLS] = (X′X)⁻¹(Σ_{t=1}^T σ_t² x_t x_t′)(X′X)⁻¹ (see slide 9).

12
Q

What is a difference between the regressors of the classical linear regression model and an AR(p) model?

A

The regressors of an AR(p) model are stochastic, as opposed to fixed.

13
Q

What does it imply that the regressors are stochastic?

A

It means that exact finite-sample results which hold in the 'classical' linear regression model y_t = x_t′β + ε_t with fixed x_t's do not hold in the time-series context.
⇒ Asymptotic results continue to hold, however.
For example, the OLS estimator of φ_1 in the AR(1) model y_t = φ_1 y_(t−1) + ε_t is not unbiased but remains consistent.

14
Q

Suppose a series y_t is generated from the AR(1) model y_t = φ_1 y_(t−1) + ε_t.
What is the distribution of the OLS estimator of φ_1?

A

See derivation (slides).
√T(^φ_1 − φ_1) ~ N(0, σ²γ_0⁻¹); this is an ASYMPTOTIC relation, i.e. only an approximation in finite samples.
(For the AR(1) model, γ_0 = σ²/(1 − φ_1²), so the asymptotic variance simplifies to 1 − φ_1².)

15
Q

The exact small-sample distribution can be obtained by means of …:

A

a. Monte Carlo simulation
b. see slide 15.
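A minimal Monte Carlo sketch for the OLS estimator of φ_1 in the AR(1) model; the true φ_1 = 0.9, T = 50 and the number of replications are illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    phi1, T, R = 0.9, 50, 10_000
    est = np.empty(R)
    for r in range(R):
        eps = rng.normal(size=T + 1)
        y = np.zeros(T + 1)
        for t in range(1, T + 1):                         # simulate the AR(1) process
            y[t] = phi1 * y[t - 1] + eps[t]
        est[r] = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])     # OLS estimate of phi_1
    print(est.mean() - phi1)   # negative: ^phi_1 is biased downward in small samples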

16
Q

Two ways of doing model selection

A
  • model selection criteria based on in-sample fit
  • out-of-sample forecasting
17
Q

Give two model selection criteria

A

AIC: AIC(k) = T log ^σ² + 2k
SIC: SIC(k) = T log ^σ² + k log T
⇒ Select the ARMA orders p and q that minimize AIC(k) or SIC(k)
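A sketch of order selection on these criteria, computing AIC(k) and SIC(k) directly from the residual variance of each fitted model (statsmodels' own .aic/.bic differ from these formulas by constants but rank models in roughly the same way):

    import numpy as np
    import statsmodels.api as sm

    # simulate an ARMA(1, 1) series as illustrative data
    y = sm.tsa.arma_generate_sample(ar=[1, -0.5], ma=[1, 0.3], nsample=300)
    T = len(y)

    crit = {}
    for p in range(3):
        for q in range(3):
            res = sm.tsa.ARIMA(y, order=(p, 0, q)).fit()
            s2 = np.mean(res.resid ** 2)                     # ^sigma^2
            k = p + q + 1                                    # parameters incl. intercept
            crit[(p, q)] = (T * np.log(s2) + 2 * k,          # AIC(k)
                            T * np.log(s2) + k * np.log(T))  # SIC(k)

    print(min(crit, key=lambda pq: crit[pq][0]))             # (p, q) minimizing AIC
    print(min(crit, key=lambda pq: crit[pq][1]))             # (p, q) minimizing SIC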

18
Q

Misspecification tests and diagnostic measures

A

Many tests aim to test whether the residuals of the ARMA model satisfy the white noise properties E[ε_t²] = σ² and E[ε_t ε_s] = 0 for t ≠ s.

19
Q

Misspecification tests : test of no residual autocorrelation

A

see slides 18 and 19 (e.g., portmanteau tests such as Ljung-Box, or LM-type tests)
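A hedged example, assuming the slides cover a portmanteau-type test: the Ljung-Box test of no residual autocorrelation via statsmodels:

    import numpy as np
    from statsmodels.stats.diagnostic import acorr_ljungbox

    resid = np.random.default_rng(0).normal(size=200)   # stand-in for ARMA residuals
    print(acorr_ljungbox(resid, lags=[10]))             # small lb_pvalue => reject no autocorrelation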

20
Q

Misspecification tests : test for homoskedasticity

A

often based on autocorrelations of squared residuals.
⇒ If rejected, standard errors of parameters should be adjusted or heteroskedasticity should be modelled explicitly.
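One test in this spirit is Engle's ARCH-LM test, which regresses squared residuals on their own lags (an assumption about the slides' exact choice):

    import numpy as np
    from statsmodels.stats.diagnostic import het_arch

    resid = np.random.default_rng(0).normal(size=200)   # stand-in for ARMA residuals
    lm_stat, lm_pval, f_stat, f_pval = het_arch(resid, nlags=5)
    print(lm_pval)                                      # small p-value => reject homoskedasticity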

21
Q

Misspecification tests : test of normality

A

see slide 20 (e.g., the Jarque-Bera test)
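Assuming slide 20 covers the usual Jarque-Bera test (based on residual skewness and kurtosis), a sketch:

    import numpy as np
    from scipy.stats import jarque_bera

    resid = np.random.default_rng(0).normal(size=200)   # stand-in for ARMA residuals
    stat, pval = jarque_bera(resid)
    print(pval)                                         # small p-value => reject normality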

22
Q

3 different types of forecasts

A
  1. A point forecast of y_(T+h)
  2. An interval forecast: (^L_(T+h|T), ^U_(T+h|T))
  3. A density forecast: f(y_(T+h)|Y_T)
23
Q

The optimal h-step-ahead point forecast depends on a …

A

loss function.
We should use the point forecast ^y_(T+h|T) that minimizes the expected value of the loss function.
The form of the loss function depends on the variable that we are forecasting.

24
Q

In many cases, the relevant loss function is difficult to specify. Thus, we use the forecast error A. Most often, we assume that the forecast user has a B, that is, C. Minimising C, or the D, we find that the optimal point forecast is the E, that is, F.

A

a. e_(T+h|T) = y_(T+h) − ^y_(T+h|T)
b. squared loss function
c. Loss_(T+h|T) = e_(T+h|T)²
d. mean squared prediction error (MSPE)
e. conditional mean of y_(T+h)
f. ^y_(T+h|T) = E[y_(T+h)|Y_T]

25
Q

Derive the optimal point forecast of y_(T+1) for an AR(1) model, given:
- E[ε_t|Y_(t−1)] = 0
- E[ε_t²|Y_(t−1)] = σ²

A

See slide 25.
^y_(T+1|T) = E[y_(T+1)|Y_T]
= E[φ_1 y_T + ε_(T+1)|Y_T]
= φ_1 y_T

26
Q

Derive the relationship between e_(T+1|T) and ε_(T+1) for the one-step-ahead point forecast in the AR(1) model.

What conclusions can you draw?

A
  • e_(T+1|T) = y_(T+1) − ^y_(T+1|T) = y_(T+1) − φ_1 y_T = ε_(T+1)
  • Hence, the variance of the forecast error V[e_(T+1|T)] equals σ², which is the variance of ε_t and also the conditional variance V[y_(T+1)|Y_T].
27
Q

What can you say about the two-step-ahead point forecast of an AR(1) model?
And three steps ahead?

A

See slides 27-28: iterating the forecast gives ^y_(T+2|T) = φ_1² y_T and ^y_(T+3|T) = φ_1³ y_T.

28
Q

Generalise the point forecasts of the AR(1) model for h-steps ahead.

A

See slide 29: ^y_(T+h|T) = φ_1^h y_T, which converges to 0 as h → ∞ when |φ_1| < 1.

29
Q

Consider the AR(1) model with intercept, y_t = α + φ_1 y_(t−1) + ε_t, with E[ε_t|Y_(t−1)] = 0 and E[ε_t²|Y_(t−1)] = σ².

What are the optimal point forecasts of y_(T+1), y_(T+2), and three steps ahead?

What is the general h-step-ahead point forecast?

What happens as h → ∞ (assuming |φ_1| < 1)?

A

See slide 30. Iterating the model forward:
^y_(T+1|T) = α + φ_1 y_T
^y_(T+2|T) = α(1 + φ_1) + φ_1² y_T
^y_(T+3|T) = α(1 + φ_1 + φ_1²) + φ_1³ y_T
In general, ^y_(T+h|T) = α(1 + φ_1 + ⋯ + φ_1^(h−1)) + φ_1^h y_T.
As h → ∞ with |φ_1| < 1, the forecast converges to α/(1 − φ_1), the unconditional mean of y_t.
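A small numeric sketch of these forecasts and their convergence to α/(1 − φ_1); the parameter values are illustrative:

    alpha, phi1, y_T = 1.0, 0.8, 10.0   # illustrative values
    f = y_T
    for h in range(1, 21):
        f = alpha + phi1 * f            # ^y_(T+h|T) = alpha + phi1 * ^y_(T+h-1|T)
        print(h, round(f, 4))
    print(alpha / (1 - phi1))           # unconditional mean 5.0; the forecasts approach it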

30
Q

The convergence of the point forecasts (with intercept) as the forecast horizon increases is even easier to see by rewriting the AR(1) model as A.
Using this representation, derive the conclusion that as h increases, the point forecast of the AR(1) model with intercept converges to the B.

A

a. y_t − μ = φ_1(y_(t−1) − μ) + ε_t, with μ = α/(1 − φ_1)
b. unconditional mean
See slide 32 for the derivation; iterating gives ^y_(T+h|T) − μ = φ_1^h(y_T − μ), which goes to 0 as h → ∞.

31
Q

What is the effect of estimation uncertainty?

A

In practice, the true value(s) of the model parameter(s) are unknown. Instead, we have to use estimated parameters:
^y_(T+1|T) = ^φ_1 y_T.
Hence, e_(T+1|T) = y_(T+1) − ^y_(T+1|T)
= φ_1 y_T + ε_(T+1) − ^φ_1 y_T
= ε_(T+1) + (φ_1 − ^φ_1) y_T.

32
Q

What is the effect of estimation uncertainty in point forecasts?

A

It mainly affects the variance of the forecast error V[e_(T+1|T)], which becomes larger.
See slide 35.

33
Q

What are the effects of model misspecification on point forecasts?

A

see slide 37.

34
Q

How to evaluate point forecasts?

A

Two ways: absolute or relative evaluation.
Absolute evaluation: what is the quality of the forecasts from one specific model?
Relative evaluation: what is the quality of the forecasts from multiple competing models, relative to each other?

35
Q

What are the 3 desirable properties of point forecasts?

A
  1. Unbiasedness: forecast errors have zero mean, E[e_(t+1|t)] = 0. ⇒ Straightforward to examine by testing whether the mean of the forecast errors differs significantly from 0.
  2. Accuracy: the MSPE should be as small as possible.
    Recall that the point forecast ^y_(t+1|t) (usually) is taken to be the one which minimizes
    E[e_(t+1|t)²] = E[(y_(t+1) − ^y_(t+1|t))²]
    Note that the MSPE can be decomposed as variance plus squared bias:
    E[e_(t+1|t)²] = V[e_(t+1|t)] + (E[e_(t+1|t)])²
  3. Efficiency/optimality: it should not be possible to forecast the forecast error itself with any information available at time t (see slide 44 and the sketch below).
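A sketch of checking properties 1 and 3 on a series of forecast errors: a t-test of zero mean, and a Mincer-Zarnowitz-style efficiency regression of the error on time-t information (here simply the forecast itself; slide 44 may use other regressors). The data are random stand-ins:

    import numpy as np
    from scipy import stats
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    yhat = rng.normal(size=200)                  # stand-in forecasts
    e = rng.normal(size=200)                     # stand-in forecast errors

    # 1. Unbiasedness: t-test of E[e] = 0
    print(stats.ttest_1samp(e, 0.0).pvalue)

    # 3. Efficiency: the error should be unpredictable from time-t information
    mz = sm.OLS(e, sm.add_constant(yhat)).fit()
    print(mz.pvalues)                            # significant coefficients => inefficiency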
36
Q

Evaluating point forecasts

A

see slides 42-43

37
Q

Comparing predictive accuracy

A

see slide 45 (e.g., the Diebold-Mariano test)
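The standard tool here is the Diebold-Mariano test (an assumption about what slide 45 covers); a minimal sketch under squared loss for one-step-ahead errors, ignoring the HAC correction needed for h > 1:

    import numpy as np
    from scipy import stats

    def diebold_mariano(e1, e2):
        """DM test of equal MSPE for two one-step forecast error series."""
        d = e1 ** 2 - e2 ** 2                        # loss differential under squared loss
        dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
        pval = 2 * (1 - stats.norm.cdf(abs(dm)))     # dm is asymptotically N(0, 1)
        return dm, pval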

38
Q

How to construct density forecasts?

A

see slide 47 (a sketch covering density and interval forecasts follows the next card)

39
Q

How to construct interval forecasts?

A

see slide 48; a sketch follows.
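A hedged sketch covering both cards 38 and 39: if ε_t is normal, the one-step density forecast of the AR(1) model is N(φ_1 y_T, σ²), and a 95% interval forecast follows directly. Parameter values are illustrative; for estimated models, statsmodels' get_forecast automates this.

    from scipy import stats

    phi1, sigma, y_T = 0.8, 1.0, 10.0        # illustrative parameter values
    mean = phi1 * y_T                        # point forecast ^y_(T+1|T)

    lower, upper = mean - 1.96 * sigma, mean + 1.96 * sigma   # 95% interval forecast
    density = stats.norm(loc=mean, scale=sigma)               # density forecast N(mean, sigma^2)
    print((lower, upper), density.pdf(mean))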