4th year Flashcards

(16 cards)

1
Q

Define stationarity

A

A time series $\{y_t\}$ is said to be stationary if

(i) the mean of $y_t$ does not depend on $t$, and
(ii) the covariance between $y_t$ and $y_{t+s}$ is a function of $s$ only.

2
Q

Why is stationarity a useful assumption for the estimation of the covariance between $y_t$ and $y_{t+s}$?

A

This assumption is needed to estimate the common mean by averaging $y_1, \ldots, y_n$, and the common covariance between $y_t$ and $y_{t+s}$ by averaging over all such pairs of data.

3
Q

System equation and observation equation for the Kalman filter

A

The Kalman filter has
System equation: $x_t = F x_{t-1} + v_t$, $t = 1, 2, \ldots$;
Observation equation: $y_t = G x_t + w_t$, $t = 1, 2, \ldots$.

4
Q

Statistical properties of the error terms in the Kalman filter

A

The assumptions on the error terms are:
$\{v_t\}$ are i.i.d. multivariate normal with mean vector $0$ and covariance matrix $Q$;
$\{w_t\}$ are i.i.d. multivariate normal with mean vector $0$ and covariance matrix $R$;
$\{w_t\}$ are independent of $\{v_t\}$.
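As an illustration, here is a minimal NumPy simulation of this state-space model under the above assumptions; the particular matrices $F$, $G$, $Q$, $R$ are arbitrary choices for the sketch, not from the cards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative matrices: 2-dimensional state, scalar observation.
F = np.array([[0.8, 0.1],
              [0.0, 0.7]])    # system matrix
G = np.array([[1.0, 0.0]])    # observation matrix
Q = 0.1 * np.eye(2)           # covariance of system noise v_t
R = np.array([[0.5]])         # covariance of observation noise w_t

n = 100
x = np.zeros(2)
y = np.empty(n)
for t in range(n):
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)          # x_t = F x_{t-1} + v_t
    y[t] = (G @ x + rng.multivariate_normal(np.zeros(1), R))[0]  # y_t = G x_t + w_t
```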

5
Q

What problem does the Kalman filter solve?

A

The Kalman filter solves the problem of finding
$\hat{x}_{t|t} = E[x_t \mid y_1, \ldots, y_t]$ and $P_{t|t} = \operatorname{var}\{x_t - \hat{x}_{t|t}\}$.

6
Q

How does the Kalman filter solve the problem?

A

It solves the problem recursively, finding
$\hat{x}_{t|t-1} = E[x_t \mid y_1, \ldots, y_{t-1}]$ and $P_{t|t-1} = \operatorname{var}\{x_t - \hat{x}_{t|t-1}\}$ first in each iteration.
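A sketch of one such iteration in Python (NumPy), assuming known $F$, $G$, $Q$, $R$ and the standard predict/update recursions; function and variable names are mine, not from the cards.

```python
import numpy as np

def kalman_step(x_post, P_post, y, F, G, Q, R):
    """One Kalman filter iteration: predict xhat_{t|t-1}, then update to xhat_{t|t}."""
    # Prediction: xhat_{t|t-1} = F xhat_{t-1|t-1}, P_{t|t-1} = F P_{t-1|t-1} F' + Q
    x_pred = F @ x_post
    P_pred = F @ P_post @ F.T + Q
    # Update on observing y_t: innovation and its covariance
    e = y - G @ x_pred                           # e_t = y_t - yhat_{t|t-1}
    S = G @ P_pred @ G.T + R                     # var(e_t)
    K = P_pred @ G.T @ np.linalg.inv(S)          # Kalman gain
    x_post = x_pred + K @ e                      # xhat_{t|t}
    P_post = P_pred - K @ G @ P_pred             # P_{t|t}
    return x_post, P_post
```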

7
Q

What other models (than the ARMA type) can be fitted using the Kalman filter?

A

An example is the CAPM with time-varying beta. Other examples include the random walk plus noise model and the local linear trend model.
Dynamic linear model: $y_t = a_t + b_t x_t + \varepsilon_t$, $\varepsilon_t \sim \text{iid } N(0, \sigma^2)$; $a_t = a_{t-1} + v_t$, $v_t \sim \text{iid } N(0, \sigma_a^2)$; $b_t = b_{t-1} + w_t$, $w_t \sim \text{iid } N(0, \sigma_b^2)$; $\{\varepsilon_t\}$, $\{v_t\}$ and $\{w_t\}$ are independent.
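This dynamic linear model fits the Kalman filter framework with state $(a_t, b_t)'$, $F = I_2$, and a time-varying observation row $G_t = (1, x_t)$; a sketch of that mapping follows (the function and argument names are mine):

```python
import numpy as np

def dlm_matrices(x_t, sigma2, sigma2_a, sigma2_b):
    """State-space matrices at time t for the dynamic linear model.

    The state is (a_t, b_t)'; each component follows a random walk.
    """
    F = np.eye(2)                          # a_t = a_{t-1} + v_t, b_t = b_{t-1} + w_t
    G = np.array([[1.0, x_t]])             # y_t = a_t + b_t x_t + eps_t
    Q = np.diag([sigma2_a, sigma2_b])      # cov of (v_t, w_t)', independent noises
    R = np.array([[sigma2]])               # var of eps_t
    return F, G, Q, R
```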

8
Q

Define $\sigma_t^2$ for a GARCH model

A

Definition: $\sigma_t^2 = \operatorname{Var}\{y_t \mid y_{t-1}, y_{t-2}, \ldots\}$.
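For concreteness, here is a simulation sketch assuming the standard GARCH(1,1) recursion $\sigma_t^2 = \omega + \alpha y_{t-1}^2 + \beta \sigma_{t-1}^2$ with $y_t = \sigma_t \varepsilon_t$; the recursion itself is not stated on the card, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; alpha + beta < 1 gives a finite unconditional variance.
omega, alpha, beta = 0.1, 0.1, 0.8

n = 1000
y = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1 - alpha - beta)    # start at the unconditional variance
y[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    # sigma_t^2 = Var(y_t | y_{t-1}, y_{t-2}, ...) under the GARCH(1,1) recursion
    sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    y[t] = np.sqrt(sigma2[t]) * rng.standard_normal()   # y_t = sigma_t * eps_t
```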

9
Q

Assumptions on $\{\varepsilon_t\}$ for GARCH

A

It is assumed that $\{\varepsilon_t\}$ are i.i.d. with mean zero and variance one, and that $\varepsilon_t$ is independent of $y_{t-1}, y_{t-2}, \ldots$.

10
Q

Autocorrelation

A

$\rho(s) = \operatorname{corr}\{y_t, y_{t+s}\}$, $s = 0, \pm 1, \pm 2, \ldots$.
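Under stationarity (card 2), $\rho(s)$ can be estimated by averaging over all pairs of observations $s$ apart; a sketch of the sample autocorrelation:

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelations rho_hat(s) for s = 0, 1, ..., max_lag."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()
    c0 = np.sum((y - ybar) ** 2) / n          # sample autocovariance at lag 0
    acf = [1.0]
    for s in range(1, max_lag + 1):
        cs = np.sum((y[: n - s] - ybar) * (y[s:] - ybar)) / n  # lag-s autocovariance
        acf.append(cs / c0)
    return np.array(acf)
```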

11
Q

Partial autocorrelation

A

The partial autocorrelation function is $\phi_{kk}$, $k = 1, 2, \ldots$, where $\phi_{kk}$ is the coefficient of $y_{t-k}$ in the best linear predictor of $y_t$ in terms of $y_{t-1}, y_{t-2}, \ldots, y_{t-k}$.
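Following this definition, $\phi_{kk}$ can be computed by solving the Yule-Walker equations for the order-$k$ best linear predictor and taking the last coefficient; a sketch, reusing the sample_acf function from the autocorrelation card:

```python
import numpy as np
from scipy.linalg import toeplitz

def sample_pacf(y, max_lag):
    """phi_kk for k = 1, ..., max_lag via Yule-Walker on the sample ACF."""
    rho = sample_acf(y, max_lag)                  # from the previous sketch
    pacf = []
    for k in range(1, max_lag + 1):
        R = toeplitz(rho[:k])                     # correlations of y_{t-1}, ..., y_{t-k}
        phi = np.linalg.solve(R, rho[1 : k + 1])  # best linear predictor coefficients
        pacf.append(phi[-1])                      # phi_kk: coefficient of y_{t-k}
    return np.array(pacf)
```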

12
Q

Describe Kalman smoothing

A

Kalman smoothing works backwards from $t = n$ to $t = 1$ to give
$\hat{x}_{t|n} = E[x_t \mid y_1, \ldots, y_n]$ and $P_{t|n} = \operatorname{var}\{x_t - \hat{x}_{t|n}\}$.
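A sketch of one backward step, assuming the standard fixed-interval (Rauch-Tung-Striebel) recursion and that the filtered ($\hat{x}_{t|t}$, $P_{t|t}$) and predicted ($\hat{x}_{t+1|t}$, $P_{t+1|t}$) quantities were stored during the forward pass:

```python
import numpy as np

def smoother_step(x_filt, P_filt, x_pred_next, P_pred_next,
                  x_smooth_next, P_smooth_next, F):
    """One backward smoothing step, from time t+1 quantities to time t."""
    J = P_filt @ F.T @ np.linalg.inv(P_pred_next)                # smoothing gain
    x_smooth = x_filt + J @ (x_smooth_next - x_pred_next)        # xhat_{t|n}
    P_smooth = P_filt + J @ (P_smooth_next - P_pred_next) @ J.T  # P_{t|n}
    return x_smooth, P_smooth
```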

13
Q

Define $\sigma_t^2$ in the GARCH model

A

In the GARCH(1,1) model, $\sigma_t^2$ is the conditional variance of $y_t$ given $y_{t-1}, y_{t-2}, \ldots$ (the infinite past).

14
Q

Why is the Kalman filter used in maximum likelihood estimation of ARMA models?

A

The Kalman filter is used in ML estimation of ARMA models because it gives as by-products the innovations $e_t = y_t - \hat{y}_{t|t-1}$ for $t = 1, 2, \ldots, n$, which are uncorrelated and have the same log-likelihood as $y_1, \ldots, y_n$.

15
Q

What is the reduced log-likelihood of the Kalman filter?

A

The reduced log-likelihood is the log-likelihood of $y_1, \ldots, y_n$ given the AR and MA parameters, with $\sigma^2$ replaced by its optimal solution in terms of $\phi$ and $\theta$.

16
Q

How is the reduced log likelihood maximised using the Kalman filter?

A

To maximise a function numerically, one must be able to compute it for given values of its arguments. The Kalman filter makes the (reduced) log-likelihood function computable by supplying the $e_t$'s and $\tau_t$'s corresponding to the given parameter values.
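A sketch of evaluating it from the Kalman filter outputs, assuming $e_t$ are the innovations and $\tau_t$ their variances scaled so that $\operatorname{var}(e_t) = \sigma^2 \tau_t$ (this scaling convention is an assumption here); profiling out $\sigma^2$ gives $\hat{\sigma}^2 = n^{-1} \sum e_t^2 / \tau_t$:

```python
import numpy as np

def reduced_loglik(e, tau):
    """Reduced (profile) Gaussian log-likelihood from Kalman filter by-products.

    e[t]   : innovation e_t = y_t - yhat_{t|t-1}
    tau[t] : var(e_t) / sigma^2
    """
    e, tau = np.asarray(e), np.asarray(tau)
    n = len(e)
    sigma2_hat = np.mean(e ** 2 / tau)     # optimal sigma^2 given phi and theta
    # With sigma2_hat plugged in, the sum of e_t^2 / (sigma2_hat * tau_t) equals n.
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * n * np.log(sigma2_hat)
            - 0.5 * np.sum(np.log(tau))
            - 0.5 * n)
```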