Chapter 6 - Brooks Flashcards
(24 cards)
what does it mean that we are looking at univariate time series
Univariate means we restrict ourselves to the information contained in past values of the variable itself, together with current and past values of the error term.
Univariate models therefore contrast with structural models, which are multivariate in nature.
how do time series models and theory go together
Usually they don't. Time series modelling is more of an empirical exercise: we look for patterns in the data and build a model that fits them.
Linear regression, on the other hand, typically starts from a financial theory and tests whether it holds in the data.
downside with structural models
they have a tendency to perform poorly at out-of-sample forecasting/prediction
define a white noise series
Constant expected value, constant variance, and, very importantly, autocovariance and autocorrelation equal to 0 at all non-zero lags.
the most important outcome of the white noise series definition
A completely independent series: no observation carries information about any other.
what can we say about the ACF of a white noise series
If we assume that the y_t series is normally distributed, then the sample autocorrelation coefficients are approximately normally distributed with mean 0 and variance 1/T.
why do we care that the ACF of a white noise series (with normally distributed y_t) has mean 0 and variance 1/T?
Given a time series, we can compute the sample ACF and then test whether the coefficients are statistically significantly different from 0 under the normality assumption.
This can therefore be used to test whether a series behaves as a white noise series or not.
give the quick confidence test for a white noise process
We compute the sample ACF and use the bounds +- 1.96 x 1/sqrt(T).
If the ACF at some lag(s) falls outside these bounds, the series is very likely not white noise.
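A minimal Python sketch of this bounds check (assuming numpy is available; the series y is simulated placeholder data and the lag count is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(500)                          # simulated white noise, T = 500
T = len(y)
yc = y - y.mean()
bound = 1.96 / np.sqrt(T)                             # approximate 95% bound under H0
for lag in range(1, 11):
    r = np.sum(yc[lag:] * yc[:-lag]) / np.sum(yc**2)  # sample ACF at this lag
    status = "outside" if abs(r) > bound else "inside"
    print(f"lag {lag}: acf = {r:+.3f} ({status} +/-{bound:.3f})")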
what is the quick portmanteau way to test whether a series is white noise or not
Use the Box-Pierce Q-statistic:
Q = T ∑_{l=1}^{m} p_l^2
Under the null, each p_l is normally distributed with mean 0 and variance 1/T. If the p_l were standard normal, the sum of squares would be chi-squared with m degrees of freedom; since the variance is 1/T rather than 1, we multiply by T, giving Q ~ chi-squared with m degrees of freedom.
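A minimal sketch of computing Q by hand (assuming numpy and scipy; y is placeholder data and m = 10 is an arbitrary choice):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.standard_normal(500)              # placeholder series, T = 500
T, m = len(y), 10
yc = y - y.mean()
rho = np.array([np.sum(yc[l:] * yc[:-l]) / np.sum(yc**2) for l in range(1, m + 1)])
Q = T * np.sum(rho**2)                    # Box-Pierce statistic
p = stats.chi2.sf(Q, df=m)                # compare against chi-squared with m dof
print(f"Q = {Q:.2f}, p-value = {p:.3f}")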
derive the Box-Pierce from scratch
We start by wanting to test whether the time series is white noise or not. We do this by testing the ACF, because if the ACF is not statistically different from 0 at all lags, the behavior corresponds to white noise.
we find the sample ACF for lags up to some number m.
The assumption is that the ACF values are normally distributed with mean 0 and variance 1/T.
when we square the ACF, we get: p_l^2.
This is actually a standard normal variable scaled by a constant (its standard deviation), squared:
p_l^2 = (std_normal x std_dev)^2
= (std_normal x sqrt(1/T))^2
= (std_normal)^2 x (1/T)
= chi_squared_{1} x (1/T)
So, each squared sample ACF is a chi-squared distributed random variable with 1 degree of freedom, multiplied by 1/T.
Therefore, summing the m terms (assumed independent under the null), we know that ∑_{l=1}^{m} p_l^2 ~ chi_squared_{m} x (1/T).
To get it clean, we multiply by T.
The outcome is that:
T ∑_{l=1}^{m} p_l^2 ~ chi-squared distributed with m degrees of freedom.
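A quick Monte Carlo sanity check of this result (a sketch assuming numpy and scipy; T, m, and the replication count are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, m, reps = 500, 5, 2000
qs = np.empty(reps)
for i in range(reps):
    y = rng.standard_normal(T)            # a fresh white-noise series each time
    yc = y - y.mean()
    rho = np.array([np.sum(yc[l:] * yc[:-l]) / np.sum(yc**2) for l in range(1, m + 1)])
    qs[i] = T * np.sum(rho**2)            # the Box-Pierce statistic
print("mean of Q:", qs.mean().round(2), "vs chi-squared mean m =", m)
print("5% rejection rate:", np.mean(qs > stats.chi2.ppf(0.95, m)))  # should be near 0.05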
what do we need to be aware of when considering tests like the Box-Pierce
Only one autocorrelation coefficient needs to be statistically significant for the joint null to be rejected, but the test does not tell us which lag(s) are responsible.
elaborate on Box-Pierce in small samples
It is biased in small samples. It is better to use the Ljung-Box statistic, which applies a small-sample correction: Q* = T(T+2) ∑_{l=1}^{m} p_l^2/(T-l), also chi-squared with m degrees of freedom under the null.
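In practice the Ljung-Box test is available off the shelf; a sketch assuming statsmodels is installed (recent versions return a DataFrame of statistics and p-values):

import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
y = rng.standard_normal(500)              # placeholder series
print(acorr_ljungbox(y, lags=[5, 10]))    # lb_stat and lb_pvalue at each lag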
elaborate on Wold’s decomposition theorem
Any stationary series can be decomposed into the sum of two unrelated processes:
1) a purely deterministic part,
2) a purely stochastic part, which will be an MA(∞) series.
elaborate on Campbell's definition
Broadly, he defined a non-linear data generation process as:
y_t = f(u_t, u_{t-1}, …)
where the value of the series is related non-linearly to the current and past values of the error term.
More specifically, he defined:
y_t = g(u_{t-1}, u_{t-2}, …) + u_t sigma^2(u_{t-1}, u_{t-2}, …)
g is a function of past error terms only, and sigma^2 is a variance term.
sigma^2(…) is a function of past errors that gives the variance; it is multiplied by the current error term, and this product is what ultimately produces the uncertainty in the series.
We can use this form to get different types of properties (a toy simulation follows the list):
Some models are linear in both g (mean) and sigma^2 (variance).
Some models are non-linear in g (mean) and linear in sigma^2 (variance).
Some are non-linear in variance and linear in mean.
Some are non-linear in both mean and variance.
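A toy simulation of the linear-mean, non-linear-variance case (a sketch assuming numpy; here g(.) = 0, so the mean is trivially linear while the variance is a non-linear function of the past error; a_0 and a_1 are arbitrary):

import numpy as np

rng = np.random.default_rng(4)
T, a0, a1 = 10_000, 0.1, 0.5
u = rng.standard_normal(T)                # i.i.d. error terms u_t
y = np.zeros(T)
for t in range(1, T):
    sigma2 = a0 + a1 * u[t - 1] ** 2      # variance depends on the past error
    y[t] = u[t] * np.sqrt(sigma2)         # y_t = g(.) + u_t x sigma: here g(.) = 0
print("sample variance:", y.var().round(3), "vs theoretical a0 + a1 =", a0 + a1)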
what should we do for model checking
Use the Ljung-Box test on the residuals to see if they behave as white noise or not. If they do not, it means the model is not accounting for all the patterns in the data; see the sketch below.
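A sketch of this residual check (assuming statsmodels; the AR(1) data and model order are placeholders):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)
y = np.zeros(500)
for t in range(1, 500):                       # simulate an AR(1) series
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
res = ARIMA(y, order=(1, 0, 0)).fit()         # fit a correctly specified AR(1)
print(acorr_ljungbox(res.resid, lags=[10]))   # high p-value: residuals look like white noise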
Name examples of models that are linear in both the mean and variance
CLRM, ARMA
name examples of models that are linear in mean, but non linear in variance
GARCH
why do GARCH models not account for asymmetries
They square the lagged residuals, so the sign is lost: positive and negative shocks of the same size have identical effects on the conditional variance.
elaborate on leverage effects
A fall in the stock price causes the equity value to drop while debt remains the same, so the firm's leverage rises and shareholders now perceive the equity as riskier. Volatility therefore tends to react more to negative shocks than to positive ones.
how can we model asymmetries
GJR or EGARCH
elaborate on GJR
It is a regular GARCH model, but we add another term to account for the sign of the shock.
Looks like this:
GJR(1,1,1)
sigma_t^2 = a_0 + a_1 u_{t-1}^2 + a_2 sigma_{t-1}^2 + a_3 u_{t-1}^2 B_{t-1},
where B_{t-1} is 1 if u_{t-1} is less than 0, and B_{t-1} is 0 otherwise.
This keeps a normal GARCH(1,1) model and just adds a term that can account for leverage effects. If they are present, they can be modeled in; the extra term only kicks in when shocks are negative. See the sketch below.
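A sketch of estimating a GJR model (assuming the third-party arch package; the returns series is a placeholder, so the estimates themselves are meaningless):

import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
returns = rng.standard_normal(1000)           # placeholder returns; use real data
# o=1 adds the asymmetry term, turning GARCH(1,1) into GJR-GARCH(1,1,1)
res = arch_model(returns, mean='Constant', vol='GARCH', p=1, o=1, q=1).fit(disp='off')
print(res.params)                             # gamma[1] is the asymmetry coefficient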
introduce testing under ML
The core idea is that we look at whether the LLF value drops significantly or not when we add some restriction to the model.
By comparing the LLF values of an unrestricted model and a restricted model, we can see whether the likelihood function agrees with the restriction or not.
If the LLF barely moves for the restricted model, it means that the restrictions are supported by the data.
There are 3 tests that are ultimately based on ML (a numeric sketch of the LR idea follows the list):
1) LR
2) Wald
3) Lagrange multiplier
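A minimal numeric sketch of the LR idea (assuming scipy; the LLF values and number of restrictions are hypothetical):

from scipy import stats

llf_u = -1042.3                               # hypothetical unrestricted LLF
llf_r = -1047.8                               # hypothetical restricted LLF
m = 2                                         # hypothetical number of restrictions
LR = 2 * (llf_u - llf_r)                      # likelihood ratio statistic
print("LR =", LR, "p-value =", stats.chi2.sf(LR, df=m))  # small p: reject restrictions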
elaborate on LR