Chapter 3 - Conditional heteroskedastic models Flashcards
(48 cards)
what is the objective here
Modeling volatility of asset returns
important to understand about volatility
Not observable. We can only estimate it
what is implied volatility
The volatility backed out from the Black-Scholes (BS) model: the sigma that makes the BS price match the observed option price.
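A minimal numerical sketch of that inversion (not from the chapter; it assumes a European call, scipy being available, and purely illustrative inputs): find the sigma that reproduces the quoted option price.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Implied volatility: the sigma that matches the observed price (root-finding)."""
    return brentq(lambda s: bs_call_price(S, K, T, r, s) - price, 1e-6, 5.0)

# Illustrative inputs: a call quoted at 10.45 with S=100, K=100, T=1, r=5%
# implies a volatility of roughly 0.20.
print(implied_vol(10.45, S=100.0, K=100.0, T=1.0, r=0.05))
```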
weakness of IV
It relies on the model's assumptions, such as the underlying price following a geometric Brownian motion.
elaborate on the characteristics of volatility
1) Tendency to cluster (periods of high volatility tend to be followed by high volatility)
2) jumps are rare (volatility evolves over time in a fairly continuous manner)
3) mean reverting (generally speaking, stationary: it varies within a fairly fixed range)
4) leverage effect (volatility reacts differently to a price increase of X vs a price decrease of X)
elaborate on the foundation of volatility study
The foundation is that log returns themselves are serially uncorrelated, or show only minor low-order autocorrelation, yet the series is not independent: volatility is a dependent series. This means there is structure in the data, but the structure does not show up as autocorrelation of the returns themselves.
By the 4 characteristics of volatility, we know that there is structure there. We want to capture this structure.
Empirically, we can see the dependence by plotting the ACF of the magnitudes (absolute values or squares) of the log returns, as in the sketch below.
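A minimal sketch of that empirical check (the series here is a simulated stand-in with volatility clustering, not real data): the ACF of the returns themselves is close to zero, while the ACF of the squares shows clear dependence.

```python
import numpy as np
from statsmodels.graphics.tsaplots import plot_acf
import matplotlib.pyplot as plt

# Stand-in series with volatility clustering; in practice: log returns of an asset,
# e.g. np.log(prices).diff().dropna().
rng = np.random.default_rng(0)
vol = np.repeat(rng.uniform(0.5, 2.0, size=40), 25)  # slowly varying volatility regimes
log_returns = vol * rng.standard_normal(1000)

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(log_returns, lags=30, ax=axes[0], title="ACF of log returns (near zero)")
plot_acf(log_returns ** 2, lags=30, ax=axes[1], title="ACF of squared log returns (significant)")
plt.tight_layout()
plt.show()
```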
what can we say about the equation for conditional mean of log returns
It should be simple (e.g. just a constant, or a low-order ARMA term), since there is little evidence of serial dependence in the returns themselves.
discuss the 4 steps in building a volatility model
1) specify a mean equation by testing for serial dependence in the data, and build a model to remove linear dependence from the return series.
2) use the residuals of the mean equation to test for ARCH effects.
3) if ARCH effects are present, specify a volatility model.
4) check the fitted model and refine it if necessary.
(a code sketch of this workflow follows below)
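A hedged sketch of the four steps in Python, assuming the third-party arch package and statsmodels are installed; the return series is a placeholder, and with pure white noise step 3 would normally not be reached.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox
from arch import arch_model

# Stand-in return series; in practice this would be daily (percentage) log returns.
rng = np.random.default_rng(0)
returns = rng.standard_normal(1000)

# 1) Mean equation: asset returns show little autocorrelation, so a constant mean is used here.
resid = returns - returns.mean()

# 2) Test the squared residuals of the mean equation for ARCH effects (Ljung-Box on the squares).
print(acorr_ljungbox(resid ** 2, lags=[10]))

# 3) If ARCH effects are present, specify a volatility model, e.g. GARCH(1,1).
fit = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
print(fit.summary())

# 4) Check the fit: standardized residuals should show no remaining ARCH effects.
std_resid = fit.resid / fit.conditional_volatility
print(acorr_ljungbox(std_resid ** 2, lags=[10]))
```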
what are ARCH effects
conditional heteroskedasticity, i.e. serial dependence in the squared residuals of the mean equation
elaborate on conditional heteroskedasticity, and regular heteroskedasticity
the difference is that heteroskedasticity refers to the variance not being constant; the variance has a structure.
Conditional heteroskedasticity refers to a variance structure where the variance depends on past information.
what is this:
“The expected value of the squared difference between a value and the mean of that value”
Variance
elaborate on what we are actually trying to do in this chapter
We want to build a model for the conditional variance. In the simplest form, the conditional variance is given by:
sigma_t^2 = Var(r_t | F_{t-1})
We know that the variance of r_t is of the form "E[(X - E[X])^2]", which in our case is E[(r_t - mu)^2 | F_{t-1}].
For the mean, we can use the fact that the mean equation should be very simple, perhaps just a constant, since this is asset returns we are talking about:
r_t = mu + a_t
Substituting mu = r_t - a_t into the variance equation:
sigma_t^2 = E[(r_t - (r_t - a_t))^2 | F_{t-1}]
sigma_t^2 = E[a_t^2 | F_{t-1}]
So, in the simple scenario where the mean equation is just a constant term plus the shock, the conditional variance equals the conditional expectation of the squared shock/error term.
Because the conditional expectation of a_t is 0, E[a_t^2 | F_{t-1}] is exactly the conditional variance of a_t, which gives the final result:
sigma_t^2 = Var(a_t | F_{t-1})
It should be immediately obvious what this result entails: we can change perspective from estimating the variance of the return series to estimating the variance of the shock series.
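The same derivation in compact form (assuming the constant-mean equation and E[a_t | F_{t-1}] = 0):

```latex
\begin{aligned}
\sigma_t^2 &= \operatorname{Var}(r_t \mid F_{t-1})
            = E\big[(r_t - \mu)^2 \mid F_{t-1}\big] \\
           &= E\big[a_t^2 \mid F_{t-1}\big]
            = \operatorname{Var}(a_t \mid F_{t-1})
            \qquad \text{since } E[a_t \mid F_{t-1}] = 0 .
\end{aligned}
```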
GARCH models belong to one category of models. What is this category, and what is the other category?
GARCH belongs to the category of models that use an exact (deterministic) function of past information to govern how sigma_t^2 evolves.
The other category is stochastic volatility models, where sigma_t^2 is itself governed by a stochastic equation with its own innovation.
what do we use to test for ARCH effects
We use the squared residuals of the mean equation. With the simple mean equation
r_t = mu + a_t
the residuals are obtained by solving for a_t:
a_t = r_t - mu
elaborate on testing for ARCH effects
there are 2 types of tests, and both use the squared residuals. The first is the Ljung-Box test applied to the squared residual series, with the null hypothesis that the first m autocorrelations of a_t^2 are all zero. If none of them is statistically significant, there is no evidence of ARCH effects; if the test rejects, we conclude that ARCH effects are present.
Recall that ARCH effects refer to conditional heteroskedastic patterns, meaning that the variance depends on previous levels. This specific test looks at the squares.
The other test is the ARCH-LM test (Engle's Lagrange multiplier test), which regresses a_t^2 on its own m lags and tests whether the lag coefficients are jointly zero. Both tests are sketched below.
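A sketch of both tests with statsmodels (the residual series a is a white-noise placeholder, so neither test should reject here; het_arch is statsmodels' implementation of the ARCH-LM test):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

# a: residuals from the mean equation (a stand-in series, just for illustration).
rng = np.random.default_rng(1)
a = rng.standard_normal(1000)

# Test 1: Ljung-Box on the SQUARED residuals. Small p-values => reject the null that the
# first m autocorrelations of a_t^2 are zero => evidence of ARCH effects.
print(acorr_ljungbox(a ** 2, lags=[12]))

# Test 2: Engle's ARCH-LM test (regress a_t^2 on its own lags, test the lags jointly).
lm_stat, lm_pval, f_stat, f_pval = het_arch(a, nlags=12)
print(f"ARCH-LM: LM = {lm_stat:.2f}, p-value = {lm_pval:.3f}")
```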
elaborate on conditional heteroskedasticity vs ARCH effects
conditional heteroskedasticity is a broad class covering all patterns where the variance has some structure that depends on earlier information.
ARCH effects, GARCH effects, ARCH-X effects etc. are examples of specific types of dependence. For instance ARCH, which is short for AutoRegressive Conditional Heteroskedasticity, describes a structure where the variance depends on past squared shocks.
ARCH effects specifically refer to cases where the conditional heteroskedasticity depends on the squared values of earlier shocks. An ARCH process produces exactly the structure where past squared shocks, weighted by coefficients, set the new level of variance. If the process is pure ARCH, or has ARCH tendencies, some of this structure can be captured with an ARCH model. We can also verify the presence of ARCH effects using either the Ljung-Box test or the ARCH-LM test, since both make use of the squared residuals.
do we have Var(r_t | prior) == Var(a_t | prior) for all kinds of mean equations?
Yes. As long as the conditional mean mu_t is a function of past information (F_{t-1}-measurable), subtracting it does not change the conditional variance, so Var(r_t | F_{t-1}) = Var(a_t | F_{t-1}).
what is a_t? where does it come from
the shock, i.e. the residual/innovation. It comes from the mean equation: a_t = r_t - mu_t.
what do we mean by “a_t = sigma_t epsilon_t”
it tells us what we assume about the residual.
We assume that the residual equals the volatility (standard deviation) multiplied by some randomness, where epsilon_t is an iid sequence with mean 0 and variance 1 (e.g. standard normal or standardized Student-t).
elaborate on ARCH model
it assumes 2 things (see the simulation sketch below):
1) shocks come from the interplay between volatility and white-noise randomness: a_t = sigma_t eps_t
2) sigma_t^2 = alpha_0 + alpha_1 a_{t-1}^2 + ... + alpha_m a_{t-m}^2, with alpha_0 > 0 and alpha_i >= 0
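A minimal ARCH(1) simulation illustrating both assumptions (parameter values are arbitrary illustrative choices): the variance for time t is set by the past squared shock, and the current shock is then sigma_t times white noise.

```python
import numpy as np

# ARCH(1): sigma_t^2 = alpha_0 + alpha_1 * a_{t-1}^2,  a_t = sigma_t * eps_t,  eps_t ~ iid N(0, 1)
rng = np.random.default_rng(42)
alpha0, alpha1 = 0.1, 0.6          # alpha_0 > 0, 0 <= alpha_1 < 1
n = 2000
a = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = alpha0 / (1 - alpha1)  # start at the unconditional variance
a[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = alpha0 + alpha1 * a[t - 1] ** 2        # variance set by the PAST shock only
    a[t] = np.sqrt(sigma2[t]) * rng.standard_normal()  # current shock = sigma_t * eps_t

# The shocks are (roughly) serially uncorrelated, but their squares are not:
print("lag-1 autocorrelation of a:  ", np.corrcoef(a[1:], a[:-1])[0, 1])
print("lag-1 autocorrelation of a^2:", np.corrcoef(a[1:] ** 2, a[:-1] ** 2)[0, 1])
```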
why do we need the a_t = sigma_t eps_t part?
We need to establish how the data is generated. At the end of this sits a recursive structure driven by white noise.
If we do not include it, we only have a model saying "if this, then that happens". However, when defining what the ARCH model assumes, we are describing the kind of process it assumes. Specifically, it assumes that a shock is the result of the volatility at that same time step multiplied by white noise.
The reason this assumption is included is that it gives us E[a_t | F_{t-1}] = 0 AND
Var(a_t | F_{t-1}) = sigma_t^2 (because epsilon_t has variance 1 and is independent of the past).
elaborate on what ARCH believes
we build some model for our return series, for instance "r_t = mu + a_t". Under the assumption of an ARCH process, a_t is the result of an interplay between the volatility at that time t and a random component. The random component ensures that we can never know for sure what happens, i.e. the process is not purely deterministic. So the asset return r_t can be understood as a mean component mu plus some volatility-driven oscillation around that level. ARCH assumes that when the return series is generated, it is always produced as the sum of the mean level and a shock whose size is set by the current volatility.
It is also worth noting why ARCH assumes the specific relationship between the shocks, sigma and epsilon:
Var(a_t | F_{t-1}) = Var(sigma_t eps_t | F_{t-1}) = sigma_t^2 Var(eps_t) = sigma_t^2 * 1 = sigma_t^2
remember that, conditional on F_{t-1}, sigma_t is known (a constant), so it comes out of the variance as sigma_t^2.
why is it nice that ARCH establishes that the variance of the shocks is equal to sigma_t^2?
because we know from before that the variance of returns and the variance of shocks are the same thing when conditioned on all prior information.
Therefore, when the ARCH process assumes that the shocks are generated according to sigma_t eps_t, it is equivalent to saying that the asset return series is generated with volatility sigma_t.
This is why it makes a lot of sense to include the data-generating assumption with the ARCH process. If we don't include it, we have no basis to work from.
important but subtle point regarding the shocks in the ARCH model
the current shock is not included in the variance equation: sigma_t^2 depends only on past squared shocks. This makes the model inherently recursive rather than circular: first sigma_t^2 is determined by the past, and only then is the current shock generated as a_t = sigma_t eps_t. Because of this, it makes sense to state the data-generating equation explicitly as well.