Chapter 6 - Univariate time series modeling Flashcards
(50 cards)
define univariate time series modeling
A class of specifications that attempts to describe changes in a random variable using only information contained in its own past values, and possibly current and past values of an error term.
what does time series modeling contrast with
Structural models. Structural models, such as those estimated by OLS, are multivariate in nature.
is time series modeling theoretical?
Typically a-theoretical, meaning it is not common to base it on theory: you do not use financial theory to establish the structure of these models. This contrasts with structural models and the general-to-specific approach, which provide a way to model theoretical foundations.
if time series modeling doesn't use known theory, then what does it do?
It is based on using empirical observations to extract patterns.
what benefit can time series models give that structural models do not?
Out-of-sample forecasts.
what model is the goal of this chapter?
ARIMA
most important topic in time series modeling
Stationarity
why stationarity important
The behavior and properties of a time series hinge on whether it is stationary; it is a make-or-break property for modeling.
elaborate on stationarity
two types:
1) Strict
2) weak
A strictly stationary process is one where, for any set of times t1, t2, ..., tn and any shift k, the joint probability distribution of y_{t1}, ..., y_{tn} is identical to that of y_{t1+k}, ..., y_{tn+k}.
This entails that the probability of y falling in some specific interval is the same now as in any other period.
Weak stationarity is more observable and practical. Requirements are:
1) E(y_t) = mu
2) E[(y_t - mu)(y_t - mu)] = sigma^2 < infinity
3) E[(y_t1 - mu)(y_t2 - mu)] = gamma_(t2-t1) for all t1 and t2
These three state: constant mean, constant variance, and autocovariances that depend only on the lag t2 - t1 (a constant autocovariance structure).
elaborate on autocovariance structure
We refer to the lag-l autocovariance to describe the autocovariance between a variable and the same variable l lags earlier. In a stationary time series this depends only on l, not on time.
Thus, if we have a stationary time series, the autocovariance between y_1 and y_11 is the same as between y_10 and y_20 (both lag 10).
what is the autocovariance function?
gamma_s = E[(y_t - E[y_t])(y_(t-s) - E[y_(t-s)])]
This is the lag-s autocovariance function.
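The lag-s autocovariance above can be estimated from a sample; a minimal numpy sketch (the function name is illustrative, and it uses the conventional divide-by-T estimator):

```python
import numpy as np

def autocovariance(y, s):
    """Sample lag-s autocovariance: average of (y_t - ybar)(y_{t-s} - ybar)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    # divide by T (not T - s), the conventional biased estimator in time series
    return np.sum((y[s:] - ybar) * (y[:T - s] - ybar)) / T

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(autocovariance(y, 0))  # lag-0 autocovariance is the (biased) sample variance: 2.0
print(autocovariance(y, 1))  # 0.8
```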
elaborate on the interpretation of autocovariance
Not much on its own, since its magnitude depends on the units of measurement of y_t. Therefore, we use autocorrelation to give it meaning.
how do we go from autocovariance to autocorrelation?
Correlation is covariance divided by the product of the standard deviations. In a stationary time series both variances equal the lag-0 autocovariance gamma_0, so we can neatly write:
tau_l = gamma_l / gamma_0
elaborate on acf
The autocorrelation function.
When we plot the lag-s autocorrelations for all lags up to some lag k, we get what we call the ACF.
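The ACF values tau_s = gamma_s / gamma_0 can be computed directly; a minimal sketch (function name illustrative):

```python
import numpy as np

def acf(y, max_lag):
    """Autocorrelation function tau_s = gamma_s / gamma_0 for s = 0..max_lag."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    gamma0 = np.sum((y - ybar) ** 2) / T
    taus = []
    for s in range(max_lag + 1):
        gamma_s = np.sum((y[s:] - ybar) * (y[:T - s] - ybar)) / T
        taus.append(gamma_s / gamma0)
    return np.array(taus)

rng = np.random.default_rng(0)
y = rng.standard_normal(500)
a = acf(y, 5)
print(a)  # tau_0 is always 1; the remaining lags are near 0 for white noise
```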
elaborate on a white noise process
A purely random process with no discernible structure.
Has the following properties:
1) E[y_t] = mu
2) var[y_t] = sigma^2
3) gamma_{t-r} = sigma^2 if t = r, otherwise 0
So, a white noise series is completely memoryless: each observation is uncorrelated with all previous values.
If the y_t white noise series has mean 0 and follows a normal distribution, then the sample autocorrelation coefficients are approximately N(0, 1/T).
This is very useful, because given these properties, we can test whether a time series is white noise or not. Specifically, given a time series and an autocorrelation coefficient, we can test it against having zero mean and variance 1/T using a simple ratio of the coefficient to its standard error 1/sqrt(T).
Elaborate on the Q-statistic
The Q-statistic is given by:
Q = T * sum_{k=1}^{m} tau_k^2
where tau_k is the autocorrelation coefficient for lag k.
This statistic is asymptotically chi-squared distributed with degrees of freedom equal to the number of lags m in the sum.
This stems from the assumption that the sample autocorrelations are approximately normally distributed, so a sum of their squares gives a chi-squared variable.
The null hypothesis is that all m autocorrelations are 0, as it is a joint test. If even one is significantly non-zero, we reject the null, which means we have evidence that the series is NOT pure white noise.
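The Q-statistic above can be sketched in a few lines of numpy (function name illustrative; 18.31 is the 95th percentile of a chi-squared with 10 degrees of freedom):

```python
import numpy as np

def box_pierce(y, m):
    """Box-Pierce Q = T * sum_{k=1}^{m} tau_k^2; asymptotically chi^2(m) under H0."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    gamma0 = np.sum((y - ybar) ** 2) / T
    q = 0.0
    for k in range(1, m + 1):
        tau_k = np.sum((y[k:] - ybar) * (y[:T - k] - ybar)) / T / gamma0
        q += tau_k ** 2
    return T * q

rng = np.random.default_rng(1)
noise = rng.standard_normal(1000)
walk = np.cumsum(noise)  # random walk: heavily autocorrelated
# compare each Q to the chi^2 95th percentile with 10 d.o.f., 18.31:
print(box_pierce(noise, 10))  # usually below 18.31: fail to reject white noise
print(box_pierce(walk, 10))   # far above 18.31: reject white noise
```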
weakness of Q-statistic
It has poor small-sample properties: the chi-squared approximation is unreliable in small samples.
another name for Q-statistic
Box-Pierce test
can we improve Box-Pierce?
Ljung-Box:
Q* = T(T+2) * sum_{k=1}^{m} tau_k^2 / (T - k)
It performs better in small samples. Asymptotically, the two converge.
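The Ljung-Box variant only changes the weighting of each squared autocorrelation; a minimal sketch (function name illustrative, 18.31 is the chi-squared 95th percentile with 10 d.o.f.):

```python
import numpy as np

def ljung_box(y, m):
    """Ljung-Box Q* = T(T+2) * sum_{k=1}^{m} tau_k^2 / (T - k)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    gamma0 = np.sum((y - ybar) ** 2) / T
    total = 0.0
    for k in range(1, m + 1):
        tau_k = np.sum((y[k:] - ybar) * (y[:T - k] - ybar)) / T / gamma0
        total += tau_k ** 2 / (T - k)  # the (T - k) weight is the small-sample correction
    return T * (T + 2) * total

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(1000))  # random walk, clearly not white noise
print(ljung_box(walk, 10))  # far above the 18.31 critical value: reject white noise
```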
what is portmanteau test
A general class of tests that examines several hypotheses jointly; here, that several autocorrelation coefficients are simultaneously zero.
elaborate on using the fact that white noise autocorrelation has variance 1/T to make a test case for whether a single observed autocorrelation piece is insignificant or not
we'd derive this by starting at:
tau_hat ~ N(0, 1/T)
standardize:
(tau_hat - 0)/sqrt(1/T) ~ N(0, 1)
=> tau_hat = z * 1/sqrt(T), where z is a standard normal variable
We get the bound by finding the value that the standard normal variable exceeds in absolute value only 5% of the time, which is 1.96. So, if |tau_hat| > 1.96 * 1/sqrt(T), the coefficient is significantly different from zero at the 5% level, and the series is likely not white noise.
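The bound test above can be sketched directly; a hypothetical helper flagging which lags fall outside the +/- 1.96/sqrt(T) white-noise band:

```python
import numpy as np

def acf_significance(y, max_lag):
    """Flag sample autocorrelations outside the +/- 1.96/sqrt(T) white-noise band."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    gamma0 = np.sum((y - ybar) ** 2) / T
    bound = 1.96 / np.sqrt(T)
    flags = {}
    for k in range(1, max_lag + 1):
        tau_k = np.sum((y[k:] - ybar) * (y[:T - k] - ybar)) / T / gamma0
        flags[k] = abs(tau_k) > bound  # True => significant at the 5% level
    return flags

rng = np.random.default_rng(2)
walk = np.cumsum(rng.standard_normal(400))  # random walk: heavily autocorrelated
print(acf_significance(walk, 3))  # all three lags exceed the band
```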
what is power in hyp testing
P(reject H0 | H1 is true)
introduce moving average process
A linear combination of a white noise series.
A variable y_t depends on the current and previous values of the white noise process: y_t = mu + u_t + theta_1 u_{t-1} + ... + theta_q u_{t-q}.
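An MA(q) series can be simulated by drawing white noise and forming the linear combination; a minimal sketch (function name illustrative, zero mean assumed). For an MA(1) with theta = 0.5, the theoretical lag-1 autocorrelation is theta/(1 + theta^2) = 0.4, and the sample estimate lands close to it:

```python
import numpy as np

def simulate_ma(theta, T, seed=0):
    """Simulate a zero-mean MA(q): y_t = u_t + theta_1 u_{t-1} + ... + theta_q u_{t-q}."""
    rng = np.random.default_rng(seed)
    q = len(theta)
    u = rng.standard_normal(T + q)  # white noise, with q extra draws for the warm-up lags
    y = np.empty(T)
    for t in range(T):
        y[t] = u[t + q] + sum(theta[j] * u[t + q - 1 - j] for j in range(q))
    return y

y = simulate_ma([0.5], 2000)
ybar = y.mean()
gamma0 = np.sum((y - ybar) ** 2) / len(y)
tau1_hat = np.sum((y[1:] - ybar) * (y[:-1] - ybar)) / len(y) / gamma0
print(tau1_hat)  # close to the theoretical value 0.4
```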
discuss the lag operator
Ly_t = y_{t-1}
L^{i} y_t = y_{t-i}
Also referred to as “backshift operator”
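The lag operator is just a shift of the series; a trivial sketch (function name illustrative, with NaN padding where no earlier value exists):

```python
import numpy as np

def lag(y, i=1):
    """Apply the lag operator L^i: shift the series i places, NaN-padded at the front."""
    y = np.asarray(y, dtype=float)
    out = np.full_like(y, np.nan)
    out[i:] = y[:-i]  # element t of the output is y_{t-i}
    return out

y = np.array([10.0, 20.0, 30.0, 40.0])
print(lag(y, 1))  # [nan, 10., 20., 30.]
print(lag(y, 2))  # [nan, nan, 10., 20.]
```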