Chapter 2 Asymptotic theory Flashcards

1
Q

Continuous mapping theorem

A

If x_n converges (in probability or in distribution) to x and g is a continuous function, then g(x_n) converges, in the same mode, to g(x).
Note that matrix inversion is a continuous function (at nonsingular matrices), which is how the CMT is typically applied.

2
Q

Slutsky’s lemma

A

If x_n converges in distribution to x and y_n converges in probability to a constant c, then:
x_n + y_n converges in distribution to x + c, and
y_n * x_n converges in distribution to c * x.
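A minimal simulation sketch of Slutsky's lemma (the constant c, sample sizes, and seed are illustrative assumptions, not from the deck): x_n is asymptotically N(0,1), y_n converges in probability to c = 2, so the product y_n * x_n behaves like N(0, c^2) = N(0, 4).

```python
import numpy as np

# Hypothetical illustration of Slutsky: x_n ->d N(0,1), y_n ->p c,
# hence y_n * x_n ->d c * N(0,1) = N(0, c^2).
rng = np.random.default_rng(5)
c, n, reps = 2.0, 1_000, 10_000
u = rng.normal(size=(reps, n))
x_n = np.sqrt(n) * u.mean(axis=1)        # ->d N(0,1) (here exactly N(0,1))
y_n = (u**2).mean(axis=1) + (c - 1.0)    # ->p c, since the sample mean of u^2 ->p 1
prod = y_n * x_n
print(prod.var())                        # close to c**2 = 4
```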
3
Q

How does Slutsky’s lemma apply to a matrix times rv?

A

If A_n converges in probability to a constant matrix A and x_n converges in distribution to x, then A_n x_n converges in distribution to A x. In particular, if x ~ N(0, Sigma), then A_n x_n converges in distribution to N(0, A Sigma A').
4
Q

Kolmogorov (strong) Law of large numbers

A

If z_i is i.i.d. with E|z_i| < infinity, the sample average converges almost surely (and hence in probability) to the mean E[z_i].

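A minimal simulation sketch of the strong LLN (the distribution, mean, seed, and sample sizes are illustrative assumptions, not from the deck): i.i.d. draws with mean mu have a sample average that settles down at mu as n grows.

```python
import numpy as np

# Hypothetical illustration of the Kolmogorov (strong) LLN:
# for i.i.d. z_i with E|z_i| < infinity, the sample average -> E[z_i] a.s.
rng = np.random.default_rng(0)
mu = 1.5
for n in [100, 10_000, 1_000_000]:
    z = rng.exponential(scale=mu, size=n)  # i.i.d. draws with E[z_i] = mu
    print(n, z.mean())                     # approaches mu as n grows
```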
5
Q

Stationarity

A

The (joint) distribution is invariant over time: shifting every index by the same amount does not change the distribution.

6
Q

Ergodicity

A

Two groups of observations become asymptotically independent as the time separation between them goes to infinity; ergodicity restricts how persistent the memory of the process can be.

7
Q

Ergodic theorem

A

The LLN with "i.i.d." replaced by "stationary and ergodic": the sample average of a stationary ergodic process with finite mean converges almost surely to that mean.

8
Q

LLN to CLT 1 - Classical Lindeberg-Levy CLT

A

z_i is i.i.d. with finite mean mu and variance Sigma; then sqrt(n) (zbar - mu) converges in distribution to N(0, Sigma).
In this case, the restriction that ergodicity places on dependence is not strong enough for the classical LL-CLT, which is why i.i.d. is assumed.

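A simulation sketch of the Lindeberg-Levy CLT (the uniform draws, n = 500, and the number of replications are illustrative assumptions): across replications, the standardized sample mean sqrt(n)(zbar - mu) has variance close to sigma^2 = 1/12.

```python
import numpy as np

# Hypothetical check of the classical CLT: for i.i.d. z_i with mean mu and
# variance sigma^2, sqrt(n)*(zbar - mu) is approximately N(0, sigma^2).
rng = np.random.default_rng(1)
n, reps = 500, 20_000
z = rng.uniform(0.0, 1.0, size=(reps, n))   # mu = 0.5, sigma^2 = 1/12
t = np.sqrt(n) * (z.mean(axis=1) - 0.5)     # one standardized mean per replication
print(t.var())                              # close to 1/12
```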
9
Q

LLN to CLT 2- Martingale Difference CLT

A

A martingale difference sequence (E[g_i | past] = 0) has no serial correlation. If g_i is a stationary ergodic (SE) MDS with finite variance matrix, the MD-CLT applies: sqrt(n) gbar converges in distribution to N(0, E[g_i g_i']).
Note: the conditional variance remains unrestricted, so conditional heteroskedasticity is allowed.

10
Q

Linear regression assumptions (AT): 1

A

Linearity: y_i = x_i'beta + epsilon_i

11
Q

Linear regression assumptions (AT): 2

A

Stochastic assumption:
{y_i, x_i} is jointly stationary and ergodic.

12
Q

Linear regression assumptions (AT): 3

A

Predeterminedness assumption: E[x_i epsilon_i] = 0

13
Q

Linear regression assumptions (AT): 4

A

Sigma_xx = E[x_i x_i'] is nonsingular (asymptotic full-rank condition)

14
Q

Linear regression assumptions (AT): 5

A

g_i = x_i epsilon_i is an MDS with variance S = E[x_i x_i' epsilon_i^2], where S is a nonsingular variance-covariance matrix

15
Q

Consistency in OLS with Asymptotic Theory. Which assumptions were needed?

A

Assumptions 1 to 4.
Linearity gives the sampling-error formula b - beta = (1/n sum x_i x_i')^{-1} (1/n sum x_i epsilon_i); the stochastic assumption (stationarity and ergodicity) lets the ergodic theorem apply to both sample averages; predeterminedness makes the second average converge in probability to zero; and nonsingularity of Sigma_xx lets us invert the limit (via the CMT).
Thus, with infinite data we would recover beta: b converges in probability to beta.

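A simulation sketch of OLS consistency under Assumptions 1-4 (the design, true coefficients, and sample sizes are illustrative assumptions): as n grows, b approaches beta.

```python
import numpy as np

# Hypothetical check that b = (X'X)^{-1} X'y ->p beta when the regressors
# are predetermined (E[x_i eps_i] = 0) and Sigma_xx is nonsingular.
rng = np.random.default_rng(2)
beta = np.array([1.0, 2.0])
for n in [100, 100_000]:
    x = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant + one regressor
    eps = rng.normal(size=n)                               # orthogonal to x_i
    y = x @ beta + eps
    b = np.linalg.solve(x.T @ x, x.T @ y)                  # OLS coefficients
    print(n, b)                                            # approaches beta
```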
16
Q

Since we don’t have infinite data, we will have uncertainty. Convergence with finite data: CLT

A

Under Assumptions 1 to 5, OLS is asymptotically normal: sqrt(n) (b - beta) converges in distribution to N(0, Avar(b)), with Avar(b) = Sigma_xx^{-1} S Sigma_xx^{-1}.

17
Q

Since we do not know the asymptotic variance, we need an estimator for it. First, build the estimator for S

A

Shat = (1/n) sum_i x_i x_i' e_i^2, where e_i = y_i - x_i'b are the OLS residuals. Shat converges in probability to S under an additional finite fourth-moment condition on the regressors.

18
Q

What is the limiting distribution of b?

A

b itself converges in probability to beta (a degenerate limit); the informative statement is that sqrt(n) (b - beta) converges in distribution to N(0, Sigma_xx^{-1} S Sigma_xx^{-1}), so in large samples b is approximately N(beta, Avar(b)/n).

19
Q

What is the Robust SE for b?

A

SE*(b_k) = sqrt( [Sxx_hat^{-1} Shat Sxx_hat^{-1}]_kk / n ), i.e. the square root of the k-th diagonal element of Avar-hat(b)/n. This is the heteroskedasticity-robust (White) standard error.

20
Q

What is the limit of the robust t^* test for scalar hypothesis?

A

t* converges in distribution to N(0,1). We need the CMT for Avar-hat to converge to Avar, and Slutsky's lemma to combine the asymptotically normal numerator with the consistent denominator.
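A sketch of the heteroskedasticity-robust (White) variance estimator and t-ratios behind the last few cards; the data-generating process, sample size, and seed are illustrative assumptions, not from the deck.

```python
import numpy as np

# Hypothetical computation of Avar-hat = Sxx^{-1} Shat Sxx^{-1} with
# Shat = (1/n) sum_i x_i x_i' e_i^2, and the robust t-ratios.
rng = np.random.default_rng(3)
n = 50_000
x = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.normal(size=n) * (1.0 + 0.5 * np.abs(x[:, 1]))  # conditional heteroskedasticity
beta = np.array([0.0, 1.0])
y = x @ beta + eps
b = np.linalg.solve(x.T @ x, x.T @ y)          # OLS coefficients
e = y - x @ b                                  # OLS residuals
Sxx = x.T @ x / n
Shat = (x * (e**2)[:, None]).T @ x / n         # (1/n) sum x_i x_i' e_i^2
avar = np.linalg.inv(Sxx) @ Shat @ np.linalg.inv(Sxx)
se = np.sqrt(np.diag(avar) / n)                # robust standard errors
t = (b - beta) / se                            # robust t-ratios, approx N(0,1) at true beta
print(se, t)
```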

21
Q

How do we test linear hypothesis? what is the limiting distribution of this test?

A

With the Wald statistic: for H0: R beta = r, W = n (Rb - r)' [R Avar-hat(b) R']^{-1} (Rb - r). Under H0, W converges in distribution to chi^2(#r), where #r is the number of restrictions.

22
Q

What does specification testing tell us

A

If x_i epsilon_i is an MDS and x_i includes a constant, then epsilon_i is itself an MDS and therefore serially uncorrelated, which gives a testable implication of the model.

23
Q

What is autocovariance? And autocorrelation? How do we test MDS (H0)?

A

Assume z_i is stationary. The j-th autocovariance is gamma_j = Cov(z_i, z_{i-j}) and the j-th autocorrelation is rho_j = gamma_j / gamma_0.
H0: rho_1 = ... = rho_p = 0, where p is a fixed integer. If we do not reject H0, there is no evidence of serial correlation.

24
Q

What are the estimators for autocovariance and autocorrelation? are they consistent? If MDS, what do they converge to?

A

gammahat_j = (1/n) sum_{i=j+1}^{n} (z_i - zbar)(z_{i-j} - zbar) and rhohat_j = gammahat_j / gammahat_0. Both are consistent under stationarity and ergodicity. If z_i is an MDS satisfying a suitable fourth-moment condition, sqrt(n) (rhohat_1, ..., rhohat_p)' converges in distribution to N(0, I_p).

25
Q

Propose a statistic to test serial correlation. What happens to it if there is serial correlation?

A

The Box-Pierce statistic Q = n * sum_{j=1}^{p} rhohat_j^2, which converges in distribution to chi^2(p) under H0. Under serial correlation some rhohat_j converges to a nonzero rho_j, so Q diverges to infinity and the test rejects.
Note: this test statistic doesn't always converge as we want it to.
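A sketch of the Box-Pierce statistic under H0 (white-noise data; n, p, and the seed are illustrative assumptions): with no serial correlation, Q should look like a typical chi^2(p) draw rather than a large value.

```python
import numpy as np

# Hypothetical Box-Pierce statistic Q = n * sum_{j=1}^{p} rhohat_j^2,
# approximately chi^2(p) when z_i is a stationary MDS.
rng = np.random.default_rng(4)
n, p = 5_000, 4
z = rng.normal(size=n)                         # white noise, so H0 holds
zc = z - z.mean()
gamma0 = (zc @ zc) / n                         # sample variance (gammahat_0)
rho = np.array([(zc[j:] @ zc[:-j]) / n / gamma0 for j in range(1, p + 1)])
Q = n * np.sum(rho**2)                         # compare with chi^2(p) critical values
print(Q)
```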
26
Q

Since we can't observe epsilon_i, how can we apply the Box-Pierce statistic (BPS)?

A

We can replace rho-tilde (based on epsilon_i) with rho-hat (based on the residuals e_i) if we can replace gamma-tilde with gamma-hat. Substituting e_i for epsilon_i in the sample autocovariance, one easily finds that gamma-hat converges in probability to the same limit as gamma-tilde. But we still need
27
Q

Modified BPS

A

Q* = n * rhohat' Phihat^{-1} rhohat, where Phi^{-1} is a standardizing matrix (Phihat estimates the asymptotic variance of sqrt(n) rhohat); under H0, Q* converges in distribution to chi^2(p).