Notational terms Flashcards

(27 cards)

1
Q

Law of Iterated Expectations LIE

A

E(X)=E(E(X|Y))
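
A quick simulation sketch of the LIE (assuming Python with NumPy; the joint distribution of X and Y is made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: Y ~ N(0,1) and X = 2Y + noise, so E(X|Y) = 2Y
n = 1_000_000
y = rng.normal(size=n)
x = 2 * y + rng.normal(size=n)

print(x.mean())        # E(X), estimated directly
print((2 * y).mean())  # E(E(X|Y)), averaging the conditional mean 2Y
```

Both averages land near 0, as the identity predicts.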

2
Q

Covariance

A

Cov(X,Y)=E(XY)-E(X)E(Y)
Cov(X,Y)=Cov(Y,X)
Cov(X,a)=0 for any constant a

3
Q

Variance

A

Var(X)=Cov(X,X)=E(X^2)-(EX)^2
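
A numerical check of the covariance and variance identities in the two cards above (a sketch, assuming NumPy; population moments are approximated by sample averages):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Cov(X,Y) = E(XY) - E(X)E(Y), against NumPy's population covariance
print((x * y).mean() - x.mean() * y.mean(), np.cov(x, y, bias=True)[0, 1])

# Var(X) = E(X^2) - (EX)^2, against NumPy's variance
print((x ** 2).mean() - x.mean() ** 2, x.var())
```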

4
Q

Var(βj^|X)=

A

σ^2/[(1-Rj^2)Σi(xij-x̄j)^2], j=1,…,k, where Rj^2 is the R^2 from regressing xj on the other regressors

5
Q

Var(β^)=

A

E[(β^-β)(β^-β)'] = σ^2E[(X'X)^-1]

6
Q

S^kk=

A

((X'X)^-1)_(k,k), the k-th diagonal element of (X'X)^-1
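
Cards 4-6 describe the same object two ways, so they can be checked against each other; a sketch assuming NumPy, with made-up regressors and σ^2 set to 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)         # correlated regressors
X = np.column_stack([np.ones(n), x1, x2])
sigma2 = 1.0

# Matrix form (cards 5-6): sigma^2 * [(X'X)^-1]_(j,j) for j = 1 (slope on x1)
var_matrix = sigma2 * np.linalg.inv(X.T @ X)[1, 1]

# Card 4: sigma^2 / ((1 - R1^2) * SST1), with R1^2 from regressing
# x1 on the remaining regressors (intercept and x2)
Z = np.column_stack([np.ones(n), x2])
resid = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
r2 = 1 - (resid @ resid) / ((x1 - x1.mean()) @ (x1 - x1.mean()))
sst1 = ((x1 - x1.mean()) ** 2).sum()
var_formula = sigma2 / ((1 - r2) * sst1)

print(var_matrix, var_formula)  # agree up to floating-point error
```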

7
Q

standard error of the regression formula:

A

s^2 = (ε^'ε^)/(n-K); the SER itself is s = √(s^2). (The analogous sample variance for a single variable is Σ(Xi-X̄)^2/(n-1).)
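
A minimal sketch of the s^2 computation (assuming NumPy and simulated data whose true error variance is 1):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
e_hat = y - X @ beta_hat
s2 = (e_hat @ e_hat) / (n - K)  # s^2 = e'e / (n - K)
print(s2)                       # close to the true error variance, 1
```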

8
Q

Approximate distribution of β^:

A

β^ --> N(β, σ^2[E(xi'xi)]^-1/n) (asymptotically)

9
Q

Law of Large Numbers LLN

A

If x1,…,xn is a random sample with finite mean μ and variance σ^2, then the sequence of sample means converges in probability to μ, so X̄-->μ (in probability)

10
Q

Weak Law of Large Numbers WLLN

A

X̄-->μ (in probability) as n-->∞
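
A small simulation of the (W)LLN (assuming NumPy; the Exponential population is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
mu = 2.0  # population mean of an Exponential with scale 2

# Sample means for growing n drift toward mu
for n in (10, 1_000, 100_000):
    print(n, rng.exponential(scale=mu, size=n).mean())
```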

11
Q

Convergence in probability implies

A

convergence in distribution

12
Q

If Xn-->c (in distribution), where c is a constant, then

A

Xn-->c (in probability)

13
Q

Taylor Expansion:

A

E(g(wi,θ))≈E(g(wi,θ0))+E(∂g(wi,θ0)/∂θ’)(θ-θ0)
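
With the expectations stripped out, this is the usual first-order expansion; a minimal scalar sketch (assuming g(θ)=e^θ, chosen only for illustration):

```python
import numpy as np

# Expand g(theta) = exp(theta) around theta0 = 1
theta0, theta = 1.0, 1.05
exact = np.exp(theta)
linear = np.exp(theta0) + np.exp(theta0) * (theta - theta0)  # g(θ0) + g'(θ0)(θ-θ0)
print(exact, linear)  # close, because theta is near theta0
```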

14
Q

Jensen’s inequality:

A

E(g(h(y)))≤g(E(h(y))) when g is concave
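
A quick check (assuming NumPy, taking h as the identity and g=log, which is concave):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=100_000)

# E[log X] <= log E[X] for concave log
print(np.log(x).mean(), np.log(x.mean()))  # left side is smaller
```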

15
Q

The variance of the OLS estimator is increasing with … and …, decreasing with … and …

A

increasing with the variance of ε and Rj^2, decreasing with sample size and the sample variance of X

16
Q

The standard deviation of the sampling distribution of β^ is:

A

the standard error of β^

17
Q

An estimator is BLUE (Best Linear Unbiased Estimator) when:

A

Var(β0^)-Var(β^) is a positive semidefinite matrix for every other linear unbiased estimator β0^

18
Q

Var(aX+bY+c)=

A

Var(aX+bY)=a^2VarX+b^2VarY+2abCov(X,Y)

19
Q

Cov(aX+bY,cZ)=

A

acCov(X,Z) + bcCov(Y,Z)
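
Both linearity rules above can be verified on simulated data (a sketch assuming NumPy; the constants and dependence structure are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
x, y, z = rng.normal(size=(3, 100_000))
y = y + 0.3 * x  # correlate Y with X
z = z + 0.5 * x  # correlate Z with X
a, b, c = 2.0, -1.5, 0.7

def cov(u, v):
    return np.cov(u, v, bias=True)[0, 1]

# Var(aX + bY + c) = a^2 VarX + b^2 VarY + 2ab Cov(X,Y)
print(np.var(a * x + b * y + c),
      a ** 2 * x.var() + b ** 2 * y.var() + 2 * a * b * cov(x, y))

# Cov(aX + bY, cZ) = ac Cov(X,Z) + bc Cov(Y,Z)
print(cov(a * x + b * y, c * z),
      a * c * cov(x, z) + b * c * cov(y, z))
```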

20
Q

Correlation coefficient:

A

ϱ(X,Y)=Cov(X,Y)/√(VarX·VarY), which lies in [-1,1]; it is the normalized covariance
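
A sketch computing ϱ from the definition and comparing with NumPy's built-in:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)

rho = np.cov(x, y, bias=True)[0, 1] / np.sqrt(x.var() * y.var())
print(rho, np.corrcoef(x, y)[0, 1])  # same number, inside [-1, 1]
```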

21
Q

Var(X+Y)=

A

VarX+VarY+Cov(X,Y)+Cov(Y,X)=VarX+VarY+2Cov(X,Y)

22
Q

Slutsky’s theorem

A

Suppose Xn-->c (in probability), where c is a constant and h is a real-valued function continuous at c. Then h(Xn)-->h(c) (in probability)

23
Q

Xn converges in probability to a random variable X, if for any ε>0:

A

P(|Xn-X|>ε)-->0 as n-->∞

24
Q

Xn converges in distribution to a random variable X if:

A

Fn(x)-->F(x) for all x in R at which F is continuous, where Fn(x)=P(Xn≤x)

25
Q

Cramér Convergence Theorem:

A

1: Suppose Xn-->X (in distribution) and Yn-->c (in probability), where c is a constant; then Xn+Yn-->X+c, XnYn-->Xc, and (for c≠0) Xn/Yn-->X/c (in distribution)
2: If Xn-->X (in probability), then Xn-->X (in distribution); the converse is true if X is a constant
3: If Xn-Yn-->0 (in probability) and Xn-->X (in distribution), then Yn-->X (in distribution)
26
Q

Continuous Mapping Theorem (CMT)

A

Suppose that Xn-->X (in distribution) and let h be a continuous function on a set χ with P(X∈χ)=1. Then h(Xn)-->h(X) (in distribution)
27
Q

Central Limit Theorem (CLT)

A

√n(X̄-μ)=(1/√n)Σ(Xi-μ)-->N(0,σ^2) (in distribution)
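
A Monte Carlo sketch of the CLT (assuming NumPy; the Exponential population and the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
n, reps = 500, 20_000
mu, sigma2 = 2.0, 4.0  # Exponential(scale=2): mean 2, variance 4

# sqrt(n) * (Xbar - mu) across many replications
samples = rng.exponential(scale=mu, size=(reps, n))
stats = np.sqrt(n) * (samples.mean(axis=1) - mu)
print(stats.mean(), stats.var())  # approximately 0 and sigma^2 = 4
```

Histogramming stats would show the N(0, σ^2) bell shape.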