Notational terms Flashcards
(27 cards)
Law of Iterated Expectations LIE
E(X)=E(E(X|Y))
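A quick numeric sanity check of the LIE on a small discrete joint distribution (the pmf values are made up for illustration): compute E(X) directly, then again as E(E(X|Y)).

```python
# Discrete check of E(X) = E(E(X|Y)); the joint pmf p(x, y) is arbitrary.
p = {(0, 0): 0.1, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.4}

# Direct expectation E(X)
ex = sum(x * pr for (x, y), pr in p.items())

# Inner expectation E(X | Y = y), then average over the marginal of Y
py = {y: sum(pr for (x, y2), pr in p.items() if y2 == y) for y in (0, 1)}
ex_given_y = {y: sum(x * pr for (x, y2), pr in p.items() if y2 == y) / py[y]
              for y in (0, 1)}
ex_via_lie = sum(ex_given_y[y] * py[y] for y in py)

assert abs(ex - ex_via_lie) < 1e-12  # the two routes agree
```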
Covariance
Cov(X,Y)=E(XY)-E(X)E(Y)
Cov(X,Y)=Cov(Y,X)
Cov(X,a)=0
Variance
Var(X)=Cov(X,X)=E(X^2)-(EX)^2
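Both moment identities above can be verified on a small made-up sample (using the population formulas, dividing by n):

```python
# Check Cov(X,Y) = E(XY) - E(X)E(Y) and Var(X) = E(X²) - (E(X))²
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 1.0, 4.0, 3.0]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Definition: average product of deviations from the means
cov_def = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
# Moment form: E(XY) - E(X)E(Y)
cov_moment = sum(x * y for x, y in zip(xs, ys)) / n - mx * my

var_def = sum((x - mx) ** 2 for x in xs) / n
var_moment = sum(x * x for x in xs) / n - mx ** 2

assert abs(cov_def - cov_moment) < 1e-12
assert abs(var_def - var_moment) < 1e-12
```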
Var(βj^|X)=
σ^2/((1-Rj^2)Σ(xij-x̄j)^2), j=1,…,k, where Rj^2 is the R^2 from regressing xj on the other regressors
Var(β^|X)=
E[(β^-β)(β^-β)'|X] = σ^2(X'X)^-1
S^kk=
((X’X)^-1)_(k,k)
standard error of the regression formula:
s^2 = (ε^'ε^)/(n-K); its square root s is the standard error of the regression. (By contrast, the sample variance of a single variable is Σ(Xi-X̄)^2/(n-1).)
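The OLS cards above (s², Var(β̂|X), and the S^kk standard errors) can be tied together in a few lines of numpy; the design matrix and outcome below are made up for illustration.

```python
import numpy as np

# Small fixed design (intercept plus one regressor) and outcome vector
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0, 4.0])
n, K = X.shape

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
resid = y - X @ beta_hat                        # residuals ε^
s2 = resid @ resid / (n - K)                    # s² = ε^'ε^ / (n - K)
var_beta = s2 * np.linalg.inv(X.T @ X)          # estimated Var(β^ | X)
se = np.sqrt(np.diag(var_beta))                 # sqrt of the S^kk diagonal entries
```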
Approximate distribution of β^:
β^ –> N(β, σ^2[E(xi'xi)]^-1/n) (asymptotically)
Law of Large Numbers LLN
If x1,…,xn is a random sample with finite mean μ and variance σ^2, then the sequence of sample means converges in probability to μ, so X̄–>μ (in probability)
Weak Law of Large Numbers WLLN
X̄–>μ (in probability) when n–>∞
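The WLLN can be illustrated by simulation: as n grows, the sample mean of Uniform(0,1) draws (true mean μ = 0.5) settles near μ. The seed is arbitrary, chosen only for reproducibility.

```python
import random

random.seed(0)  # arbitrary seed for reproducibility
mu = 0.5        # true mean of Uniform(0, 1) draws
deviations = {}
for n in (10, 1_000, 100_000):
    xbar = sum(random.random() for _ in range(n)) / n
    deviations[n] = abs(xbar - mu)  # |X̄ - μ| shrinks as n grows
```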
Convergence in probability implies
convergence in distribution
If Xn–>c (in distribution), where c is a constant, then
Xn–>c (in probability)
Taylor Expansion:
E(g(wi,θ))≈E(g(wi,θ0))+E(∂g(wi,θ0)/∂θ’)(θ-θ0)
Jensen’s inequality:
E(g(h(y)))≤g(E(h(y))) when g is concave
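A small discrete check of Jensen's inequality with the concave choice g = log, on equally likely values chosen arbitrarily:

```python
import math

# For concave g, E(g(X)) ≤ g(E(X)); here g = log on four equally likely values
xs = [1.0, 2.0, 4.0, 8.0]
e_x = sum(xs) / len(xs)                         # E(X)
e_log = sum(math.log(x) for x in xs) / len(xs)  # E(log X)
assert e_log <= math.log(e_x)                   # Jensen for concave g
```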
The variance of the OLS estimator is increasing with … and …, decreasing with … and …
increasing with the variance of ε and Rj^2, decreasing with sample size and the sample variance of X
The standard deviation of the sampling distribution of β^ is:
the standard error of β^
An estimator is BLUE (Best Linear Unbiased Estimator) when:
var(β0^)-var(β^) is a positive semidefinite matrix for every other linear unbiased estimator β0^
Var(aX+bY+c)=
a^2Var(X)+b^2Var(Y)+2abCov(X,Y) (the additive constant c does not affect the variance)
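The variance rule can be checked numerically on a made-up sample (population formulas, dividing by n); note the constant c drops out of the left-hand side.

```python
# Check Var(aX + bY + c) = a²Var(X) + b²Var(Y) + 2ab·Cov(X, Y)
xs = [1.0, 2.0, 4.0, 5.0]
ys = [3.0, 1.0, 2.0, 6.0]
a, b, c = 2.0, -1.0, 7.0
n = len(xs)

def mean(v):
    return sum(v) / n

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / n

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / n

z = [a * x + b * y + c for x, y in zip(xs, ys)]
lhs = var(z)
rhs = a * a * var(xs) + b * b * var(ys) + 2 * a * b * cov(xs, ys)
assert abs(lhs - rhs) < 1e-9  # identity holds; c contributed nothing
```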
Cov(aX+bY,cZ)=
acCov(X,Z) + bcCov(Y,Z)
Correlation coefficient:
ϱ(X,Y)=Cov(X,Y)/√(Var(X)Var(Y)), which lies in [-1,1]; it is the normalized covariance
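Computing the correlation coefficient as the normalized covariance on a small made-up sample, and checking it lands in [-1, 1]:

```python
import math

# ϱ(X,Y) = Cov(X,Y) / sqrt(Var(X)·Var(Y)); population formulas, dividing by n
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.5, 1.0, 3.5, 3.0]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n

rho = cov / math.sqrt(vx * vy)
assert -1.0 <= rho <= 1.0  # normalization bounds the coefficient
```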
Var(X+Y)=
Var(X)+Var(Y)+Cov(X,Y)+Cov(Y,X) = Var(X)+Var(Y)+2Cov(X,Y)
Slutsky’s theorem
Suppose Xn–>c (in probability), where c is a constant and h is a real-valued function continuous at c. Then h(Xn)–>h(c) (in probability)
Xn converges in probability to a random variable X, if for any ε>0:
P(|Xn-X|>ε)–>0 as n–>∞
Xn converges in distribution to a random variable X if:
Fn(x)–>F(x) for all x in R at which F is continuous, where Fn(x)=P(Xn≤x)