Formulas Flashcards

(43 cards)

1
Q

y - ŷ (residual)

A

ê = y - ŷ = y - b1 - b2x

2
Q

b1

A

Intercept (constant):
b1 = ȳ - b2x̄

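The least squares formulas on the first two cards can be checked numerically. A minimal sketch in Python; the x and y values are made up purely for illustration:

```python
# OLS estimates for y = b1 + b2*x from the textbook formulas:
#   b2 = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
#   b1 = ybar - b2 * xbar
# The data below are illustrative only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.2, 5.9, 8.1, 9.8]

N = len(x)
xbar = sum(x) / N
ybar = sum(y) / N

num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
den = sum((xi - xbar) ** 2 for xi in x)
b2 = num / den                 # slope
b1 = ybar - b2 * xbar          # intercept: ybar - b2 * xbar

# residuals: e_hat = y - b1 - b2*x (card 1)
residuals = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
# with an intercept in the model, the residuals sum to (numerically) zero
```
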
3
Q

MSE

A

MSE(b) = E[(b - β)²] = var(b) + [bias(b)]²

4
Q

var(b)

A

Check notes

5
Q

Cov

A

Check notes

6
Q

Var(b2)

A

var(b2) = σ² / Σ(xᵢ - x̄)²

7
Q

Var(b1)

A

var(b1) = (Σxᵢ² / N) · var(b2)

8
Q

cov(b1,b2)

A

cov(b1, b2) = -x̄ · var(b2)

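The variance and covariance formulas on cards 6-8 (and the standard errors on card 12) can be verified together. A sketch assuming a known error variance σ², here set to 1 for illustration; the x values are made up:

```python
import math

# Sampling variances of the OLS estimators under SLR1-SLR5:
#   var(b2)     = sigma^2 / Sxx
#   var(b1)     = (sum(x^2) / N) * var(b2)
#   cov(b1, b2) = -xbar * var(b2)
# sigma2 and x are illustrative values.
sigma2 = 1.0
x = [1.0, 2.0, 3.0, 4.0, 5.0]

N = len(x)
xbar = sum(x) / N
Sxx = sum((xi - xbar) ** 2 for xi in x)    # sum of squared deviations

var_b2 = sigma2 / Sxx
var_b1 = (sum(xi ** 2 for xi in x) / N) * var_b2
cov_b1_b2 = -xbar * var_b2

# standard errors (card 12) are the square roots of the variances
se_b1 = math.sqrt(var_b1)
se_b2 = math.sqrt(var_b2)
```
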
9
Q

Reject or fail to reject H0 when the p-value < significance level?

A

Reject H0

10
Q

Reject or fail to reject H0 if the t value exceeds the critical (table) value, in a right-tail test of H1: β > c?

A

Reject

11
Q

Properties of OLS estimators

A

Unbiasedness: E(b) = β, the estimator is centred on the true parameter
Variance / standard error: a measure of the estimator's precision
Efficiency: smallest variance among linear unbiased estimators

12
Q

Se

A

se(b1) = √var(b1)
se(b2) = √var(b2)

13
Q

Gauss-Markov Theorem

A

Under assumptions SLR1-SLR5, the least squares estimators b1 and b2 have the smallest variance of all linear unbiased estimators
Best Linear Unbiased Estimator (BLUE) of β1 and β2

14
Q

Central limit theorem

A

If assumptions SLR1-SLR5 hold and N is sufficiently large, the least squares estimators have an approximate normal distribution

15
Q

Normalised distribution (Z)

A

Z = (X - μ) / σ
Z = (b2 - β2) / √var(b2)

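The standardisation on card 15 is a one-liner; the mean, standard deviation, and sample value here are made up for illustration:

```python
# Z = (X - mu) / sd: centre and scale so Z ~ N(0, 1).
# mu, sd, and x0 are illustrative values.
mu, sd = 10.0, 2.0
x0 = 13.0
z = (x0 - mu) / sd   # 1.5 standard deviations above the mean

# the same pattern standardises the slope estimator: (b2 - beta2) / sqrt(var(b2))
```
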
16
Q

Type 1 error

A

Rejecting the null hypothesis when it is true

17
Q

Type 2 error

A

Failing to reject the null hypothesis when it is false

18
Q

Forecast variance

A

Variance of the forecast error f = y0 - ŷ0 at a new point x0:
var(f) = σ²[1 + 1/N + (x0 - x̄)² / Σ(xᵢ - x̄)²]
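The forecast-error variance formula can be sketched numerically; σ², the sample, and the forecast point x0 are all illustrative values:

```python
# Forecast-error variance at a new point x0 (card 18):
#   var(f) = sigma2 * (1 + 1/N + (x0 - xbar)^2 / Sxx)
# sigma2, x, and x0 are made up for illustration.
sigma2 = 1.0
x = [1.0, 2.0, 3.0, 4.0, 5.0]
x0 = 6.0                        # point to forecast at

N = len(x)
xbar = sum(x) / N
Sxx = sum((xi - xbar) ** 2 for xi in x)

var_f = sigma2 * (1 + 1 / N + (x0 - xbar) ** 2 / Sxx)
# the further x0 is from xbar, the larger var(f): forecasts far from
# the centre of the data are less precise
```
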

19
Q

Forecasted variance affected by

A

The error variance σ², the sample size N, how far x0 is from x̄, and the variation in the regressor, Σ(xᵢ - x̄)²

20
Q

SSE

A

Sum of squared errors: the variation in y left unexplained by the regression, Σêᵢ²

21
Q

SSR

A

Regression sum of squares: the variation in y explained by the regressor(s), Σ(ŷᵢ - ȳ)²

22
Q

MLR1 / SLR1

A

Linearity of population model

23
Q

MLR2 /SLR2

A

E(e|x) =0
Strict exogeneity

24
Q

MLR3/ SLR3

A

var(e|x) = σ²
Homoskedasticity: all error terms have the same variance

25
Q

MLR4 / SLR4

A

cov(eᵢ, eⱼ|x) = 0
No autocorrelation: the errors are conditionally uncorrelated with each other
26
Q

MLR5

A

No perfect multicollinearity: no regressor is an exact linear function of the others
27
Q

SLR5

A

x must take at least two different values (there is variation in the regressor)
28
Q

SLR6 / MLR6

A

Error normality: e ~ N(0, σ²)
29
Q

SSE (multiple regression)

A

Σ(yᵢ - b1 - b2xᵢ2 - b3xᵢ3)² = Σêᵢ²
30
Q

σ̂² (estimated error variance)

A

σ̂² = Σêᵢ² / (N - K)
31
Q

Steps to hypothesis testing

A

1. Define the null and alternative hypotheses
2. Specify the test statistic and its distribution under the null hypothesis
3. Choose the significance level and determine the rejection region
4. Calculate the test statistic and state the conclusion
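The four steps can be sketched end to end as a t test on the slope. The data, the hypothesised value c, and the critical value are all illustrative; in practice the critical value comes from a t table with N - 2 degrees of freedom:

```python
import math

# t test of H0: beta2 = c against H1: beta2 != c, step by step.
# All numbers are made up for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.2, 5.9, 8.1, 9.8]
c = 0.0          # step 1: hypothesised slope under H0
t_crit = 3.182   # step 3: illustrative two-tail 5% critical value, df = 3

# step 2: test statistic t = (b2 - c) / se(b2), t-distributed under H0
N = len(x)
xbar, ybar = sum(x) / N, sum(y) / N
Sxx = sum((xi - xbar) ** 2 for xi in x)
b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
b1 = ybar - b2 * xbar
resid = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
sigma2_hat = sum(e ** 2 for e in resid) / (N - 2)   # estimated error variance
se_b2 = math.sqrt(sigma2_hat / Sxx)
t_stat = (b2 - c) / se_b2

# step 4: reject H0 if the statistic falls in the rejection region
reject = abs(t_stat) > t_crit
```
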
32
Q

Pros & cons of R²

A

Pros: unit-free, concise, bounded between 0 and 1
Cons: adding a regressor never reduces R², which encourages overfitting; the model must include an intercept
33
Q

Profit-maximising firm

A

Produces where marginal benefit = marginal cost (MB = MC)
34
Q

Non-sample information

A

Restrictions from outside the sample (e.g. economic theory) imposed on the model; correct non-sample information improves the precision of the estimates
35
Q

F test

A

Assesses how large the loss of fit from imposing restrictions is (the change in SSE):
F = [(SSE_R - SSE_U) / J] / [SSE_U / (N - K)]
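The F statistic is simple arithmetic once the two residual sums are in hand; the SSE values, J, N, and K below are made-up inputs:

```python
# F statistic for J restrictions (card 35):
#   F = ((SSE_R - SSE_U) / J) / (SSE_U / (N - K))
# All inputs are illustrative.
SSE_R = 120.0   # restricted model's sum of squared errors
SSE_U = 100.0   # unrestricted model's sum of squared errors
J, N, K = 2, 50, 5

F = ((SSE_R - SSE_U) / J) / (SSE_U / (N - K))
# SSE_R >= SSE_U always, so F >= 0; a large F means the restrictions
# cost a lot of fit and should be rejected
```
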
36
Q

What happens if SSE_R > SSE_U?

A

SSE_R ≥ SSE_U always; if the restricted SSE is much bigger, the restricted variables do affect the model, so reject the restrictions
37
Q

t test

A

t = (b - c) / se(b): the estimate minus the hypothesised value, divided by the standard error of the estimate
38
Q

Prove bias (omitted variable)

A

E(b2|X) - β2 = β3γ, where γ comes from regressing the omitted variable on the included one; if this equals 0 the estimator is unbiased. Omitting a relevant variable violates MLR2
39
Q

RESET test

A

H0: no omitted variables; H1: omitted variables exist or the functional form is wrong. Re-estimate the model with added terms γ1ŷ² + γ2ŷ³ and test their significance
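The RESET set-up can be sketched as far as building the augmented regressors; the data are illustrative, and the final joint test on the added terms is left as a comment:

```python
# RESET test set-up (card 39): fit the base model, then form
# y_hat^2 and y_hat^3 as extra regressors for the augmented model.
# Under H0 (no omitted variables) their coefficients are zero.
# Data are illustrative.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 4.2, 5.9, 8.1, 9.8]

N = len(x)
xbar, ybar = sum(x) / N, sum(y) / N
Sxx = sum((xi - xbar) ** 2 for xi in x)
b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
b1 = ybar - b2 * xbar

y_hat = [b1 + b2 * xi for xi in x]          # fitted values from the base model
y_hat_sq = [yh ** 2 for yh in y_hat]        # gamma1 * y_hat^2 term
y_hat_cu = [yh ** 3 for yh in y_hat]        # gamma2 * y_hat^3 term
# next: re-estimate y on (x, y_hat_sq, y_hat_cu) and F-test the added terms
```
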
40
Q

Correlation between regressors

A

When regressors are correlated, the variances and covariances of the estimators get bigger; you do not want strongly related regressors or bad data, but a little correlation is not a problem. Carrying an irrelevant correlated variable still leaves the estimator variances higher than they would be without it
41
Q

GLS

A

Turns a heteroskedastic model into a homoskedastic one: when var(eᵢ) = σ²xᵢ, transform by dividing each term by √xᵢ
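The 1/√x transformation is mechanical; a sketch assuming var(eᵢ) = σ²xᵢ with made-up data (x must be positive for the square root):

```python
import math

# GLS transformation for heteroskedasticity of the form var(e_i) = sigma^2 * x_i
# (card 41): divide every term of the model by sqrt(x_i), which gives the
# transformed error constant variance. Data are illustrative; x_i > 0 required.
x = [1.0, 2.0, 4.0, 5.0, 8.0]
y = [2.0, 4.1, 8.3, 9.9, 16.2]

w = [1.0 / math.sqrt(xi) for xi in x]        # weights 1/sqrt(x_i)
y_star = [yi * wi for yi, wi in zip(y, w)]   # y_i / sqrt(x_i)
x_star = [xi * wi for xi, wi in zip(x, w)]   # x_i / sqrt(x_i) = sqrt(x_i)
const_star = w                               # intercept column becomes 1/sqrt(x_i)
# then run OLS of y_star on (const_star, x_star); the transformed model
# has no separate intercept of its own
```
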
42
Q

What issues does GLS address (problems with OLS under heteroskedasticity)?

A

Biased standard errors
Heteroskedasticity
Inefficient estimates
43
Q

Properties of GLS

A

Greater efficiency than OLS
Makes the errors homoskedastic
Reduces the variance of the estimators