Formulas Flashcards

1
Q

y - y hat

A

e hat = y - y hat = y - b1 - b2x (the residual)

2
Q

b1

A

Intercept estimator (constant):
b1 = y bar - b2 x bar

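The residual and intercept formulas on the two cards above can be checked numerically; a minimal sketch in Python, where the x and y sample values are made up for illustration:

```python
import numpy as np

# Hypothetical sample data, for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.5, 3.9, 4.1, 5.5])

# Least squares slope and intercept
b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()        # b1 = y bar - b2 * x bar

# Residuals: e hat = y - y hat = y - b1 - b2*x
e_hat = y - b1 - b2 * x

# With an intercept in the model, OLS residuals sum to zero
print(b1, b2, e_hat.sum())
```

np.polyfit(x, y, 1) returns the same two coefficients (highest power first), which is a quick cross-check.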
3
Q

MSE

A

MSE(b) = E[(b - beta)^2] = var(b) + [bias(b)]^2

4
Q

var(b)

A

Check notes

5
Q

Cov

A

Check notes

6
Q

Var(b2)

A

var(b2) = sigma^2 / sum (x - x bar)^2

7
Q

Var(b1)

A

var(b1) = (sum x^2 / N) * var(b2) = sigma^2 * sum x^2 / (N * sum (x - x bar)^2)

8
Q

cov(b1,b2)

A

cov(b1, b2) = -x bar * var(b2)

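The three variance/covariance formulas above can be cross-checked against the matrix form sigma^2 * (X'X)^-1; a sketch assuming a known error variance and hypothetical x values:

```python
import numpy as np

# Hypothetical regressor values and a known error variance (assumptions)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma2 = 2.0
N = len(x)
sxx = np.sum((x - x.mean()) ** 2)

# Scalar formulas from the cards
var_b2 = sigma2 / sxx
var_b1 = sigma2 * np.sum(x ** 2) / (N * sxx)
cov_b1b2 = -x.mean() * var_b2

# The same quantities read off sigma^2 * (X'X)^{-1}
X = np.column_stack([np.ones(N), x])
V = sigma2 * np.linalg.inv(X.T @ X)

print(np.allclose([var_b1, var_b2, cov_b1b2], [V[0, 0], V[1, 1], V[0, 1]]))
```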
9
Q

Reject or fail to reject H0 when the p-value < significance level?

A

Reject null

10
Q

Reject or fail to reject H0 if the t value is greater than the critical (table) value, when testing H1: beta > c?

A

Reject

11
Q

Properties of OLS estimators

A

Unbiasedness: E(b) equals the true parameter value
Variance / standard error
Efficiency: smallest variance among linear unbiased estimators

12
Q

Se

A

se(b1) = sqrt(var(b1))
se(b2) = sqrt(var(b2))

13
Q

Gauss-Markov Theorem

A

Under assumptions SLR1-SLR5, the estimators b1 and b2 have the smallest variance among all linear unbiased estimators
Best Linear Unbiased Estimator (BLUE) of beta1 & beta2

14
Q

Central limit theorem

A

If assumptions SLR1-SLR5 hold and N is sufficiently large, then the least squares estimators have an approximately normal distribution

15
Q

Normalised distribution (Z)

A

Z = (X - mu) / sd
For b2: Z = (b2 - beta2) / sqrt(var(b2)) ~ N(0, 1)

16
Q

Type 1 error

A

Rejecting the null hypothesis when it is true

17
Q

Type 2 error

A

Failing to reject (accepting) the null hypothesis when it is false

18
Q

Forecast variance

A

Used to compare known values with forecasted values.
var(f) = sigma^2 * (1 + 1/N + (x0 - x bar)^2 / sum (xi - x bar)^2)

19
Q

Forecasted variance affected by

A

Depends on: the error variance sigma^2, the sample size N, how far x0 is from x bar, and the variation in the regressor, sum (x - x bar)^2
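The forecast-variance formula on card 18 can be checked against the equivalent matrix expression sigma^2 * (1 + x0'(X'X)^-1 x0); a sketch with hypothetical data, a known sigma^2, and an arbitrary forecast point x0 (all assumptions):

```python
import numpy as np

# Hypothetical data, known error variance, arbitrary forecast point
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma2 = 2.0
N = len(x)
x0 = 7.0
sxx = np.sum((x - x.mean()) ** 2)

# Card formula: var(f) = sigma^2 * (1 + 1/N + (x0 - x bar)^2 / sum (x - x bar)^2)
var_f = sigma2 * (1 + 1 / N + (x0 - x.mean()) ** 2 / sxx)

# Matrix check: var(f) = sigma^2 * (1 + x0' (X'X)^{-1} x0)
X = np.column_stack([np.ones(N), x])
x0_vec = np.array([1.0, x0])
var_f_mat = sigma2 * (1 + x0_vec @ np.linalg.inv(X.T @ X) @ x0_vec)

print(var_f, var_f_mat)
```

Both routes give the same number, and moving x0 further from x bar inflates it, matching card 19.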

20
Q

SSE

A

Sum of squared errors: the variation in y explained by factors other than x (the unexplained part); SSE = sum e hat^2

21
Q

SSR

A

Sum of squares due to regression: the variation in y explained by x; SSR = sum (y hat - y bar)^2

22
Q

MLR1 /SLR1

A

Linearity of the population model: y = beta1 + beta2 x + e

23
Q

MLR2 /SLR2

A

E(e|x) =0
Strict exogeneity

24
Q

MLR3/ SLR3

A

var(e|x) = sigma^2
Homoskedasticity: all error terms have the same variance

25
Q

MLR4 /SLR4

A

cov(e_i, e_j | x) = 0 for i != j
No autocorrelation: conditionally uncorrelated errors (no covariance between pairs of error terms)

26
Q

MLR5

A

No perfect multicollinearity: no regressor is an exact linear function of the others

27
Q

SLR5

A

x must take at least two different values (the regressor is not constant), so sum (x - x bar)^2 > 0

28
Q

SLR 6/MLR6

A

Error normality: e | x ~ N(0, sigma^2)

29
Q

SSE (multiple regression)

A

sum (y - b1 - b2 x2 - b3 x3)^2 = sum e hat^2

30
Q

sigma hat^2 (error variance estimator)

A

sigma hat^2 = sum e hat^2 / (N - K)

31
Q

Steps to hypothesis testing

A

Define the null & alternative hypotheses
Specify the test statistic and its distribution under the null hypothesis
Choose the significance level and determine the rejection region
Calculate the test statistic, then state the conclusion
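The steps above can be walked through in code for H0: beta2 = 0 against H1: beta2 != 0; a sketch with hypothetical data, where the 5% two-tailed critical value for t(3) is hard-coded rather than looked up in a table:

```python
import numpy as np

# Hypothetical sample (assumption, for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 2.5, 3.9, 4.1, 5.5])
N, K = len(x), 2

# Step 1: H0: beta2 = 0 vs H1: beta2 != 0
# Step 2: test statistic t = (b2 - 0) / se(b2), ~ t(N-K) under H0
sxx = np.sum((x - x.mean()) * (x - x.mean()))
b2 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b1 = y.mean() - b2 * x.mean()
e_hat = y - b1 - b2 * x
sigma2_hat = np.sum(e_hat ** 2) / (N - K)   # sigma hat^2 = sum e hat^2 / (N - K)
se_b2 = np.sqrt(sigma2_hat / sxx)
t_stat = (b2 - 0) / se_b2

# Step 3: 5% significance; two-tailed critical value for t(3) is about 3.182
t_crit = 3.182

# Step 4: state the conclusion
reject = abs(t_stat) > t_crit
print(t_stat, reject)
```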

32
Q

Pros & cons of R^2

A

Pros: unit-free, concise, bounded measure (between 0 and 1)
Cons: adding a regressor never reduces R^2, which encourages overfitting; the model must include an intercept

33
Q

Profit max firm

A

Marginal benefit = marginal cost (MB = MC)

34
Q

Non sample information

A

Restrictions on the parameters from outside the sample (e.g. economic theory); valid non-sample information improves the precision of the estimates

35
Q

F test

A

Assesses how big the loss of fit from imposing restrictions is, via the change in SSE:
F = ((SSE_R - SSE_U) / J) / (SSE_U / (N - K))

36
Q

What does it mean if SSE_R is much bigger than SSE_U?

A

Imposing restrictions always makes SSE_R at least as large as SSE_U; if SSE_R is much bigger, the restrictions cost a lot of fit, so the restricted variables do affect the model (reject the restrictions)
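The F statistic on these two cards can be made concrete with a small numeric example; the SSE values, J, N, and K below are all made up for illustration:

```python
# Hypothetical fit measures (assumptions, for illustration)
SSE_R = 120.0   # restricted model: some coefficients forced to zero
SSE_U = 100.0   # unrestricted model; SSE_R >= SSE_U always
J = 2           # number of restrictions
N, K = 40, 5    # sample size and parameters in the unrestricted model

# F = ((SSE_R - SSE_U) / J) / (SSE_U / (N - K))
F = ((SSE_R - SSE_U) / J) / (SSE_U / (N - K))
print(F)
```

A large F (relative to the F(J, N-K) critical value) means the restrictions cost a lot of fit, so they are rejected.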

37
Q

T test

A

t = (b - c) / se(b): the estimate minus the hypothesised value, divided by the standard error of the estimate

38
Q

Prove bias

A

E(b2|X) - beta2 = beta3 * gamma, where gamma is the slope from regressing the omitted variable on x2; nonzero means biased, zero means unbiased
Omitting a relevant variable violates MLR2
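The bias expression above can be verified exactly by construction: set the error to zero, generate y from a model containing x3, then regress y on x2 alone. The data values and coefficients below are hypothetical:

```python
import numpy as np

# Hypothetical regressors and true coefficients (assumptions)
x2 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x3 = np.array([2.0, 1.0, 4.0, 3.0, 6.0])   # omitted variable, correlated with x2
beta1, beta2, beta3 = 1.0, 2.0, 0.5
y = beta1 + beta2 * x2 + beta3 * x3        # error set to zero to isolate the bias

# Short regression of y on x2 only
sxx = np.sum((x2 - x2.mean()) ** 2)
b2 = np.sum((x2 - x2.mean()) * (y - y.mean())) / sxx

# gamma: slope of the auxiliary regression of x3 on x2
gamma = np.sum((x2 - x2.mean()) * (x3 - x3.mean())) / sxx

# The bias b2 - beta2 equals beta3 * gamma
print(b2 - beta2, beta3 * gamma)
```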

39
Q

Reset test

A

H0: no omitted variables (functional form is correct)
H1: omitted variables exist or the functional form is wrong; test by adding gamma1 y hat^2 + gamma2 y hat^3 to the regression and testing their joint significance

40
Q

Correlation

A

When regressors are related (collinear), the variances and standard errors of the estimators get bigger; you don't want highly related regressors or bad data, but a little correlation isn't an issue
Dropping a bad (collinear) variable makes the variances fall compared to before, at the risk of omitted-variable bias if the variable belonged in the model

41
Q

GLS

A

Transforms a heteroskedastic model into a homoskedastic one: e.g. when var(e_i) = sigma^2 * x_i, divide every term by sqrt(x_i), then apply OLS to the transformed model
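The 1/sqrt(x) transformation can be sketched in code, assuming the variance function var(e_i) = sigma^2 * x_i (that functional form, and the simulated data, are assumptions for illustration):

```python
import numpy as np

# Simulated data with errors whose variance grows with x (assumption)
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 200)
e = rng.normal(0.0, 1.0, 200) * np.sqrt(x)   # var(e_i) proportional to x_i
y = 1.0 + 2.0 * x + e                        # true intercept 1, slope 2

# Divide every term by sqrt(x): the transformed error has constant variance
w = 1.0 / np.sqrt(x)
ys = y * w
Xs = np.column_stack([w, x * w])             # transformed intercept and slope columns

# OLS on the transformed model = GLS on the original
b_gls, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(b_gls)
```

The estimates land close to the true (1, 2); per the Gauss-Markov logic, this transformed (GLS) estimator is more efficient than OLS on the original heteroskedastic model, matching card 43.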

42
Q

What problem does GLS address?

A

Under heteroskedasticity, OLS standard errors are biased and OLS is inefficient; GLS corrects this

43
Q

Properties of GLS

A

Greater efficiency than OLS
Transforms the model to be homoskedastic
Reduces estimator variance