Flashcards in L2 Deck (20)

1

## What assumptions are used to show that β(hat1) is approximately normally distributed?

### 3 LSAs and the assumption that n is large!

2

## Diagrammatically, what is the p-value?

### Under the null hypothesis, the probability in the tails of the t-statistic's (approximately normal) distribution outside ±|t(actual)|. If the p-value is less than 5%, we can reject the null at the 5% significance level

3

## Roughly how big does n need to be to count as 'large'?

### 50+

4

## A 95% confidence interval is...? (2 definitions, just need one of them!)

###
1) the set of points that cannot be rejected at the 5% SL.

2) a set-valued function of the data that contains the TRUE PARAMETER VALUE 95% of the time in repeated samples
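The second definition can be checked by simulation: in repeated samples, a 95% CI for the OLS slope should contain the true parameter roughly 95% of the time. A minimal numpy sketch (all numbers here are illustrative assumptions, not from the course):

```python
import numpy as np

# Repeated-sampling check of CI coverage for the OLS slope.
# Homoskedastic errors and a true slope of 2 are assumed for illustration.
rng = np.random.default_rng(0)
true_beta1, n, reps = 2.0, 100, 2000
covered = 0
for _ in range(reps):
    x = rng.normal(size=n)
    u = rng.normal(size=n)                    # homoskedastic errors
    y = 1.0 + true_beta1 * x + u
    xd = x - x.mean()
    b1 = (xd @ y) / (xd @ xd)                 # OLS slope estimate
    resid = y - y.mean() - b1 * xd
    se = np.sqrt((resid @ resid) / (n - 2) / (xd @ xd))
    if b1 - 1.96 * se <= true_beta1 <= b1 + 1.96 * se:
        covered += 1
coverage = covered / reps                     # should be close to 0.95
```

The CI is b1 ± 1.96·SE, and the fraction of samples whose interval covers the true slope comes out near 0.95.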

5

## What regression error does Stata report in parentheses?

### Root MSE

6

## In Stata, what is: a) _cons, b) P>|t|, and c) the 't' column?

###
a) constant (ie. beta0)

b) p-value

c) t-statistic for β(i)=0

7

## How do we interpret the coefficient of a binary variable? Equation showing this?

###
β(1) is the difference in average Y between the X=1 group and the X=0 group (ie. the effect of 'being a man' on 'wage rate')

Can be described by the equation:

β(1) = E(Yi|Xi=1) - E(Yi|Xi=0)

Therefore β(1) is the population difference in group means
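The sample analogue of this identity holds exactly: regressing Y on a dummy gives a slope equal to the difference in sample group means. A quick numpy sketch (the data here are simulated for illustration):

```python
import numpy as np

# With a binary regressor, the OLS slope equals the difference
# in sample means between the two groups -- exactly, not just
# approximately. Effect size 3 is an illustrative assumption.
rng = np.random.default_rng(1)
d = rng.integers(0, 2, size=200).astype(float)   # binary X
y = 10.0 + 3.0 * d + rng.normal(size=200)        # outcome, true effect = 3
dd = d - d.mean()
beta1 = (dd @ y) / (dd @ dd)                     # OLS slope on the dummy
mean_diff = y[d == 1].mean() - y[d == 0].mean()  # difference in group means
# beta1 and mean_diff agree up to floating point
```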

8

## What are homoskedastic errors?

###
Errors whose variance does not depend on the X variable(s)

ie. var(u|X=x) = σ^2 (ie. constant) (if not, errors are heteroskedastic)

9

## Check

### Check that I can tell from data whether errors are homo- or heteroskedastic

10

## Advantage and disadvantage of the homo-only SE formula?

###
Pro: simple

Con: only works for homo errors

11

## What are Robust SEs?

### Heteroskedasticity-robust standard errors: valid whether the errors are homoskedastic or heteroskedastic

12

## Why is it better to always use robust SEs?

### The robust formula is valid either way, and under homoskedasticity the two formulas coincide when n is large anyway!
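Both slope-SE formulas can be written out directly; on simulated homoskedastic data with large n they nearly coincide, as the card claims. A numpy sketch (data and sample size are illustrative assumptions; the robust version uses an HC1-style n/(n-2) correction, as Stata's `, robust` option does):

```python
import numpy as np

# Compare homoskedasticity-only and robust SEs for the slope
# in a simple regression with homoskedastic simulated errors.
rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # homoskedastic errors

xd = x - x.mean()
b1 = (xd @ y) / (xd @ xd)                # OLS slope
resid = y - y.mean() - b1 * xd

# Homoskedasticity-only: s^2 / sum((x - xbar)^2)
se_homo = np.sqrt((resid @ resid) / (n - 2) / (xd @ xd))
# Robust (HC1-style): sum(xd^2 * resid^2) / (sum(xd^2))^2, scaled by n/(n-2)
se_robust = np.sqrt(n / (n - 2) * (xd**2 @ resid**2) / (xd @ xd) ** 2)
# Under homoskedasticity and large n, se_homo and se_robust nearly coincide
```

Under heteroskedastic errors (eg. error variance growing with x) only `se_robust` would remain valid.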

13

## What are the extended LSAs?

###
Original 3 plus two more:

4) u is homoskedastic

5) u is distributed N(0,σ^2)

14

## Why do we need to be careful with LSAs 4 and 5?

### They are more restrictive than LSAs 1-3, therefore apply to fewer cases (but give the opportunity to prove stronger results)

15

## What is the Gauss-Markov Theorem?

### Under LSAs 1-4 (ie. originals plus homoskedasticity), β(hat1) has the smallest variance among all linear conditionally unbiased estimators (ie. it is BLUE/efficient) (don't think I need proof of this, page 36 if I do need it)

16

## Efficiency of OLS under all 5 LSAs? Explain.

###
β(hat1) has the smallest variance of all consistent estimators (ie. linear or non-linear combinations of Yi) as n->infinity

This means that OLS is the best you can do among consistent estimators, and since non-consistent ones aren't really an option, OLS is really the best you can do!

17

## 3 problems with OLS estimation? (2 efficiency assumption points + one other)

###
1) GM Theorem isn't that compelling (result is only for linear estimators, homo often doesn't hold)

2) 5LSA efficiency requires homo and normal errors, which is often not plausible!

3) OLS is more sensitive to outliers than some other estimators

18

## Possible estimator to use if there are large outliers in OLS?

### Least Absolute Deviations (LAD) estimator (see notes)
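The notes aren't reproduced here, but one common way to compute the LAD fit is iteratively reweighted least squares, since LAD minimises the sum of |residuals| rather than squared residuals. A hedged numpy sketch on illustrative data with one large outlier (the data, outlier size, and iteration count are all assumptions for the demo):

```python
import numpy as np

# OLS vs LAD on data with one large outlier. LAD is fit by
# iteratively reweighted least squares: weight 1/|residual|,
# floored to avoid division by zero.
x = np.arange(1.0, 51.0)
y = 2.0 * x                        # true line: slope 2, intercept 0
y[-1] += 500.0                     # one large outlier

X = np.column_stack([np.ones_like(x), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

beta = beta_ols.copy()
for _ in range(200):               # IRLS iterations
    r = y - X @ beta
    w = 1.0 / np.maximum(np.abs(r), 1e-6)
    W = X * w[:, None]             # diag(w) @ X
    beta = np.linalg.solve(X.T @ W, X.T @ (w * y))

ols_slope, lad_slope = beta_ols[1], beta[1]
# ols_slope is pulled well above 2 by the outlier; lad_slope stays near 2
```

The outlier drags the OLS slope up substantially, while the LAD slope stays close to the true value of 2, which is the sense in which LAD is less sensitive to outliers.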

19

## What can we conclude if all 5 assumptions hold?

###
1) β(hat0) and β(hat1) are exactly normally distributed (even in small samples)

2) The t-statistic (computed with the homoskedasticity-only SE) has a Student t-distribution with n-2 degrees of freedom

20