Flashcards in L1 Deck (21)

1

## Slope of regression line = ?

### Expected effect on Y of a unit change in X

2

## How does an OLS estimator work?

### It minimises the average of the squared differences between the actual Yi values and the predicted (fitted) ones
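A minimal sketch of this idea on made-up numbers (the closed-form expressions below are the standard OLS solutions to the minimisation problem):

```python
# OLS sketch: choose beta0, beta1 to minimise the average squared
# difference between the actual Y values and the fitted ones.

def ols(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # beta1(hat) = sum of cross-deviations of X,Y over sum of squared deviations of X
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar  # the fitted line passes through (xbar, ybar)
    return b0, b1

# Hypothetical data generated as Y = 2 + 3X exactly, so OLS recovers (2, 3)
b0, b1 = ols([1, 2, 3, 4, 5], [5, 8, 11, 14, 17])
print(b0, b1)  # → 2.0 3.0
```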

3

## What is the interpretation of beta1(hat)?

### For each extra unit of X, Y is on average beta1(hat) units higher (or lower, if beta1(hat) is negative)

4

## What is the interpretation of beta0(hat)?

### It is the intercept (i.e. the predicted value of Y when X=0)

5

## Give an example of when the interpretation of beta0(hat) would not make sense?

### If regressing test scores on the student-teacher ratio, extrapolating the line outside the data predicts a very high score at X=0, even though a class with zero students per teacher could not actually produce a score!

6

## What does R^2 measure?

### The fraction of the variance of Y that is explained by X (ranges from 0 (none) to 1 (complete explanation))

7

## What does it mean for the explained sum of squares if R^2 is 0 or 1?

###
If 0, means ESS=0

If 1, means TSS=ESS
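The ESS/TSS relationship above can be sketched on made-up data (the helper refits OLS from scratch so the snippet is self-contained):

```python
# R^2 = ESS/TSS: the fraction of the variance of Y explained by X.

def r_squared(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
          / sum((a - xbar) ** 2 for a in x))
    b0 = ybar - b1 * xbar
    yhat = [b0 + b1 * a for a in x]
    ess = sum((f - ybar) ** 2 for f in yhat)  # explained sum of squares
    tss = sum((b - ybar) ** 2 for b in y)     # total sum of squares
    return ess / tss

# Perfect linear fit: every residual is zero, so ESS = TSS and R^2 = 1
print(r_squared([1, 2, 3], [2, 4, 6]))  # → 1.0

# Imperfect made-up data: R^2 lands strictly between 0 and 1
print(0 < r_squared([1, 2, 3, 4], [2, 3, 5, 4]) < 1)  # → True
```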

8

## When does R^2=r^2?

### When there is only one X variable

9

## What is r^2?

### The square of the correlation coefficient between X and Y

10

## What does the standard error of the regression measure (SER)?

### The magnitude of a typical regression residual in the units of Y

11

## What does the root mean squared error measure?

### Same as the SER; in large samples the measures are very close

12

## Briefly, what are the least squares assumptions?

###
1) The conditional distribution of the error given X has mean zero: E(u|X=x)=0

2) Observations are i.i.d. (Xi and Yi)

3) Large outliers in X and/or Y are rare

(not the same as the OLS assumptions!)

13

## When do we often encounter non i.i.d. sampling?

### TS data and Panel data

14

## What is the sampling uncertainty of an OLS estimator?

### A different sample of data would lead to a different beta1(hat) estimate

15

## How is the variance of Beta1(hat) related to the sample size?

### Inversely proportional; as sample size increases, variance decreases

16

## See: consistency, approximations and CLT

### now

17

## How does the variance of X affect the variance of Beta1(hat)?

###
The larger the variance of X, the smaller the variance of beta1(hat) (see notes)

If there is more variance in X, there is more information in the data, so the regression line is pinned down more precisely, and therefore the coefficient estimate has less variance
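The intuition above can be checked with a small simulation (a hypothetical setup with made-up numbers: true model Y = 2 + 3X + u, error standard deviation 1):

```python
import random
import statistics

# Hold n and the error variance fixed, vary the spread of X, and
# watch the sampling spread of beta1(hat) shrink.

def beta1_hat(x, y):
    xbar, ybar = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
            / sum((a - xbar) ** 2 for a in x))

def sd_of_slope(x_spread, reps=2000, n=50, seed=0):
    rng = random.Random(seed)
    slopes = []
    for _ in range(reps):
        x = [rng.gauss(0, x_spread) for _ in range(n)]
        y = [2 + 3 * xi + rng.gauss(0, 1) for xi in x]  # true beta1 = 3
        slopes.append(beta1_hat(x, y))
    return statistics.stdev(slopes)

# Doubling the spread of X roughly halves the spread of beta1(hat)
print(sd_of_slope(1.0) > sd_of_slope(2.0))  # → True
```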

18

## State the CLT?

### Suppose you have a set of i.i.d. observations with expected value 0 and variance sigma^2. Then, when n is LARGE, the sample mean (1/n)(sum of the observations) is approximately distributed N(0, sigma^2/n)
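A small simulation sketch of this statement (hypothetical parameters; uniform draws are rescaled so they have mean 0 and the required variance):

```python
import random
import statistics

# Draw i.i.d. observations with mean 0 and variance sigma^2; the
# sample mean should have variance close to sigma^2/n.

def var_of_sample_mean(n, sigma=2.0, reps=5000, seed=1):
    rng = random.Random(seed)
    means = []
    for _ in range(reps):
        # uniform(-a, a) has mean 0 and variance a^2/3, so a = sigma*sqrt(3)
        draws = [rng.uniform(-1, 1) * sigma * 3 ** 0.5 for _ in range(n)]
        means.append(sum(draws) / n)
    return statistics.variance(means)

n, sigma = 25, 2.0
v = var_of_sample_mean(n, sigma)
print(abs(v - sigma ** 2 / n) < 0.05)  # → True  (sigma^2/n = 0.16)
```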

19

## Explain LSA 1: The conditional distribution of the error given X has mean zero: E(u|X=x)=0?

###
For any given value of X, the mean of the error is zero!

Consider an ideal randomised controlled experiment: X is randomly assigned to people by a computer, therefore, since X is randomly assigned, all the other characteristics (captured in u) are also randomised, so u and X are independently distributed!

If an ideal randomised controlled experiment isn't in action, you must consider whether LSA 1 actually holds!

20

## Explain LSA 2: Observations (Xi, Yi) are i.i.d.? (i.e. explain each "i"?)

###
This arises automatically if the entities are sampled by simple random sampling!

The entities are selected from the same population, so (Xi, Yi) are identically distributed for all i; and since they are selected at random, the values (Xi, Yi) are also independently distributed, therefore i.i.d.

21