2. Goodness of fit Flashcards

1
Q

What are the three measures of variation?

A
  1. The total sum of squares (SST)
  2. Explained sum of squares (SSE)
  3. Residual sum of squares (SSR)
2
Q

What does the total sum of squares measure (SST)?

A

The total variation in the dependent variable, i.e. how much variation there is in y across the sample. SST measures how spread out the yi are in the sample.

3
Q

What does the explained sum of squares show (SSE)?

A

Represents the variation explained by the regression. SSE measures the sample variation in the ŷi

4
Q

What does the residual sum of squares show (SSR)?

A

The variation not explained by the regression. SSR measures the sample variation in the ûi (the OLS residuals).

5
Q

What does R^2 show?

A

R^2 measures the fraction of the total variation in y that is explained by the regression: R^2 = SSE/SST = 1 − SSR/SST.
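As a rough illustration (made-up numbers; NumPy assumed available), the three sums of squares and R^2 can be computed by hand for a simple OLS fit:

```python
import numpy as np

# Hypothetical sample (made up for illustration)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

# OLS slope and intercept for the simple regression of y on x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

y_hat = b0 + b1 * x              # fitted values
u_hat = y - y_hat                # residuals

SST = np.sum((y - y.mean()) ** 2)        # total variation in y
SSE = np.sum((y_hat - y.mean()) ** 2)    # variation explained by the regression
SSR = np.sum(u_hat ** 2)                 # unexplained variation

R2 = SSE / SST                   # equivalently 1 - SSR/SST
```

With an intercept in the model, SST = SSE + SSR holds exactly and the residuals sum to zero.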

6
Q

What is important to remember about R^2 as a measure of goodness of fit?

A

R^2 says nothing about internal validity or causality; it only measures how well the model fits the data.

7
Q

What can a low R^2 be interpreted as?

A

That a large share of the variation in y is left unexplained by the model (the model is not a great fit).

8
Q

What are the restrictions on how y and x can relate to the original explained and explanatory variables of interest?

A

As long as the model remains linear in the parameters, there are no restrictions (logs, for example, are allowed). The mechanics of estimation and inference do not depend on how y and x are defined.

9
Q

What is a log-level model?

A

A model in which the dependent variable enters as its natural logarithm while x enters in levels: log(y) = B0 + B1x + u.

10
Q

How does taking the log-level model change your model?

A

The mechanics of estimation stay exactly the same, but the interpretation of the coefficients changes.

11
Q

What is the model linear in, and why does this allow us to use a log-level model?

A

Linearity is required in the parameters, not in the variables, so transforming the variables (e.g. taking logs) is allowed.

12
Q

What does B1 in a level-level model measure?

A

B1 measures the change in y, in units of y, for a one-unit change in x.

13
Q

What does B1 in a Log-level model measure?

A

B1 measures the (approximate) proportionate change in y for a one-unit change in x, so 100·B1 is the percentage change in y.

14
Q

What does B1 in a Level-log model measure?

A

B1 measures the absolute change in y for a proportionate change in x: a 1% increase in x is associated with a change of B1/100 units in y.

15
Q

What does B1 in a log-log model measure?

A

B1 measures the elasticity of y with respect to x, that is, the percentage change in y for a given percentage change in x. A 1% increase in x is associated with a B1% change in y.
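The four functional forms from the cards above can be summarised in one place (the percentage interpretations are the usual textbook approximations):

```latex
\begin{array}{lll}
\textbf{Model} & \textbf{Equation} & \textbf{Interpretation of } \beta_1 \\
\text{level-level} & y = \beta_0 + \beta_1 x & \Delta y = \beta_1 \,\Delta x \\
\text{log-level} & \log(y) = \beta_0 + \beta_1 x & \%\Delta y \approx (100\,\beta_1)\,\Delta x \\
\text{level-log} & y = \beta_0 + \beta_1 \log(x) & \Delta y \approx (\beta_1 / 100)\,\%\Delta x \\
\text{log-log} & \log(y) = \beta_0 + \beta_1 \log(x) & \%\Delta y = \beta_1\,\%\Delta x \ \text{(elasticity)}
\end{array}
```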

16
Q

What type of variables are B0 hat and B1 hat, and what does this mean?

A

They are random variables: their values depend on the particular sample drawn. This means they have a sampling distribution, and hence a variance and an expectation.

17
Q

What is a fitted value?

A

By definition, each fitted value ŷi is on the OLS regression line. The OLS residual associated with observation i, ûi, is the difference between yi and its fitted value (equation 2.21 in the text). If ûi is positive, the line underpredicts yi; if ûi is negative, the line overpredicts yi.

18
Q

What is the point of the OLS estimates?

A

They are chosen to make the residuals add up to 0 for any data set (see slide 32 W1)

19
Q

What is the gap between the observed value and the fitted values line called?

A

The residual, ûi. (The gap between the observed value and the unobserved population regression line is the error, or disturbance, ui.)

20
Q

Why is a linear regression model known as linear if we can have non-linearities as in the log case?

A

The key is that this equation is linear in the parameters ß0 and ß1. There are no restrictions on how y and x relate to the original explained and explanatory variables of interest.

21
Q

What are the statistical properties of a simple linear regression model?

A
  1. Linear in Parameters
  2. Random sampling
  3. Sample variation in the explanatory variable
  4. Zero conditional mean
  5. Homoskedasticity
22
Q

What conditions need to be satisfied for a regression to be unbiased?

A
  1. Linear in Parameters
  2. Random sampling
  3. Sample variation in the explanatory variable
  4. Zero conditional mean
23
Q

In your own words, what is meant by unbiased?

A

The estimated coefficients may be smaller or larger depending on the particular random sample drawn, but on average (over repeated samples) they equal the values that characterise the true relationship between y and x in the population: the sampling distribution of ß^1 is centered about ß1.

24
Q

What will happen if you repeat the random sampling an infinite amount of times?

A

On average you will recover the true values: the mean of the estimates over the repeated samples equals the true population parameters.
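A minimal Monte Carlo sketch (assumed setup: true B0 = 1, B1 = 2, u ~ N(0,1), n = 50) showing that the average of the OLS slope estimates over many repeated samples is close to the true value, even though each individual estimate varies:

```python
import numpy as np

rng = np.random.default_rng(0)
b0_true, b1_true, n, reps = 1.0, 2.0, 50, 2000

slopes = []
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)
    u = rng.normal(0.0, 1.0, n)   # zero conditional mean holds by construction
    y = b0_true + b1_true * x + u
    b1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    slopes.append(b1_hat)

avg_slope = np.mean(slopes)       # close to b1_true: the estimator is unbiased
spread = np.std(slopes)           # individual samples still vary around b1_true
```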

25
Q

How is sampling variability measured?

A

By the estimators' variances.

26
Q

What is homoskedasticity?

A

The assumption that the error term u has constant variance: Var(u|x) = σ^2 for all values of x.

27
Q

What does u have the same variance as?

A

The error u has the same variance given any value of the explanatory variable.

28
Q

Where is heteroskedasticity present?

A

When Var(u|x) depends on x.

29
Q

If there is large variability in the unobserved factors, how will the coefficients be effected?

A

Large variance in the unobserved factors results in high sampling variability in the estimated coefficients.

30
Q

What is the difference between errors and residuals?

A

Errors show up in the equation containing the population parameters, on the other hand, the residuals show up in the estimated equation. The errors are never observed while the residuals are computed from the data

31
Q

What are standard errors and what do they measure?

A

The estimated standard deviations of the regression coefficients are called standard errors. They measure how precisely the regression coefficients are estimated (the sampling variation in beta).

32
Q

What are standard errors simply?

A

The standard error of a coefficient, SE(B^), tells us how much sampling variation there would be if we were to re-sample and re-estimate B.
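For the simple regression model, the slope's standard error takes the standard textbook form (under homoskedasticity):

```latex
\operatorname{se}(\hat{\beta}_1) = \frac{\hat{\sigma}}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}},
\qquad
\hat{\sigma}^2 = \frac{1}{n-2} \sum_{i=1}^{n} \hat{u}_i^2
```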

33
Q

What is the difference between the standard deviation and the standard error?

A

The standard deviation quantifies the variation within a set of measurements, whereas the standard error quantifies the variation in the means across multiple sets of measurements.

34
Q

Suppose you estimate using our sample that the slope coefficient of wage and education is 1, is this large enough to conclude that there is a relationship between wages and education or is 1 too close to 0 and instead is likely to be caused by sampling variation/ randomness?

A

It depends on the standard error. If the standard error is 0.2, the estimate is 5 standard errors away from 0, which is strong evidence of a relationship; but if the standard error is 2, the estimate is only 0.5 standard errors away from 0, which suggests a weak relationship that could be due to sampling variation.
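The arithmetic in this card as a tiny sketch (the numbers are the hypothetical ones from the card, not from real data):

```python
# Hypothetical estimated slope of 1 with two possible standard errors;
# t = coefficient / SE counts how many standard errors from zero it is.
b1_hat = 1.0

t_precise = b1_hat / 0.2   # 5 standard errors from zero: strong evidence
t_noisy = b1_hat / 2.0     # 0.5 standard errors from zero: weak evidence
```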

35
Q

How do we know for sure whether the number of standard errors away from zero is large or small?

A

The critical value! It tells us the cut-off number of standard errors needed for a sample coefficient to be statistically significant.

36
Q

Why are standard error viewed as random variables?

A

Because they are computed from the sample: they vary from sample to sample along with the estimates of B.

37
Q

Why are standard errors important?

A

The standard error of any estimate gives us an idea of how precise the estimator is

38
Q

What is a binary variable/ dummy variable?

A

x can take only two values, 0 and 1. It is used to put each unit of the population into one of two groups.

39
Q

What do dummy variables allow for?

A

The mean value of y to vary depending on the state of x
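A minimal sketch (made-up data; NumPy assumed) showing that with a 0/1 dummy regressor the OLS slope equals the difference in group means, i.e. the mean of y varies with the state of x:

```python
import numpy as np

# Hypothetical data: x = 0 (control) or 1 (treatment)
x = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
y = np.array([3.0, 4.0, 5.0, 7.0, 8.0, 9.0])

# OLS slope from the simple regression of y on the dummy
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Difference in group means: mean(y | x=1) - mean(y | x=0)
diff_in_means = y[x == 1].mean() - y[x == 0].mean()
```

b1 and diff_in_means coincide, which is why the dummy coefficient is read as the difference in the mean of y across the two groups.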

40
Q

What is the causal/ treatment effect?

A

The difference between a unit's two potential outcomes: the outcome if it is in the treatment group and the outcome if it is in the control group.

41
Q

What is the average treatment effect (ATE)?

A

The ATE is simply the average of the treatment effects across the entire population.

42
Q

What is the only way that the assumption that x is independent of u can be satisfied, and why?

A

This assumption can be guaranteed only under random assignment, whereby units are assigned to the treatment and control groups using a randomisation mechanism that ignores any features of the individual units.