Lecture notes 1 (Two variable regression) Flashcards

1
Q

How do you calculate correlation?

What is a key feature of it?

A

Corr(X,Y) = Cov(X,Y) / sqrt(V(X) V(Y))

It is scale free.
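As a quick numerical check (an illustrative sketch, not from the notes; the data are made up), the formula and the scale-free property can be verified in Python:

```python
# Sketch: Corr(X, Y) = Cov(X, Y) / sqrt(V(X) * V(Y)), plus a check that it is
# scale free: rescaling X leaves the correlation unchanged.

def corr(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    cov = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys)) / (n - 1)
    vx = sum((a - xbar) ** 2 for a in xs) / (n - 1)
    vy = sum((b - ybar) ** 2 for b in ys) / (n - 1)
    return cov / (vx * vy) ** 0.5

x = [1, 2, 3, 4, 5]          # made-up data for illustration
y = [2, 4, 5, 4, 5]
r = corr(x, y)
r_scaled = corr([100 * xi for xi in x], y)   # X measured in different units
assert abs(r - r_scaled) < 1e-12             # scale free: same correlation
```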

2
Q

What is correlation?

A

The systematic relationship between two variables.

3
Q

How do you calculate the covariance of X and Y?

Show both methods

A

Sample: sum of (xi - xbar)(yi - ybar) / (n - 1)

Population: Cov(X,Y) = E[(X - E(X))(Y - E(Y))]
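The sample formula can be checked numerically. A minimal sketch (made-up data; the second version uses the algebraically equivalent shortcut sum(xy) - n*xbar*ybar rather than expectation notation):

```python
# Sketch: the sample covariance computed two equivalent ways.
x = [1, 2, 3, 4, 5]   # made-up data for illustration
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# method 1: products of deviations from the means
cov_dev = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / (n - 1)

# method 2: the equivalent shortcut sum(xy) - n*xbar*ybar
cov_short = (sum(a * b for a, b in zip(x, y)) - n * xbar * ybar) / (n - 1)

assert abs(cov_dev - cov_short) < 1e-12
```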

4
Q

When can we view correlation as a causal relationship? (in context)

A

In randomised experiments such as clinical trials, where people are randomly assigned to different groups.

5
Q

What two components is the model yi = α + βxi + εi made of?

A

yi = α + βxi + εi

α + βxi is the systematic part
εi is the random (error) part

6
Q

What are the 4 Classical Linear Regression assumptions in a model like:

yi = α + βxi + εi

A
  1. E(εi | xi) = E(εi) = 0 ∀i (the error term is independent of xi, with zero mean).
  2. V(εi | xi) = σ^2 ∀i (the error variance is constant (homoscedastic): points are distributed around the true regression line with a constant spread).
  3. cov(εi, εj | xi) = 0 for i ≠ j (the errors are serially uncorrelated over observations).
  4. εi | xi ∼ N(0, σ^2) ∀i (the errors are normally distributed).
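A small simulation makes the four assumptions concrete (a sketch; the parameter values alpha = 1, beta = 2, sigma = 0.5 are made up): every error is drawn independently from the same N(0, σ^2) distribution, regardless of x.

```python
import random

random.seed(0)                                 # reproducible draws
alpha, beta, sigma = 1.0, 2.0, 0.5             # assumed true values for illustration
x = [i / 10 for i in range(100)]
eps = [random.gauss(0, sigma) for _ in x]      # i.i.d. N(0, sigma^2), independent of x
y = [alpha + beta * xi + e for xi, e in zip(x, eps)]
```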
7
Q

When can we interpret a regression coefficient as a causal effect?

A

If E(εi | xi) = 0 (the error term is mean-independent of the regressor).

8
Q

What is the CLRM assumption of homoscedasticity?

A

That all error terms must have a constant variance for every different explanatory variable value.

9
Q

What does this CLRM assumption mean: cov(εi, εj | xi) = 0 for i ≠ j?

A

Information about the ith observation cannot be used to predict the jth.

Knowing the error εi of person i will not help in predicting the error εj of person j.

10
Q

What does the CLRM assumption εi | xi ∼ N(0, σ^2) ∀i mean?

A

-The error terms follow a normal distribution.
-This is assumed purely for mathematical convenience.

11
Q

What does the CLRM assumption E(εi | xi) = E(εi) = 0 ∀i mean?

A

For every value of x, the mean of εi is 0.
The error terms are independent of x.

12
Q

When estimating a simple regression line, what notation is used for the estimates of α, β and the error variance σ^2?

A

α is estimated by a
β is estimated by b
σ^2 is estimated by s^2

13
Q

How does the OLS estimation work?

A

yi = α + βxi + εi

yi is the real (observed) value
yhat is the predicted value

For OLS you choose a and b to minimise the difference between the real and predicted values:

-Add the squared differences across all observations and minimise the total.

-The optimal solution minimises the residual sum of squares.
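The idea can be sketched in code (made-up data; the closed-form a and b anticipate the slope formula from a later card): moving either coefficient away from the OLS solution can only increase the residual sum of squares.

```python
# Sketch: OLS picks (a, b) to minimise the residual sum of squares.
x = [1, 2, 3, 4, 5]   # made-up data for illustration
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def rss(a_, b_):
    """Residual sum of squares for candidate coefficients."""
    return sum((yi - (a_ + b_ * xi)) ** 2 for xi, yi in zip(x, y))

# any nearby (a, b) does at least as badly as the OLS solution
best = rss(a, b)
assert all(rss(a + da, b + db) >= best
           for da in (-0.1, 0.0, 0.1) for db in (-0.1, 0.0, 0.1))
```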

14
Q

What is e?

A

e is the observed residual: the difference between yi and yhat.

15
Q

When using OLS graphically what are we doing?

A

We draw a straight line, then look at how far each actual data point is from that line.

Square all these differences and then minimise their sum.

This finds the straight line that is closest to all the data points.

16
Q

How do we estimate the error variance in a regression?

A

s^2, the sample estimate of the error variance:

s^2 = 1/(n - 2) times the sum of squared residuals

= RSS / DOF
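On a made-up dataset (a sketch, not from the notes), the calculation looks like this; the divisor is n - 2 because both a and b are estimated from the same data:

```python
# Sketch: s^2 = RSS / (n - 2).
x = [1, 2, 3, 4, 5]   # made-up data for illustration
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))   # residual sum of squares
s2 = rss / (n - 2)                                          # RSS / degrees of freedom
```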

17
Q

How do we work out OLS estimator for slope coefficient?

A

b = sum of (xi - xbar)(yi-ybar) / sum of (xi-xbar)^2

18
Q

If you have an original model and then create a new model with all X multiplied by 2, what happens to b?

How do you solve for this?

A

The new b is half the initial b.

Substitute the relationship between the new and old x (x_new = 2x_old) into the OLS formula for b.

Then do the same for the intercept.
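This can be verified numerically (a sketch on made-up data): since b = cov(x, y)/var(x), doubling every x doubles the covariance but quadruples the variance, halving b.

```python
def slope(xs, ys):
    """OLS slope: sum of deviation products over sum of squared x deviations."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
            / sum((a - xbar) ** 2 for a in xs))

x = [1, 2, 3, 4, 5]   # made-up data for illustration
y = [2, 4, 5, 4, 5]
b_old = slope(x, y)
b_new = slope([2 * xi for xi in x], y)   # every x multiplied by 2
assert abs(b_new - b_old / 2) < 1e-12    # new slope is half the old one
```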

19
Q

How are the estimates under OLS described?

A

Best - the OLS estimators have the minimum variance, such that V (b|x) ≤ V (b∗|x), where b* is any alternative unbiased estimator.

Linear - the OLS estimators are a linear function of the error term

Unbiasedness- the OLS estimators are such that: E(a|x) = α and E(b|x) = β

Estimators (together: BLUE).

20
Q

What must you look out for when interpreting coefficients?

A

What type of model is it?

Does it have logs?

If a log coefficient is greater than 0.1, do we need the exact transformation exp(b) - 1?

21
Q

What is the value at which you can approximate log coefficients?

A

Coefficients of 0.1 or less can be approximated.

22
Q

What is the interpretation of the coefficient in y = β log(x)?

A

A 1% increase in X causes a β/100 unit increase in Y.

(For a large percentage change in X, use the exact change β × log(1 + %change/100).)

23
Q

What is the interpretation of coefficient with Y = BX

A

A 1 unit change in X causes a Beta unit increase in Y

24
Q

What is the interpretation of the coefficient in log(y) = βX?

A

When X increases by 1 unit, Y increases by approximately β × 100 %.

When β is greater than 0.1, (exp(β) - 1) × 100 gives the exact % change in Y.
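The gap between the approximation and the exact figure can be seen numerically (a sketch; the coefficient values are chosen for illustration):

```python
import math

# Sketch: for log(y) = a + b*x, a one-unit rise in x changes y by roughly
# b * 100 %, but the exact change is (e^b - 1) * 100 %.
for b in (0.05, 0.1, 0.5):
    approx = b * 100
    exact = (math.exp(b) - 1) * 100
    print(f"b={b}: approx {approx:.1f}%, exact {exact:.2f}%")
```

The two figures are close for small b but diverge sharply once b passes about 0.1.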

25
Q

How do you interpret the coefficient for a log log model?
log(y) = β log(x)

A

When X increases by 1%, Y increases by β%.
No exact adjustment is needed, as both variables are already in logs (β is an elasticity).

26
Q

What is another formula for the se(b)

A

se(b) = sqrt( s^2 / sum of (xi - xbar)^2 )
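A numerical sketch of the formula (made-up data):

```python
# Sketch: se(b) = sqrt( s^2 / sum of (xi - xbar)^2 ).
x = [1, 2, 3, 4, 5]   # made-up data for illustration
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
s2 = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
se_b = (s2 / sxx) ** 0.5
```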

27
Q

How do you perform a hypothesis test for the slope coefficient?

A
  1. H0: β = β0
    H1: β ≠ β0
  2. Take the significance level and DOF and find the critical values.
    DOF = n - number of estimated parameters (here n - 2)
  3. Calculate t = (b - β0) / se(b)
  4. Compare t with the critical value and either reject or fail to reject H0.
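The four steps can be sketched in code (made-up data; 3.182 is the two-sided 5% critical value for the t distribution with n - 2 = 3 degrees of freedom):

```python
# Sketch of the t-test for H0: beta = 0 against H1: beta != 0.
x = [1, 2, 3, 4, 5]   # made-up data for illustration
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
s2 = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
se_b = (s2 / sxx) ** 0.5

beta0 = 0.0                  # step 1: hypothesised value under H0
cv = 3.182                   # step 2: critical value, t(3), 5% two-sided
t = (b - beta0) / se_b       # step 3: test statistic
reject = abs(t) > cv         # step 4: decision
```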
28
Q

What is forecasting?
What is the issue and how is it solved?

A

Regression models give a single point prediction, but that exact value is unlikely to be correct.

Therefore, we use confidence intervals to give a range of values within which the outcome is likely to fall.

29
Q

How do you form a confidence interval?

A

Predicted value ± critical value × standard error.
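A minimal sketch of the formula (the prediction and standard error are assumed numbers; a full forecast interval would use the forecast-specific standard error, which also accounts for parameter uncertainty):

```python
# Sketch: interval = predicted value +/- critical value * standard error.
yhat = 5.0      # assumed point prediction (illustrative)
se = 0.4        # assumed standard error (illustrative)
cv = 1.96       # 95% critical value from the normal distribution
lower, upper = yhat - cv * se, yhat + cv * se
print(f"95% interval: ({lower:.3f}, {upper:.3f})")
```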

30
Q
  1. What is the measure of goodness of fit and what equation explains it?
  2. Break down each part of the equation
A

Measure of goodness of fit = R^2 = ESS/TSS or 1 - RSS / TSS

Sum from i=1 to n of (Yi - Ybar)^2 = Sum from i=1 to n of (Yhat_i - Ybar)^2 + Sum from i=1 to n of ei^2

Sum of (Yi - Ybar)^2 = total variation in the data (TSS)

Sum of (Yhat_i - Ybar)^2 = variation in the predicted values (ESS)

Sum of ei^2 = variation in the regression residuals (RSS)
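The decomposition can be verified numerically (a sketch on made-up data): with an intercept, TSS = ESS + RSS, so the two expressions for R^2 agree.

```python
# Sketch: TSS = ESS + RSS and R^2 = ESS/TSS = 1 - RSS/TSS.
x = [1, 2, 3, 4, 5]   # made-up data for illustration
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
yhat = [a + b * xi for xi in x]

tss = sum((yi - ybar) ** 2 for yi in y)                  # total variation
ess = sum((yh - ybar) ** 2 for yh in yhat)               # explained variation
rss = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))     # residual variation

assert abs(tss - (ess + rss)) < 1e-9
r2 = ess / tss
```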

31
Q

  1. Is it correct to say a model with a high R^2 is a good model?

  2. What is a better way to measure model adequacy?
A
  1. No, it just means the model explains a large share of the total variation. R^2 is also quite an informal measure, so we cannot put too much emphasis on it.

  2. Check whether the error terms are consistent with the CLRM assumptions.
32
Q

Go to lecture notes 1, final page, and explain all the different parts of the Stata output file.

A
33
Q

When do you interpret an intercept?

A

-When x = 0 is a feasible value of the explanatory variable.