Hierarchical Linear Regression and Model Comparison Flashcards

1
Q

What method of model comparison did we focus on in this unit?

A

Distinguishing between predictor variables of interest and other control/confounding variables. That’s called hierarchical regression. In the end the goal is the same: to adjust our model to best explain our observations based on theory.

2
Q

What type of model is it when all predictor variables are treated equally and there is just one model?

A

A simultaneous model

3
Q

What is a hierarchical regression essentially?

A

A series of related multiple regressions that are compared by seeing what changes when you add predictors to the previous models

AKA model comparison

4
Q

What information would you garner from looking at the Model Summary table for a regression such as a hierarchical regression?

A

You would be interested in the R squared and Adjusted R squared values.

5
Q

What information would you garner from looking at the ANOVA table for a regression such as a hierarchical regression?

A

This table tells us whether our model has predictive utility. More specifically, whether the predictors (X) collectively account for a statistically significant proportion of variance in the Y variable.

If the p value in the ANOVA table is less than .05, the model as a whole is significant.

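(For a concrete feel for this, here is a minimal sketch in Python using statsmodels, with simulated data and hypothetical predictors; the overall F test printed at the end is the same test reported in the ANOVA table.)

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                           # two hypothetical predictors
y = X @ np.array([0.5, -0.3]) + rng.normal(size=100)    # hypothetical DV

model = sm.OLS(y, sm.add_constant(X)).fit()
# Overall F test: do the predictors collectively account for
# a significant proportion of variance in y?
print(model.fvalue, model.f_pvalue)   # model is significant if p < .05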
6
Q

What information would you garner from looking at the Coefficients table for a regression such as a hierarchical regression?

A

In the coefficients table,

The unstandardised coefficient (B) details the role each individual predictor plays in the regression model. It indicates the predicted change in the DV associated with a 1-unit change in that predictor AFTER controlling for the effects of all other predictors in the model.

So, say we see the unstandardised coefficient for negative affect is .5, and there is another predictor, anxiety, in the model. The .5 is saying that after controlling for the EFFECTS of anxiety, a 1-unit increase in negative affect gives a predicted .5-unit increase in the DV.

The standardised coefficient/beta (b) details the predicted change in SDs in the DV associated with a 1 SD change in the relevant predictor, after controlling for the effects of the remaining predictors in the model.

Finally, the t statistics and sig. levels tell us whether the predictors account for a SIGNIFICANT proportion of UNIQUE variance in the DV - unique variance being variance that cannot be explained by the OTHER predictors in the model. So, say you see a (non-significant) sig. value of .5 next to negative affect, another predictor is caffeine, and the DV is sleep quality. That says negative affect cannot account for variance in sleep quality BEYOND that which is already explained by caffeine.

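(A minimal sketch of where these numbers come from, assuming Python/statsmodels and the card’s hypothetical variables; SPSS’s Coefficients table reports the same B, t and sig. values.)

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
# hypothetical data matching the card's example
df = pd.DataFrame({"negative_affect": rng.normal(size=n),
                   "anxiety": rng.normal(size=n)})
df["sleep_quality"] = (0.5 * df["negative_affect"]
                       - 0.3 * df["anxiety"]
                       + rng.normal(size=n))

m = smf.ols("sleep_quality ~ negative_affect + anxiety", data=df).fit()
print(m.params)   # unstandardised B: predicted change in the DV per 1-unit
                  # change, holding the other predictor constant
print(m.tvalues)  # t statistics
print(m.pvalues)  # sig. levels: does each predictor explain UNIQUE variance?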
7
Q

What information would you garner from looking at the Part and Partial Correlations for a regression such as a hierarchical regression?

A

The Part is the semi-partial correlation. This is a really important value: squaring it gives the proportion of variance uniquely attributable to that predictor. Say part = .182 for certainty; then .182 squared ≈ .033, so 3.3% of the variance in sleep quality can be uniquely attributed to certainty.

So R squared would decrease by .033 (the squared part correlation, not .182 itself) if certainty was removed from the model.

In SPSS these appear as extra columns in the Coefficients table.

The PARTIAL correlation is the correlation between the predictor and the DV with the effects of the other predictors removed from both.

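(A sketch of the “R squared drop” interpretation, assuming Python/statsmodels; certainty, caffeine and sleep_quality are the card’s hypothetical variables.)

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 150
df = pd.DataFrame({"certainty": rng.normal(size=n),
                   "caffeine": rng.normal(size=n)})
df["sleep_quality"] = (0.4 * df["certainty"]
                       - 0.2 * df["caffeine"]
                       + rng.normal(size=n))

full = smf.ols("sleep_quality ~ certainty + caffeine", data=df).fit()
reduced = smf.ols("sleep_quality ~ caffeine", data=df).fit()

# Squared semi-partial (part) correlation for certainty:
# the unique variance it explains, i.e. the drop in R^2 when it is removed.
sr2 = full.rsquared - reduced.rsquared
print(sr2)   # a part correlation of .182 would correspond to about .033 here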
8
Q

What does the Adjusted R squared figure represent in regression when looking at the Model Summary table?

A

It provides a more accurate estimate of the true extent of the relationship between the predictor variables and the DV. It offers a better estimate of the population R squared.

9
Q

How do R squared and adjusted R squared differ?

A

The difference is often referred to as SHRINKAGE. Adjusted R squared is more conservative due to the risk that regression has of overfitting the data.

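(The adjustment is simple to compute by hand; a sketch using the standard formula, where n is the sample size and k the number of predictors.)

def adjusted_r_squared(r2, n, k):
    # penalises R^2 for the number of predictors k relative to sample size n
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# shrinkage: the adjusted value falls below the raw R^2
print(adjusted_r_squared(0.30, 100, 3))   # about 0.278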
10
Q

What is adjusted R squared essentially saying regarding replication with samples drawn from the same population?

A

If we were to replicate this study many times with samples drawn from the same population we would, on average, account for X% of the variance in the DV with predictors X and X.

11
Q

If in a HR, the coefficient for a covariate becomes smaller or non-significant, what does this show?

A

That the effect of that variable - the unique variance it accounts for in the DV - diminishes once the other predictors are controlled for.
So comparing the regression coefficients across models tells us how much smaller the relationships are after accounting for potential confounding factors/covariates.

12
Q

So you have a HR ANOVA output in front of you and you want to see how each predictor variable is influencing the DV as it is added to the model. What would you look at?

A

R2 change

13
Q

Why would you look at R2 change?

A

To see the exact amount of additional variance in the DV explained by the model when you add the next predictor/covariate.

14
Q

How would we work out which model is better - which information would we look at?

A

By looking at R2 change and F change (in the Model Summary table).

Look at the R2 change for each model in turn, starting with the first model, to see how much extra variance each one explains and whether that change is significant.

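(A minimal sketch of this comparison, assuming Python/statsmodels with hypothetical variables; anova_lm performs the F-change test between two nested models.)

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({"caffeine": rng.normal(size=n),
                   "negative_affect": rng.normal(size=n)})
df["sleep_quality"] = (-0.4 * df["caffeine"]
                       - 0.3 * df["negative_affect"]
                       + rng.normal(size=n))

m1 = smf.ols("sleep_quality ~ caffeine", data=df).fit()                    # Model 1
m2 = smf.ols("sleep_quality ~ caffeine + negative_affect", data=df).fit()  # Model 2 (nested)

print(m2.rsquared - m1.rsquared)  # R2 change: extra variance explained by Model 2
print(sm.stats.anova_lm(m1, m2))  # F change and its significance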
15
Q

If the model’s sum of squares (as seen in the ANOVA summary table) is much greater than the residual or error sum of squares, what does this mean?

A

It is a good model

16
Q

If the model (regression) sum of squares, as seen in the ANOVA summary table, is much greater than the residual, this indicates it is a good model. But what would we need to see to ensure this outcome is better than chance?

A

A significant P value! So less than .05.

17
Q

In terms of the F change value for Model 1 (relative to the null model) and the F statistic as seen in the ANOVA table for Model 1, would we expect these F statistics to be the same?

A

Yes - with no earlier predictors in the model, the change from the null model is the whole model, so the two F values match.

18
Q

To measure the magnitude of the significant change in variance explained across the models, what value do we look at?

A

R2 Change.

19
Q

For VIF, what do you want to see for multicollinearity not being an issue?

A

< 10

20
Q

For Tolerance, what do you want to see for multicollinearity not being an issue?

A

> 0.1

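(A sketch of checking both diagnostics at once, assuming Python/statsmodels; since tolerance = 1/VIF, VIF < 10 corresponds to tolerance > .1.)

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
n = 100
df = pd.DataFrame({"x1": rng.normal(size=n)})
df["x2"] = 0.5 * df["x1"] + rng.normal(size=n)   # moderately correlated predictors

X = sm.add_constant(df[["x1", "x2"]])
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    # rule of thumb: VIF < 10 and tolerance > .1 -> multicollinearity not an issue
    print(name, "VIF:", round(vif, 2), "Tolerance:", round(1 / vif, 2))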
21
Q

When looking at the residuals descriptives for the model, do we look at the standardised ones or the unstandardised?

A

Standardised

22
Q

What do the residuals show us in the descriptives?

A

The deviations of the ACTUAL observed data points from what our model predicted they would be.

We want to see normally distributed standardised residuals.

23
Q

Is hierarchical regression a stepwise regression?

A

No, because each ‘step’ in the process is a NEW model rather than a step. HR means a series of models organised hierarchically. In HR each model includes everything that was in the previous model, so it is truly hierarchical and nested.

24
Q

What do the df tell us in HR?

A

They remain accurate to the number of tests we ran to arrive at that final model.

25
Q

Looking at each variable’s coefficient in the models and how it changes tells us how much variance in the DV is accounted for by that predictor variable - true or false

A

True

26
Q

True or false:

each ‘step’ or model progressively includes more predictors as well as previous predictors in ____

A

TRUE: a Hierarchical Regression

27
Q

Model comparisons, including looking at ____, allow for examination of additional variance accounted for by added predictors?

A

R2

28
Q

Why would we compare how coefficients change?

A

To see how a particular predictor’s effect is influenced by adding additional predictors - which are typically covariates

29
Q

True or false: all models are an approximation of reality/wrong

A

True. A model will never be exactly equal to reality.

30
Q

Why do we not want to overfit things?

A

If we become obsessed with goodness of fit, or with finding a model that perfectly fits our sample, we run the risk of not being able to generalise to the population.

31
Q

What is advantageous about parsimonious models?

A

They are simple and therefore have great explanatory power: they explain the data with minimal parameters/predictors.

32
Q

What are nested models?

A

Where one model includes ALL the variables of another model.

To be nested, all of the original variables need to be in the second model.

33
Q

Can you run a HR with non-nested models?

A

NO

34
Q

How would you compare non-nested models then?

A

For example, when comparing a QUADRATIC model with a PIECEWISE model - non-nested models are compared using fit indices (AIC/BIC) rather than nested model tests.

35
Q

What do both quadratic and piecewise models (non-nested models) include?

A

Two parameters: linear and quadratic (quadratic model), or piece one and piece two (piecewise model).

36
Q

What is a quadratic association?

A

Non-linear - modelling it means you have two parameters (a linear and a quadratic term).

37
Q

What is model specification?

A

Picking the form plus variables (parameters of a model) to use.

38
Q

True or false: parsimony means more explanatory power

A

TRUE

39
Q

In linear models, what is the negative log likelihood often known as?

A

The sum of squared deviations

40
Q

The sum of squared deviations is also known as the ______ in linear models

A

negative log likelihood

41
Q

True or false: most models work by maximising the log likelihood (LL)

A

True

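(A sketch of the LL/SSE link in linear models, assuming Python/statsmodels with simulated data: the model with the smaller sum of squared residuals has the higher log likelihood.)

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 100
df = pd.DataFrame({"x": rng.uniform(-2, 2, size=n)})
df["y"] = df["x"] ** 2 + rng.normal(scale=0.5, size=n)   # genuinely curved relationship

linear = smf.ols("y ~ x", data=df).fit()
quadratic = smf.ols("y ~ x + I(x**2)", data=df).fit()

# better fit = lower sum of squared residuals = higher log likelihood
print(linear.ssr, linear.llf)
print(quadratic.ssr, quadratic.llf)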
42
Q

How is significance testing different to finding out how GOOD the model is compared to others?

A

Sig testing is finding out if your predictor variable is associated with the outcome variable more than by chance.

In model comparison there is more than one model, so there is NO absolute threshold for our metrics. Instead, compare each metric or fit index to those of another model - you need more than one model for the comparison.

43
Q

What does a high log likelihood mean?

A

Better goodness of fit of that model. (In linear models, the NEGATIVE log likelihood corresponds to the sum of squared deviations, so a higher LL means smaller squared deviations.)

44
Q

If you see “How likely is it that we would observe a score of 6, if the mean is 5?”, where would this kind of statement arise when considering model comparisons?

A

Log Likelihood

45
Q

Why is a piecewise model always going to have a higher LL than a simple linear model?

A

Because the extra line means there are more parameters, so it will be a closer fit to the data.

46
Q

Will a piecewise model always be a better model than a simple linear one because of its higher LL?

A

No. More complex models are not always generalisable. For example, with a small sample size the model may not be useful for the population of interest.

47
Q

What are the measures called that account for models that are not parsimonious / have added complexity?

A

Fit indices.
Two common ones:

The Akaike Info Criterion (AIC) and the Bayesian Info Criterion (BIC)

Both equations use the LL and take into account the number of parameters - always represented by the letter K.

48
Q

What is the difference between the fit indices - AIC and BIC that account for complexity in non-nested models?

A

While both use the LL and take into account the number of parameters (K), the BIC ALSO takes into account the sample size

49
Q

Is BIC better to use over AIC?

A

Not necessarily - while BIC takes into account the sample size, it’s best to look at both and hope they agree. Both scores should be lower for the model that is closer to the truth

50
Q

True or false: The AIC and BIC are giving the relative quality of a model - It is just RELATIVE to another model. They mean NOTHING on their own

A

True

51
Q

What do you use an AIC or BIC for?

A

To select the best model - and best means both the HIGHEST likelihood and the fewest parameters (i.e. a simpler, more parsimonious model)

52
Q

Would you want a high or low AIC/BIC score?

A

Low. Lower is better - it reflects a better trade-off between fit and complexity.

53
Q

What is the theoretic reason to get AIC and BIC for non-nested models?

A

Because we are trying to compare the models, and comparing log likelihoods across models doesn’t work for linear vs. piecewise, as the piecewise model always has a higher LL (more parameters). So, to account for the complexity of the piecewise model (its closer fit to the data) we need a metric that incorporates how many parameters are used.

AIC = 2K - 2LL (penalises the number of parameters K)
BIC = K·ln(N) - 2LL (also penalises via the sample size N)
(both are relative to another model)

The AIC and BIC give RELATIVE comparisons only (they mean nothing as absolute values on their own)
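(A closing sketch, assuming Python/statsmodels with simulated data and a hypothetical knot at x = 0; statsmodels derives .aic and .bic from the LL and parameter count, and the model with the lower pair is preferred.)

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 100
df = pd.DataFrame({"x": rng.uniform(-2, 2, size=n)})
df["y"] = np.abs(df["x"]) + rng.normal(scale=0.5, size=n)   # relationship that bends at 0

# piecewise basis: one slope below the knot, another above it
df["piece1"] = np.minimum(df["x"], 0)
df["piece2"] = np.maximum(df["x"], 0)

linear = smf.ols("y ~ x", data=df).fit()
piecewise = smf.ols("y ~ piece1 + piece2", data=df).fit()

# lower AIC/BIC = better trade-off between likelihood and number of parameters
print("linear:   ", linear.aic, linear.bic)
print("piecewise:", piecewise.aic, piecewise.bic)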