Week 4: Multiple Regression Flashcards

1
Q

What is the decision tree for multiple regression? - (4)

A
  • Continuous outcome variable
  • Two or more predictors that are continuous
  • Multiple regression
  • Meets assumptions of parametric tests
2
Q

In simple linear regression,
the outcome variable Y is

A

predicted using the equation of a straight line

3
Q

Multiple regression still uses the same basic equation of …. but the model is more complex

A
4
Q

Multiple regression is the same as simple linear regression except for - (2)

A

for every extra predictor you include, you have to add a coefficient;

so, each predictor variable has its own coefficient, and the outcome variable is predicted from a combination of all the variables multiplied by their respective coefficients plus a residual term

5
Q

Multiple regression equation

A
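The original card presumably showed the equation as an image. A standard form, consistent with the terms listed in the next card (with b0 as the intercept/constant), is:

$$Y_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + \dots + b_n X_{ni} + \varepsilon_i$$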
6
Q

In multiple regression equation, list all the terms - (5)

A
  • Y is the outcome variable,
  • b1 is the coefficient of the first predictor (X1),
  • b2 is the coefficient of the second predictor (X2),
  • bn is the coefficient of the nth predictor (Xn),
  • εi is the difference between the predicted and the observed value of Y for the ith participant.
7
Q

Multiple regression uses the same principle as linear regression in a way that

A

we seek to find the linear combination of predictors that correlate maximally with the outcome variable.

8
Q

Regression is a way of predicting things that you have not measured by predicting

A

an outcome variable from one or more predictor variables

9
Q

Regression can be used to produce a

A

linear model of the relationship between 2 variables

10
Q

Record company interested in creating a model predicting record sales from advertising budget and plays on radio per week (airplay)

  • Example of its MR plotted: number of variables measured, what the vertical axis shows, and what the horizontal and third axes show - (4)
A

It is a three-dimensional scatterplot, which means there are three axes measuring the values of the three variables.

The vertical axis measures the outcome, which in this case is the number of album sales.

The horizontal axis measures how often the album is played on the radio per week.

The third axis, which we can think of as being directed into the page, measures the advertising budget.

11
Q

Can’t plot a 3D plot of MR as shown here

A

for more than 2 predictor (X) variables

12
Q

The overlap in the diagram is the shared variance, which we call the

A

covariance

13
Q

covariance is also referred to as the variance

A

shared between the predictor and outcome variable.

14
Q

What is shown in E?

A

The variance in Album Sales not shared by the predictors

15
Q

What is shown in D?

A

Unique variance shared between Ad Budget and Plays

16
Q

What is shown in C?

A

The variance in Album Sales shared by Ad Budget and Plays

17
Q

What is shown in B?

A

Unique variance shared between Plays and Album Sales

18
Q

What is shown in A?

A

Unique variance shared between Ad Budget and Album Sales

19
Q

If you have two predictors that overlap and correlate a lot then it is a .. model

A

bad model; the predictors can't uniquely explain the outcome

20
Q

In Hierarchical regression, we are seeing whether

A

one model explains significantly more variance than the other

21
Q

In hierarchical regression predictors are selected based on

A

past work, and the experimenter decides in which order to enter the predictors into the model

22
Q

As a general rule for hierarchical regression, - (3)

A

known predictors (from other research) should be entered into the model first in order of their importance in predicting the outcome.

After known predictors have been entered, the experimenter can add any new predictors into the model.

New predictors can be entered either all in one go, in a stepwise manner, or hierarchically (such that the new predictor suspected to be the most important is entered first).

23
Q

Example of hierarchical regression in terms of album sales - (2)

A

The first model allows all the shared variance between Ad budget and Album sales to be accounted for.

The second model then only has the option to explain more variance by the unique contribution from the added predictor Plays on the radio.

24
Q

What is forced entry MR?

A

method in which all predictors are forced
into the model simultaneously.

25
Like HR, forced entry MR relies on
good theoretical reasons for including the chosen predictors.
26
Different from HR, forced entry MR
makes no decision about the order in which variables are entered.
27
Some researchers believe that about forced entry MR that
this method is the only appropriate method for theory testing because stepwise techniques are influenced by random variation in the data and so rarely give replicable results if the model is retested.
28
How to do forced entry MR in SPSS? - (4)
* Analyse --> Regression --> Linear
* Put the outcome in the Dependent box and the predictors (IVs, X) in the Independent(s) box
* Select a range of statistics in the Statistics box (e.g., collinearity diagnostics) and press OK to check the collinearity assumption
* Click Plots to check the assumptions of homoscedasticity and linearity
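A rough, non-SPSS sketch of the same forced-entry fit, assuming a hypothetical album_sales.csv with the advertising (adverts), airplay and attractiveness (attract) predictors from this week's example:

```python
# Hedged sketch of forced-entry multiple regression (all predictors entered at once),
# using statsmodels; the file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

album = pd.read_csv("album_sales.csv")
model = smf.ols("sales ~ adverts + airplay + attract", data=album).fit()
print(model.summary())  # b coefficients, standard errors, t-tests, R^2 and the F-test
```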
29
Why select collinearity diagnostics in the Statistics box for regression? - (2)
This option obtains collinearity statistics such as the VIF and tolerance, checking the assumption of no multicollinearity
30
Multicollinearity exists when there is a
strong correlation between two or more predictors in a regression model.
31
Multicollinearity poses a problem only for multiple regression because
simple regression requires only one predictor.
32
Perfect collinearity exists when at least
two predictors are perfectly correlated (e.g., they have a correlation coefficient of 1)
33
If there is perfect collinearity between predictors it becomes impossible
to obtain unique estimates of the regression coefficients because there are an infinite number of combinations of coefficients that would work equally well.
34
The good news is that perfect collinearity is rare in
real-life data
35
If two predictors are perfectly correlated then the values of b for each variable are
interchangeable
36
The bad news is that less than perfect collinearity is virtually
unavoidable
37
As collinearity increases, there are 3 problems that arise - (3)
* Untrustworthy bs
* Limits the size of R
* Difficulty assessing the importance of predictors
38
As collinearity increases, there are 3 problems that arise: untrustworthy bs - (3)
As collinearity increases, so do the standard errors of the b coefficients. Big standard errors for b coefficients mean that these bs are more variable across samples. The b coefficient in our sample is therefore less likely to represent the population.
39
As collinearity increases, there are 3 problems that arise: limits the size of R - (2)
If two predictors are highly correlated, the second predictor accounts for much of the same variance as the first, so it accounts for very little unique variance. If the two predictors are completely uncorrelated, then the second predictor is likely to account for variance in the outcome different from that accounted for by the first predictor.
40
As collinearity increases, there are 3 problems that arise: importance of predictors - (3)
Multicollinearity between predictors makes it difficult to assess the individual importance of a predictor. If the predictors are highly correlated, and each accounts for similar variance in the outcome, then how can we know which of the two variables is important? Quite simply, we can't tell which variable is important – the model could include either one, interchangeably.
41
One way of identifying multicollinearity is to scan a
correlation matrix of all of the predictor variables and see if any correlate very highly (by very highly I mean correlations above .80 or .90)
42
SPSS produces collinearity diagnostics, which are - (2)
variance inflation factor (VIF) and tolerance
43
The VIF indicates whether a
predictor has a strong linear relationship with the other predictor(s).
44
If VIF statistic is above 10 there is a good reason to worry about
a potential problem of multicollinearity
45
If the VIF statistic is above 10 or approaching 10 then what you would want to do is have a - (2)
look at your variables to see whether they all need to go into the model; if there is a high correlation between 2 predictors (measuring the same thing), then decide whether it is important to include both variables or to take one out and simplify the regression model
46
Related to the VIF is the tolerance statistic, which is its
reciprocal (1/VIF), i.e., the inverse of the VIF
47
For tolerance, a value below 0.2 shows
a potential issue with multicollinearity
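A minimal sketch of how the VIF and tolerance could be computed outside SPSS (statsmodels provides variance_inflation_factor; the file and column names are the hypothetical ones used above):

```python
# Flag predictors with VIF > 10 or tolerance < 0.2 (common rules of thumb from this deck).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

album = pd.read_csv("album_sales.csv")                        # hypothetical file
X = sm.add_constant(album[["adverts", "airplay", "attract"]])  # design matrix with intercept
for i, name in enumerate(X.columns):
    if name == "const":
        continue                                              # skip the intercept column
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")
```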
48
In Plots in SPSS, you put - (2)
ZRESID on Y and ZPRED on X; a plot of standardised residuals against standardised predicted values, used to assess homoscedasticity
49
What is ZPRED? - (2)
(the standardized predicted values of the dependent variable based on the model). These values are standardized forms of the values predicted by the model.
50
What is ZRESID? - (2)
(the standardized residuals, or errors). These values are the standardized differences between the observed data and the values that the model predicts.
51
A plot of SRESID (studentised residuals) on y axis and ZPRED on x axis will show up any
heteroscedasticity also
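A rough matplotlib sketch of the ZRESID-against-ZPRED plot described above (again using the hypothetical album data; a random, evenly spread cloud suggests linearity and homoscedasticity):

```python
# Plot standardized residuals (ZRESID) against standardized predicted values (ZPRED).
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
from scipy import stats

album = pd.read_csv("album_sales.csv")                      # hypothetical file
model = smf.ols("sales ~ adverts + airplay + attract", data=album).fit()
zpred = stats.zscore(model.fittedvalues)                    # standardized predicted values
zresid = stats.zscore(model.resid)                          # standardized residuals
plt.scatter(zpred, zresid)
plt.axhline(0, linestyle="--")
plt.xlabel("ZPRED")
plt.ylabel("ZRESID")
plt.show()
```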
52
SPSS in multiple linear regression gives descriptive outputs, which are - (2)
* Basic means and also a table of correlations between variables.
* This is a first opportunity to determine whether there is high correlation between predictors, otherwise known as multicollinearity
53
SPSS also gives summary of overall model for example whether model is successful in predicting
record sales
54
In model summary of SPSS, it captures how the model or models explain
variance in terms of R squared, and more importantly how R squared changes between models and whether those changes are significant.
55
Diagram of model summary
56
What does R^2 measure?
measure of how much of the variability in the outcome is accounted for by the predictors
57
The adjusted R^2 gives us an estimate of
fit in the general population
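In symbols (a standard formulation, where k is the number of predictors and N the sample size):

$$R^2 = \frac{SS_M}{SS_T}, \qquad R^2_{adj} = 1 - (1 - R^2)\,\frac{N - 1}{N - k - 1}$$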
58
The Durbin-Watson statistic, if specified, tells us whether the - (2)
assumption of independent errors is tenable (values less than 1 or greater than 3 raise alarm bells); the closer the value is to 2 the better = assumption met
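For reference, the statistic is computed from the ordered residuals e_t (a standard definition, not specific to SPSS):

$$d = \frac{\sum_{t=2}^{N}(e_t - e_{t-1})^2}{\sum_{t=1}^{N} e_t^2}$$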
59
SPSS output for MR = ANOVA table which performs
F-tests for each model
60
SPSS output for MR contains ANOVA that tests whether the model is
significantly better at predicting the outcome than using the mean as a 'best guess'
61
The F-ratio represents the ratio of
improvement in prediction that results from fitting the model, relative to the inaccuracy that still exists in the model
62
We are told the sum of squares for model (SSM) - regression line in output which represents
improvement in prediction resulting from fitting a regression line to the data rather than using the mean as an estimate of the outcome
63
We are told residual sum of squares (Residual line) in this output which represents
total difference between the model and the observed data
64
DF for Sum of squares Model for regression line is equal to
number of predictors (e.g., 1 for first model, 3 for second)
65
DF for Sum of Squares Residual for MR is - (2)
Number of observations (N) minus the number of coefficients in the regression model (e.g., M1 has 2 coefficients - one for the predictor and one for the constant; M2 has 4 - one for each of the 3 predictors and one for the constant)
66
The average sum of squares (mean square) in the ANOVA table is calculated
for each term (SSM, SSR) by dividing the SS by the df.
67
How is the F ratio calculated in this ANOVA table?
F-ratio is calculated by dividing the average improvement in prediction by the model (MSM) by the average difference between the model and the observed data (MSR)
68
If the improvement due to fitting the regression model is much greater than the inaccuracy within the model then value of F will be
greater than 1, and SPSS calculates the exact probability (p-value) of obtaining that value of F by chance
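Putting the last few cards together (k = number of predictors, N = number of observations):

$$MS_M = \frac{SS_M}{k}, \qquad MS_R = \frac{SS_R}{N - k - 1}, \qquad F = \frac{MS_M}{MS_R}$$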
69
What happens if b values are positive?
there is a positive relationship between the predictor and the outcome,
70
What happens if the b value is negative?
it represents a negative relationship between the predictor and outcome variable.
71
What do the b values in this table tell us about the relationships between the predictors and the outcome variable? - (3)
All indicate positive relationships: as advertising budget increases, record sales (the outcome) increase; as plays on the radio increase, so do record sales; and as attractiveness of the band increases, record sales increase
72
The b-values also tell us, in addition to direction of relationship (pos/neg) , to what degree each
predictor affects the outcome if the effects of all other predictors are held constant:
73
B-values tell us to what degree each predictor affects the outcome if the effects of all other predictors are held constant, e.g., advertising budget - (3)
(b = 0.085): This value indicates that as advertising budget (x) increases by one unit, record sales (outcome, y) increase by 0.085 units. This interpretation is true only if the effects of attractiveness of the band and airplay are held constant.
74
Standardised versions of b-values are much easier to interpret as
they are not dependent on the units of measurement of the variables
75
The standardised beta values tell us that
the number of standard deviations that the outcome will change as a result of one standard deviation change in the predictor.
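Equivalently (a standard relationship, where s_X and s_Y are the sample standard deviations of the predictor and the outcome):

$$\beta_i = b_i \times \frac{s_{X_i}}{s_Y}$$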
76
The standardized beta values are all measured in standard deviation units and so are directly comparable: therefore, they provide a
better insight into the ‘importance’ of a predictor in the model
77
If two predictor variables (e.g., advertising budget and airplay) have virtually identical standardised beta values (0.512, and 0.511) it shows that
both variables have a comparable degree of importance in the model
78
Advertising budget’s standardised beta value of 0.511 (with SD of £485,655) shows us - (2)
As advertising budget increases by one standard deviation (£485,655), record sales increase by 0.511 standard deviations. This interpretation is true only if the effects of attractiveness of the band and airplay are held constant
79
The confidence intervals of unstandardised beta values are boundaries constructed such that
in 95% of samples these boundaries contain the true value of b
80
If we collected 100 samples and calculated CI for b, we are saying that 95% of these CIs of samples would contain the
true (pop) value of b
81
A good regression model will have a narrow (small) CI, indicating that the
value of b in this sample is close to the true value of b in the population
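For reference, the 95% confidence interval for each b is built in the usual way from its standard error:

$$b \pm t_{(N - k - 1),\,0.975} \times SE_b$$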
82
A bad regression model will have CIs that cross zero, indicating that
in some samples the predictor has a negative relationship to the outcome whereas in others it has a positive relationship
83
In the image below, which are the two best predictors based on CIs, and which one isn’t as good? - (2)
The two best predictors (advertising and airplay) have very tight confidence intervals, indicating that the estimates for the current model are likely to be representative of the true population values. The interval for attractiveness is wider (but still does not cross zero), indicating that the parameter for this variable is less representative, but nevertheless significant.
84
If you select part and partial correlations in the descriptives box, there will be another coefficients table, which looks like this:
85
The zero-order correlations are the simple
Pearson's correlation coefficients
86
The partial correlations represent the
relationships between each predictor and the outcome variable, controlling for the effects of the other two predictors.
87
The part correlations - (2)
represent the relationship between each predictor and the outcome, controlling for the effect that the other two variables have on the outcome; they represent the unique relationship each predictor has with the outcome
88
In this table , zero-order correlation is calculated by - (2)
variance in the outcome explained by the predictor (including variance shared with the other predictor) divided by the total variance in the outcome: (A+C)/(A+B+C+E)
89
Partial correlations in example is calculated by - (2)
unique variance in the outcome explained by the predictor divided by the variance in the outcome not explained by the other predictors: A/(A+E)
90
Part correlations are calculated by - (2)
unique variance in the outcome explained by the predictor divided by the total variance in the outcome: A/(A+B+C+E)
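For the two-predictor case these can also be written in terms of the simple correlations (standard formulas; subscripts 1 and 2 index the predictors and Y the outcome):

$$r_{Y1\cdot 2} = \frac{r_{Y1} - r_{Y2}\,r_{12}}{\sqrt{(1 - r_{Y2}^2)(1 - r_{12}^2)}}, \qquad sr_1 = \frac{r_{Y1} - r_{Y2}\,r_{12}}{\sqrt{1 - r_{12}^2}}$$

where r_{Y1·2} is the partial correlation and sr_1 the part (semi-partial) correlation for predictor 1.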
91
At each stage of regression SPSS gives summary of any variables that have not yet been
entered into the model.
92
If the average VIF is substantially greater than 1 then the regression
may be biased
93
Tolerance below 0.1 indicates a
serious problem.
94
Tolerance below 0.2 indicates a
a potential problem
95
How to interpret this image in terms of collinearity - VIF and tolerance
For our current model the VIF values are all well below 10 and the tolerance statistics all well above 0.2; therefore, we can safely conclude that there is no collinearity within our data.
96
We can produce casewise diagnostics to see a (2)
summary of residual statistics, to be examined for extreme cases, and to see whether individual scores (cases) influence the modelling of the data too much
97
SPSS casewise diagnostics shows cases that have a standardised residuals that are (2)
less than -2 or greater than 2 (we expect about 5% of our cases to do that and 95% to have standardised residuals within about +/- 2)
98
If we have a sample of 200 then expect about .. to have standardised residuals outside limits
10 cases (5% of 200)
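A hedged sketch of the same casewise check (the internally studentized residuals in statsmodels play the role of SPSS's standardized residuals; the file and column names are the hypothetical ones used above):

```python
# Flag cases whose standardized residual falls outside +/- 2 (expect roughly 5% of cases).
import pandas as pd
import statsmodels.formula.api as smf

album = pd.read_csv("album_sales.csv")                      # hypothetical file
model = smf.ols("sales ~ adverts + airplay + attract", data=album).fit()
zresid = model.get_influence().resid_studentized_internal   # ~ standardized residuals
flagged = album[abs(zresid) > 2]
print(f"{len(flagged)} of {len(album)} cases outside +/-2 (expect about 5%)")
```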
99
What does this casewise diagnostic show? - (2)
* 99% of cases should lie within ±2.5, so we expect about 1% of cases to lie outside these limits
* From the cases listed, it is clear that two cases (1%) lie outside the limits (cases 164 and 179; case 164 has a residual of about 3 and should be investigated further) - 1% conforms to an accurate model
100
If there are many more cases (more than 5% of the sample size) in the casewise diagnostics then we have likely
broken the assumptions of the regression
101
If cases are a large number of standard deviations from the mean, then in the casewise diagnostics we may want to
investigate and potentially remove them because they are ‘outliers’
102
Assumptions we need to check for MR - (8)
* Continuous outcome variable and continuous or dichotomous predictor variables
* Independence = all values of the outcome variable should come from different participants
* Non-zero variance, as predictors should have some variation in value, e.g., variance ≠ 0
* No outliers
* No perfect or high collinearity
* Histogram to check for normality of errors
* Scatterplot of ZRESID against ZPRED to check for linearity and homoscedasticity = looking for random scatter
* Independent errors (Durbin-Watson)
103
Diagram of assumption of homoscedasticity and linearity of ZRESID against ZPRED
104
Obvious outliers on a partial plot represent cases that might have
undue influence on a predictor’s b coefficient
105
Non-linear relationships and heteroscedasticity can be detected using
partial plots as well
106
What does this partial plot show? - (2)
the partial plot shows the strong positive relationship to album sales. There are no obvious outliers and the cloud of dots is evenly spaced out around the line, indicating homoscedasticity.
107
What does this plot show? (2)
the plot again shows a positive relationship to album sales, but the dots show funnelling. There are no obvious outliers on this plot, but the funnel-shaped cloud indicates a violation of the assumption of homoscedasticity.
108
P plot and histogram of normally distributed residuals
109
P plot and histogram for a skewed distribution
110
What if the assumptions for regression are violated?
you cannot generalize your findings beyond your sample
111
If residuals show problems with heteroscedasticity or non-normality then try to
transform the raw data – but this won’t necessarily affect the residuals!
112
If you have a violation of the linearity assumption then you could see whether you can do
logistic regression instead
113
If R^2 is 0.374 (outcome var in productivity and 3 predictors) then it shows that
37.4% of the variance in productivity scores was accounted for by 3 predictor variables
114
In the ANOVA table, it tells us whether the model is significantly improved from the baseline model, which is
the model we would have if we assumed no relation between the predictor variables and the outcome variable (a flat regression line; no association between these variables)
115
This table tells us in terms of standardised beta values that (outcome is productivity)
Holidays had a standardized beta coefficient of 0.031 whereas cake had a much higher standardized beta coefficient of 0.499, which tells us that the amount of cake given out was a much better predictor of productivity than the amount of holidays taken. For pay we have a beta coefficient of 0.323, which tells us that pay was also a pretty good predictor of productivity in the model, but slightly less so than cake
116
What does this table tell us in terms of significance? - (3)
- P value for holidays is 0.891, which is not significant
- P value for cake is 0.032, which is significant
- P value for pay is 0.012, which is significant
117
What does this image show in terms of VIF?
All VIF values are below 10 here, showing we are unlikely to have a problem with multicollinearity, so we need not worry about that for these data
118
For hierarchical regression you press Next to add
another predictor - block 2 of 2
119
In ANOVA it is comparing M2 with all its predictor variables with
baseline not M1
120
To see if M2 is an improvement of M1 in HR we need to look at ... in model summary
change statistics
121
What does this change statistic show in terms of M2 and M1
M2 explains an extra 7.5% of the variance, which is significant
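A rough sketch of the same hierarchical comparison outside SPSS: fit the two nested models, look at the change in R², and test it with an F-test (anova_lm compares nested OLS fits; the file and column names are the hypothetical ones used above):

```python
# Compare a baseline model (M1) with an extended model (M2) and test the R-squared change.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

album = pd.read_csv("album_sales.csv")                      # hypothetical file
m1 = smf.ols("sales ~ adverts", data=album).fit()
m2 = smf.ols("sales ~ adverts + airplay + attract", data=album).fit()
print(f"R-squared change: {m2.rsquared - m1.rsquared:.3f}")
print(anova_lm(m1, m2))                                     # F-test of the improvement
```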
122
Each of these beta values shown in table has an associated standard error indicating to what extent and used to determine (2)
values would vary across different samples, and these standard errors are used to determine whether or not the b-value differs significantly from zero
123
t-statistic can be derived that tests whether a b-value is
significantly different from 0.
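In symbols (as the quiz card further down also notes, the divisor is the standard error of b, not its standard deviation):

$$t = \frac{b}{SE_b}, \qquad df = N - k - 1$$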
124
In simple regression, a significant value of t indicates that the but in multiple regression (2)
slope of the regression line is significantly different from horizontal, but in multiple regression, it is not so easy to visualize what the value tells us.
125
The t-test in MR is conceptualised as a measure of whether the
predictor is making a significant contribution to the model
126
In MR, if the t-test associated with a b-value is significant (if the value in the column labelled Sig. is less than .05) then the
predictor is making a significant contribution to the model.
127
In MR, the smaller the value of Sig. and the larger the value of t, the greater the
contribution of that predictor.
128
For this output, interpret whether the predictors are significant predictors of record sales and what the magnitude of the t-statistics tells us about their impact on record sales - (2)
For this model, the advertising budget (t(196) = 12.26, p < .001), the amount of radio play prior to release (t(196) = 12.12, p < .001) and attractiveness of the band (t(196) = 4.55, p < .001) are all significant predictors of record sales. From the magnitude of the t-statistics we can see that the advertising budget and radio play had a similar impact, whereas the attractiveness of the band had less impact.
129
In regression it determines the strength and character of the relationship between
one DV (usually denoted as Y) and a series of other variables (known as IVs)
130
What is an example of a continuous variable?
we are talking about a variable with an infinite number of possible real values within a given interval, so something like height or age
131
What is an example of a dichotomous variable?
a variable that can only hold two distinct values, like male and female
132
If outliers are present in the data then they impact the
line of best fit
133
Diagram of outliers
134
You would expect about 1% of cases to lie well outside the line of best fit, so in a large sample if you have
one or two outliers then it could be okay
135
Rule of thumb to check for outliers is to check if there are any data points that
are over 3 SD from the mean
136
All residuals should lie within ..... SDs for no outliers /normal amount of outliers
-3 and 3 SD
137
Which variables (if any) are highly correlated?
Weight, Activity, and the interaction between them are statistically significant
138
What do homoscedasticity and heteroscedasticity mean? - (2)
Homoscedasticity: similar variance of residuals (errors) across the variable continuum, e.g. equally accurate.
Heteroscedasticity: variance of residuals (errors) differs across the variable continuum, e.g. not equally accurate
139
P plot plots a normal distribution against
your distribution
140
P plot can check for
normally distributed errors/residuals
141
Diagram of P plots for normal, skewed to left (pos) and skewed to right (neg) distributions
142
Durbin-Watson test values of 0,2,4 show that... - (3)
* 0 = errors between pairs of observations are positively correlated
* 2 = independent errors
* 4 = errors between pairs of observations are negatively correlated
143
A Durbin-Watson statistic between ... and ... is considered to indicate that the data is not cause for concern = independent errors
1.5 and 2.5
144
If R2 and adjusted R2 are similar, it means that your regression model
‘generalizes’ to the entire population.
145
If R2 and adjusted R2 are similar, it means that your regression model ‘generalizes’ to the entire population. Particularly for
small N, and where results are to be generalized, use the adjusted R2
146
3 types of multiple regression - (3)
1. Standard: To assess the impact of all predictor variables simultaneously
2. Hierarchical: To test predictor variables in a specific order based on hypotheses derived from theory
3. Stepwise: If the goal is accurate statistical prediction from a large number of predictor variables – computer driven
147
Diagram of excluded variables table in SPSS
* Tells us that OCD interpretation of intrusions would not have a significant impact on the model’s ability to predict social anxiety
148
What is multicollinearity?
When predictor variables correlate very highly with each other
149
When checking the assumptions of regression, what does this graph tell you?
Normality of residuals
150
Which of the following statements about the t-statistic in regression is not true?
- The t-statistic is equal to the regression coefficient divided by its standard deviation
- The t-statistic tests whether the regression coefficient, b, is significantly different from 0
- The t-statistic provides some idea of how well a predictor predicts the outcome variable
- The t-statistic can be used to see whether a predictor variable makes a statistically significant contribution to the regression model
The t-statistic is equal to the regression coefficient divided by its standard deviation
151
A consumer researcher was interested in what factors influence people's fear responses to horror films. She measured gender and how much a person is prone to believe in things that are not real (fantasy proneness). Fear responses were measured too. In this table, what does the value 847.685 represent?
The residual error in the prediction of fear scores when both gender and fantasy proneness are included as predictors in the model.
152
A psychologist was interested in whether the amount of news people watch predicts how depressed they are. In this table, what does the value 3.030 represent?
The improvement in the prediction of depression by fitting the model
153
When checking the assumption of the regression, the following graph shows (hint look at axis titles)
Regression assumptions that have been met
154
A consumer researcher was interested in what factors influence people's fear responses to horror films. She measured gender (0 = female, 1 = male) and how much a person is prone to believe in things that are not real (fantasy proneness) on a scale from 0 to 4 (0 = not at all fantasy prone, 4 = very fantasy prone). Fear responses were measured on a scale from 0 (not at all scared) to 15 (the most scared I have ever felt). Based on the information from model 2 in the table, what is the likely population value of the parameter describing the relationship between gender and fear?
Somewhere between −3.369 and −0.517
155
A consumer researcher was interested in what factors influence people's fear responses to horror films. She measured gender (0 = female, 1 = male) and how much a person is prone to believe in things that are not real (fantasy proneness) on a scale from 0 to 4 (0 = not at all fantasy prone, 4 = very fantasy prone). Fear responses were measured on a scale from 0 (not at all scared) to 15 (the most scared I have ever felt). How much variance (as a percentage) in fear is shared by gender and fantasy proneness in the population?
13.5%
156
Recent research has shown that lecturers are among the most stressed workers. A researcher wanted to know exactly what it was about being a lecturer that created this stress and subsequent burnout. She recruited 75 lecturers and administered several questionnaires that measured: Burnout (high score = burnt out), Perceived Control (high score = low perceived control), Coping Ability (high score = low ability to cope with stress), Stress from Teaching (high score = teaching creates a lot of stress for the person), Stress from Research (high score = research creates a lot of stress for the person), and Stress from Providing Pastoral Care (high score = providing pastoral care creates a lot of stress for the person). The outcome of interest was burnout, and Cooper’s (1988) model of stress indicates that perceived control and coping style are important predictors of this variable. The remaining predictors were measured to see the unique contribution of different aspects of a lecturer’s work to their burnout. Which of the predictor variables does not predict burnout?
Stress from research
157
Using the information from model 3, how would you interpret the beta value for ‘stress from teaching’?
As stress from teaching increases by one unit, burnout decreases by 0.36 of a unit.
158
How much variance in burnout does the final model explain for the sample?
80.3%
159
A psychologist was interested in predicting how depressed people are from the amount of news they watch. Based on the output, do you think the psychologist will end up with a model that can be generalized beyond the sample?
No, because the errors show heteroscedasticity.
160
161
The following graph shows:
A. Regression assumptions have been met
B. Non-linearity
C. Heteroscedasticity
D. Heteroscedasticity
A. Regression assumptions have been met
162