One-Way ANOVA: Assumptions, Repeated Measures, Post Hoc Tests, and Effect Sizes Flashcards

1
Q

What is variance?

A

The average squared distance of each value in the sample from the mean.

2
Q

In ANOVA, what is variance?

A

It identifies the relative location of several group means.

Each sample mean is treated as a data point, and we look at the variance of those means rather than the variance of the individual data points.

3
Q

How is ANOVA similar to regression in terms of means?

A

ANOVA is about the means of groups, whereas regression is not about variance in group means.

4
Q

What types of variance does ANOVA take into account?

A

Between and within group variance

5
Q

When seeing if an ANOVA is statistically significant, what does statistical significance mean?

A

It tells us not just whether means from more than one group are different, BUT whether the differences in those means are GREATER than you would expect by chance.

6
Q

How are ANOVAs and T tests different?

A

In an ANOVA we want to see whether the differences in means across MORE THAN TWO GROUPS are statistically significant; a t-test is limited to comparing only two groups.

7
Q

What can you notice regarding research questions for ANOVAs?

A

The DV is continuous.

There are at least three groups - e.g., three or four different groups you would put people into - and you then compare these groups, each of which has its own mean and variance.
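
A minimal sketch of what such a design looks like in practice (Python with scipy is assumed here, and the group scores are made up purely for illustration):

from scipy import stats

# Hypothetical continuous DV measured in three independent groups
group_a = [24, 27, 30, 22, 26]   # e.g., treatment A scores (made-up data)
group_b = [31, 33, 29, 35, 30]   # e.g., treatment B scores (made-up data)
group_c = [25, 24, 28, 23, 27]   # e.g., control scores (made-up data)

# One-way ANOVA: tests whether at least one group mean differs from the others
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")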

8
Q

What is ANOVA?

A

Like a t-test, in that it compares means between groups.

9
Q

What is the null hypothesis for an ANOVA?

A

That the means for all groups are identical

10
Q

If we reject the null hypothesis for an ANOVA, we are saying…

A

That at least one group mean IS different, but we don't know which one yet - we have to figure that out.

11
Q

A statistically significant ANOVA indicates
A. Not all means are identical
B. At least one group mean differs from the rest
C. More than one group mean may be different
D. All of the Above

A

D

12
Q

True or false: In an ANOVA we can tell which group has a different mean through reading the output?

A

NO. We must do a post hoc test, because otherwise we only know that at least one of the groups is not like the others.

13
Q

Why can we not use multiple t-tests to compare each pair of group means?

A

Because we can't look at more than one IV at a time.
It would also inflate our Type 1 error rate (the chance of rejecting the null hypothesis when we should not).
When assumptions are met, ANOVA is also more powerful than t-tests for comparing two or more groups.

14
Q

Why is it not a good idea to run multiple tests without correcting for multiple comparisons?

A

Because the more tests you run without good reason, the bigger the chance you'll get a Type 1 error.

15
Q

True or false: if you run thousands of tests with alpha level .05, eventually you would get a significant result by chance

A

True

16
Q

What is the family-wise alpha error rate?

A

The chance of having at LEAST one false positive across a series of comparisons or tests

It depends on the decision-wise error rate AND the number of comparisons.

17
Q

What is the Decision wise error rate?

A

The probability of a false positive within a SINGLE comparison or test.

The family wise error rate is dependent on the decision wise error rate AND the number of comparisons

18
Q

What is the difference between the decision-wise error rate and the family-wise error rate?

A

The family-wise error rate refers to having at least one false positive across a series of tests, whereas the decision-wise error rate refers to a false positive within a SINGLE test.

19
Q

Why is ANOVA more powerful than T test for more than two groups when assumptions are met?

A

Because it uses pooled variance estimates across all groups, and you have a larger sample size.

20
Q

True or false: ANOVA allows us to evaluate all means in a single hypothesis test, while keeping the alpha level at 0.05

A

True

21
Q

If we ran three tests - so three comparisons - the family-wise error rate, according to the math, would be about .14. ANOVA evaluates the relative location of all group means at once, so we can do a single test and keep the error rate at 5%. What is the only downside of this test?

A

It only tells us whether group means are different, NOT which ones or how many are different.
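
As a check on the .14 figure: for m independent tests at a decision-wise alpha of .05, the family-wise error rate is 1 - (1 - .05)^m, so for three tests it is 1 - 0.95^3 ≈ 0.143. A tiny sketch (Python, variable names are just illustrative):

alpha = 0.05          # decision-wise error rate
m = 3                 # number of comparisons
family_wise = 1 - (1 - alpha) ** m
print(round(family_wise, 3))   # approximately 0.143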

22
Q

When you see:
One-way ANOVA,
or Two-way ANOVA,
what does the "One" or "Two" refer to?

A

The number of FACTORS (aka independent variables) in the test.

So a one-way ANOVA is a single-factor ANOVA. The number does not refer to the number of groups whose means we are comparing - that can be any number of groups. Instead, anything beyond a single-factor ANOVA looks at how more than ONE IV affects the DV.

23
Q

You have this research question:

• Which treatment is most effective for decreasing depressive symptoms: cognitive-behaviour therapy, meditation, or the combination of these two treatments?

Is this a one-way or two-way ANOVA?

A

ONE-way, because the only GROUP VARIABLE (IV/FACTOR) is treatment type.

If we were also interested in how gender affected the decrease in depressive symptoms, and how gender and treatment type interacted, gender would be a second factor in a two-way ANOVA.

24
Q

• Which treatment is most effective for decreasing depressive symptoms in men and women: cognitive-behaviour therapy, meditation, or the combination of these two treatments?

A

Two way ANOVA - two factors - treatment type/gender

25
Q

You have this research question:

Which treatment is most effective for decreasing depressive symptoms in men and women: cognitive-behaviour therapy, meditation, or the combination of these two treatments?

What analysis would you run?

A

Two way ANOVA - two factors - treatment type/gender

26
Q

In ANOVA, what is the unexplained variance due to chance called?

A

Residual Sum of Squares

27
Q

In ANOVA, what is the explained variance - i.e., how much variability is accounted for by the independent variable?

A

Model Sum of Squares, also known as variability between group means

28
Q

Variability between group means in anova is also known as..

A

Model Sum of Squares

29
Q

In ANOVA what is the total variability between scores called

A

Total Sum of Squares

30
Q

What is the between groups variability also known as?

A

Model Sum of Squares

31
Q

What does this equation tell us?

SS(T) = SS(M) + SS(R)

A

The TOTAL sum of squares is equal to the variability between group means PLUS the residual variability (everything we couldn’t explain)
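
A small sketch of the decomposition with made-up data (Python with numpy assumed); it simply verifies that the model and residual components add up to the total:

import numpy as np

groups = [np.array([24., 27, 30, 22, 26]),
          np.array([31., 33, 29, 35, 30]),
          np.array([25., 24, 28, 23, 27])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# SS(T): every score's squared distance from the grand mean
ss_total = ((all_scores - grand_mean) ** 2).sum()

# SS(M): each group mean's squared distance from the grand mean, weighted by group size
ss_model = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# SS(R): each score's squared distance from its own group mean
ss_residual = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_total, ss_model + ss_residual)   # the two numbers should match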

32
Q

To determine whether treatment group means are significantly different, we compare _____

A

The amount of variability explained by the model, SS(M), COMPARED TO the residual variability, SS(R).

So EXPLAINED compared to UNEXPLAINED variance.

And for the effect to be significant, MORE variability has to be explained by the model than by the residuals; otherwise it's a bad model.

33
Q

How do we determine whether the model explains more variability than the residuals?

A

The F Ratio

34
Q

What does the F ratio tell us?

A

Whether the model explains more variability than the residuals - i.e., whether the effect is significant.

35
Q

The F ratio equation is
F = MS(M) / MS(R).

So it is similar to the total sum of squares equation with SS(M) and SS(R), BUT... how are they different?

A

The F ratio is looking at the MEAN sum of squares instead of just the sum of squares.

36
Q

Specifically, how do you get from the sum of squares for the model, SS(M), and the sum of squares for the residuals, SS(R), as seen in the equation for the total sum of squares,

TO the mean sum of squares for the model, MS(M), and the mean sum of squares for the residuals, MS(R)?

A

MS(M) is the sum of squares for the model divided by the degrees of freedom for the model (k - 1).

MS(R) is the sum of squares for the residuals divided by the degrees of freedom for the residuals (n - k).
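
A short sketch (Python, toy data) that turns the sums of squares into mean squares and then into F; scipy's built-in one-way ANOVA is printed at the end only as a cross-check:

import numpy as np
from scipy import stats

groups = [np.array([24., 27, 30, 22, 26]),
          np.array([31., 33, 29, 35, 30]),
          np.array([25., 24, 28, 23, 27])]

k = len(groups)                            # number of groups
n = sum(len(g) for g in groups)            # total sample size
grand_mean = np.concatenate(groups).mean()

ss_model = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_residual = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_model = ss_model / (k - 1)              # MS(M) = SS(M) / (k - 1)
ms_residual = ss_residual / (n - k)        # MS(R) = SS(R) / (n - k)
f_ratio = ms_model / ms_residual

p_value = stats.f.sf(f_ratio, k - 1, n - k)   # upper-tail p for this F
print(f_ratio, p_value)
print(stats.f_oneway(*groups))             # should give the same F and p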

37
Q

If the numerator is smaller than the denominator, we can conclude that the treatment has…

A

No effect, because F will be smaller than 1 and therefore the test is non-significant.

38
Q

An F score smaller than one indicates

A

Non significance

39
Q

An F ratio larger than 1 indicates..

A

Differences in treatments, or in group means, are greater than would be expected by chance.

40
Q

If a numerator is LARGER than the denominator when looking at the F ratio equation, that indicates..

A

Significance

41
Q

If the mean sum of squares for the residuals, MS(R), in a one-way ANOVA is SMALL, what does this indicate?

A

Very little variability within groups

42
Q

If the mean sum of squares for the model, MS(M), in a one-way ANOVA is large, what does this indicate?

A

Large difference in group means

43
Q

If an F ratio is large, the variability attributable to the model is _____ than the variability that occurs simply due to chance, or error

A

greater

44
Q

An F ratio can be large if:

A. The mean sum of squares for the model is large
B. The mean sum of squares residuals is small
C. Both A and B

A

C

45
Q

If the F ratio is larger than the critical value for our set alpha level (usually .05), it tells us one or more group means are..

A

statistically significantly different from each other

47
Q

Why is it so important to check assumptions in ANOVA?

A

If our data violates these assumptions the test results may not be valid.

48
Q

The assumptions assumed in ANOVA are:

A

Independence of observations (observations within each sample are independent)
Interval/ratio data for the DV
Normality - of the residual distributions
Homogeneity of variance

49
Q

What does the independence of observations assumptions mean?

A

Observations within each sample are independent - meaning the scores, or data points, don't influence other participants' scores or variables. It also means a participant can't be in more than one group. The groups themselves must also be independent - scores in one group don't depend on, or influence, scores in another group.

50
Q

What does Interval/Ratio data assumption mean

A

We have continuous data for the outcome variable - interval or ratio data as the DV. Examples of interval data are numerical scales like temperature; examples of ratio data are height, weight, or the duration of some behaviour. The IV in an ANOVA should be categorical. If one of these is not true, run a different test.

51
Q

What does normality assumption for ANOVA mean

A

Check normality within each group (QQ plot or histogram for each cell). Also check for outliers and decide what to do with them, as well as with any exceptions to normality.

52
Q

What does homogeneity of variance assumption for ANOVA mean

A

The variance of each group must be approximately equal, i.e., homogeneous - not unequal variance, aka heterogeneous.

You can check this with Levene's test - it should be non-significant (p greater than .05).
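
One way to run this check (Python with scipy assumed; the group scores are hypothetical):

from scipy import stats

group_a = [24, 27, 30, 22, 26]
group_b = [31, 33, 29, 35, 30]
group_c = [25, 24, 28, 23, 27]

# Levene's test for homogeneity of variance: p > .05 means the assumption holds
stat, p = stats.levene(group_a, group_b, group_c)
print(f"Levene statistic = {stat:.2f}, p = {p:.3f}")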

53
Q

If the assumption of normality is violated in an ANOVA, is this an issue?

A

Not necessarily - if the sample is large (> 100), ANOVA is robust to this violation due to the Central Limit Theorem (CLT).

54
Q

If sample sizes are not equal or are small in an ANOVA, it is more sensitive to violations of normality. True or false?

A

True, but it depends on the smallest group sample size

55
Q

When checking assumptions for an ANOVA, you notice the sample sizes are not equal or are small. What could you do to deal with this?

A

Transform your data to see if the residuals are closer to a normal distribution after the transformation

Use a non-parametric test like the Kruskal-Wallis test
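
For the non-parametric route, a minimal sketch (scipy assumed, made-up scores):

from scipy import stats

group_a = [24, 27, 30, 22, 26]
group_b = [31, 33, 29, 35, 30]
group_c = [25, 24, 28, 23, 27]

# Kruskal-Wallis H test: a rank-based alternative to the one-way ANOVA
h_stat, p = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p:.3f}")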

56
Q

How could you transform your data in an ANOVA?

A
  1. By adding a constant to each number, e.g., x + 1
  2. Converting raw scores to z-scores: (x - m)/SD
  3. Mean centring: (x - m)
(Each of these is sketched below.)
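
A sketch of these three transformations with numpy (assumed; x stands for any array of raw scores):

import numpy as np

x = np.array([24., 27, 30, 22, 26])          # hypothetical raw scores

shifted = x + 1                              # 1. add a constant to each score
z_scores = (x - x.mean()) / x.std(ddof=1)    # 2. convert raw scores to z-scores
centred = x - x.mean()                       # 3. mean centring

print(shifted, z_scores, centred, sep="\n")
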
57
Q

In ANOVA, if sample sizes in each group are equal BUT variances are unequal, is this OK?

A

Yes - with equal sample sizes in each group, ANOVA is fairly robust to unequal variances.

58
Q

The assumption of homogeneity of variance in an ANOVA is violated if the p value for Levene’s test is above or below .05?

A

Below .05

59
Q

If your sample sizes are not equal in an ANOVA, then it is not robust to unequal variances and we need to account for heterogeneity of variance. True or false?

A

True

60
Q

If the p value for Levene's test is less than 0.05, the assumption of homogeneity has been violated and we need to account for heterogeneity of variance. What values can we use instead of the regular F?

A

The Brown-Forsythe or Welch F, df, and p values instead of the regular F. These will give you corrected F statistics.
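
One way to get these corrected values (this sketch assumes the pingouin package, which the cards do not mention, and a long-format table with made-up 'score' and 'group' columns):

import pandas as pd
import pingouin as pg   # assumed third-party package providing Welch's ANOVA

df = pd.DataFrame({
    "score": [24, 27, 30, 22, 26, 31, 33, 29, 35, 30, 25, 24, 28, 23, 27],
    "group": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
})

# Welch's ANOVA does not assume equal variances; it reports a corrected F, df, and p
print(pg.welch_anova(data=df, dv="score", between="group"))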

61
Q

Independent T test is like a special case of ..

A

One way ANOVA

62
Q

Dependent T test or repeated measures T test is like a special case of..

A

Repeated measures ANOVA

63
Q

Therefore, the observations within each sample ___ necessarily need to be independent

A

Don’t

64
Q

So how do we unpack the results of an ANOVA?

A

Post hoc test

65
Q

Why do we need to do post hoc tests?

A

Because an ANOVA output only tells us whether at least one group mean - for example, one treatment - was different. If the result is not statistically significant, you go no further. But if it IS statistically significant, we need to unpack it with a post hoc test.

66
Q

Just like with a t-test, why do we need to look at the direction of effects in an ANOVA?

A

Because it is possible something like a treatment may have made scores worse.

67
Q

Why is a significant result in an ANOVA not enough to assess differences in group means?

A

Because we also need to look at the direction - whether things improved or got worse.

68
Q

Why do we need more information than just what the F ratio gives us regarding whether the group means were different?

A

We don't know which means differ from one another. We need to run a follow-up test to find where the differences lie.

69
Q

What tests can we run to find out exactly where the group mean differences are?

A

Multiple t-tests: inflates our chances of making an error
Orthogonal contrasts/comparisons: planned a priori & hypothesis driven
Post hoc tests: not planned or hypothesized; compare all pairs of means
Trend analysis

70
Q

Why does a post hoc analysis need to account for multiple comparisons?

A

Because we are running multiple tests, and the more we run, the greater the chance of getting a significant result by chance rather than because of a real difference.

71
Q

The family wise error rate, which is the chance of having at LEAST one false positive across a series of comparisons or tests, is higher in post hoc analysis. What do we do to address this?

A

Make the type 1 error threshold (decision wise alpha) stricter

72
Q

To accept effects as significant, what do we do to the decision wise alpha?

A

Make it stricter, so more conservative than .05.

73
Q

What is the Bonferroni method regarding post hoc analysis with decision wise alpha?

A

It is an equation where we divide the alpha level (.05) by the number of tests we have done.

74
Q

If you have done six tests in a post hoc analysis, how would you account for multiple comparisons and treat the alpha level?

A

You would divide the alpha level (.05) by 6.

75
Q

If you are comparing five different pairs of groups, that is the number of individual t-tests you are running in your post hoc analysis. What does the alpha level need to be for a result to be statistically significant?

A

.01 (alpha = .05/5)
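
A sketch of what the follow-up comparisons look like with a Bonferroni-corrected alpha (Python with scipy assumed; the groups and scores are made up):

from itertools import combinations
from scipy import stats

groups = {
    "A": [24, 27, 30, 22, 26],
    "B": [31, 33, 29, 35, 30],
    "C": [25, 24, 28, 23, 27],
}

pairs = list(combinations(groups, 2))     # every pair of group means
alpha_corrected = 0.05 / len(pairs)       # Bonferroni: .05 divided by the number of tests

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_corrected else "not significant"
    print(f"{a} vs {b}: p = {p:.4f} ({verdict} at alpha = {alpha_corrected:.4f})")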

76
Q

In correlation and regression, r squared tells us the explained variance between two correlated variables. What statistics tell us this for an ANOVA or t-test?

A
Eta squared (η²)
Omega squared (ω²)

Cohen's d - for the t-test

77
Q

True or false: null hypothesis significance testing is ALL or NONE

A

True

78
Q

Does statistical significance indicate practical significance?

A

No

79
Q

Significance testing rules out chance, but p values don't tell us about ____

A

effect sizes

81
Q

What is the problem with eta squared (η²) as an effect size?

A

It is a biased effect size estimate, as it overestimates the proportion of variability accounted for.

82
Q

According to Field (2013), what is a small/medium/large effect for eta squared?

A
Small = .01
Medium = .09
Large = .25

83
Q

Are there strict cut offs for categorising effect sizes?

A

NO

84
Q

For a repeated measures ANOVA, what effect size (eta) would you report?

A

Partial eta squared, which is eta squared with the sum of squares for subject variability removed from the denominator.

85
Q

What is a better option than eta squared that is less biased?

A

Omega squared.

86
Q

Why is omega squared a better effect size for ANOVA?

A

It is more accurate, as it uses more information from the data, including the degrees of freedom. This makes it a better estimate for small sample sizes, which it takes into account.
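
A sketch of eta squared and omega squared computed from a one-way ANOVA's sums of squares (Python; the SS values, k, and n are made up, and the formulas are the standard ones: η² = SS(M)/SS(T), ω² = (SS(M) - (k - 1)·MS(R)) / (SS(T) + MS(R))):

# Hypothetical sums of squares from a one-way ANOVA with k groups and n participants
ss_model = 120.0
ss_residual = 300.0
ss_total = ss_model + ss_residual

k, n = 3, 30
ms_residual = ss_residual / (n - k)

eta_squared = ss_model / ss_total
omega_squared = (ss_model - (k - 1) * ms_residual) / (ss_total + ms_residual)

# Omega squared comes out smaller (less biased) than eta squared
print(round(eta_squared, 3), round(omega_squared, 3))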

87
Q

If you have a small sample size ANOVA, which effect size might be best to use?

A. ETA
B. Cohen’s D
C. Omega squared
D. R squared

A

C

88
Q

According to Kirk, what is a small to large effect size for omega squared?

A

small - .01
Medium - .06
Large - .14

89
Q

As an effect size, what does Cohen’s d tell us?

A

The degree of separation between two distributions - how far apart, in standardised units, the means of the two distributions are.

90
Q

What is the cohen’s d equation?

A

d = mean difference/standard deviation

91
Q

According to Cohen’s D, what is a small to large effect size

A

small - .20
Medium - .50
Large - .80

92
Q

How is Cohen's d as an effect size different from eta, omega, and r?

A

It's telling us the DEGREE of separation between two distributions - the standardised difference, in standard deviation units. That's simply the mean difference over the standard deviation.

93
Q

What type of variance is in the denominator for cohen’s d?

A

The pooled standard deviation, s(p), which is based on the pooled variance.
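
A sketch of Cohen's d with a pooled standard deviation (Python with numpy assumed, made-up scores for two groups):

import numpy as np

group_1 = np.array([24., 27, 30, 22, 26])
group_2 = np.array([31., 33, 29, 35, 30])

n1, n2 = len(group_1), len(group_2)
var1, var2 = group_1.var(ddof=1), group_2.var(ddof=1)

# Pooled standard deviation: a weighted average of the two group variances
s_pooled = np.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))

d = (group_1.mean() - group_2.mean()) / s_pooled
print(round(d, 2))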

94
Q

Which effect size is useful for doing meta analysis and comparing effect sizes for different studies?

A

Cohen’s d

Because the calculation can be transformed into other measures, like the correlation coefficient.

95
Q

What do we need to consider when choosing which effect size to report?

A
  1. Sample size
  2. Are you looking at the overall effect size?
  3. Are you looking at the difference between two groups in a post hoc analysis?