WEEK 4: One-Way ANOVA (Independent) Flashcards

1
Q

Lecture Overview:

A

> From t-test to ANOVA
1-way ANOVA: a conceptual approach
Following up a significant ANOVA result

2
Q

Learning Objectives:

A
  • Be able to report the results of a 1-way ANOVA
  • Understand and be able to explain why a Bonferroni correction is necessary
  • Understand and be able to explain the concepts of ‘equal variance’ and ‘sphericity’
  • Understand and be able to explain the difference between a within- and a between-participants design from the perspective of the calculation used in each type of ANOVA
3
Q

From T-Tests to ANOVAs

Why are ANOVAs necessary?

A

T-test:
1 IV with 2 conditions or groups
1) Music, 2) Silence
1) Caffeine, 2) Placebo

ANOVA:
1 Factor with more than 2 levels
1) Rock music, 2) Classical music, 3) Silence
1) Caffeine, 2) Alcohol, 3) Placebo

Until now you’ve been referring to an IV with 2 conditions or 2 groups. With an ANOVA the language changes a little and we now talk about a ‘Factor’ with a number of levels.

IV (e.g. music) = Factor
Conditions (e.g. rock) = Levels

ANOVAs are necessary because, if we relied on t-tests alone, multiple tests would have to be conducted. If we had 3 levels to test, we could treat the silence condition as a control and do 2 t-tests, or we could do 3 t-tests so that classical music is also tested directly against rock music.

With a cut-off of .05 (p value) as our criterion for ‘significance’, we are likely to see an effect that doesn’t represent the population in 1 out of every 20 cases (5% of the time), and we won’t know whether that is the first comparison we do or the last!

This means…
If we were to complete 3 separate t-tests…

t test 1 = 5% chance of error
t test 2 = 5% chance of error
t test 3 = 5% chance of error

So each of our comparisons has a 5% chance of being erroneous, and because we are testing the same samples repeatedly, across the three comparisons our chance of thinking that we have an effect when we don’t rises to roughly 15% (strictly 1 - .95^3 ≈ 14.3%).

With 10 t-tests, adding up the error in the same way suggests we would make a Type I error around 50% of the time (the exact familywise rate is 1 - .95^10 ≈ 40%).

We fix this using the Bonferroni correction…
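
As a rough illustrative sketch (not part of the lecture), the exact chance of at least one false positive across k independent tests, each run at an alpha of .05, can be worked out as 1 - (1 - .05)^k:

```python
# Familywise error rate for k independent tests, each run at alpha = .05
alpha = 0.05
for k in (1, 3, 10):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests: chance of at least one Type I error ≈ {familywise:.3f}")
# 1 test ≈ .050, 3 tests ≈ .143, 10 tests ≈ .401
```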

4
Q

Bonferroni correction

A

But if we divide the .05 by the number of comparisons that we do, we get ourselves back to an overall probability of .05 of making a Type I error.

t test 1 = 5% chance of error
t test 2 = 5% chance of error
t test 3 = 5% chance of error

.05 / 3 ≈ .0167

If you had 4 t-tests, what would you divide .05 by?… 4 (.05 / 4 = .0125)
What would the alpha be with 10 t-tests?… .05 / 10 = .005

So we could just conduct a number of t-tests and lower our cut-off criterion for significance. That would be fine if you only had 3 t-tests to run, but with 9 or 10 t-tests you would, for a start, end up with an extremely low criterion for significance, and secondly you could find that nothing is significant.

So, to avoid having to run multiple t-tests, you can put all your variables into one test that we call an ANOVA.
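
A minimal sketch of the correction itself (the 3- and 10-comparison figures match this card; 4 comparisons is added purely for illustration):

```python
# Bonferroni correction: divide the overall alpha by the number of comparisons
alpha = 0.05
for n_comparisons in (3, 4, 10):
    corrected = alpha / n_comparisons
    print(f"{n_comparisons} comparisons -> per-test cut-off {corrected:.4f}")
# 3 -> 0.0167, 4 -> 0.0125, 10 -> 0.0050
```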

5
Q

Why we use ANOVAs: Basic terms

A

Basically, without using an ANOVA, the more conditions you have, the more t-tests need to be conducted, so the more the point at which you can accept significance (usually a p value of .05) needs to be lowered, making it harder to say that results are significant.

6
Q

1-Way Independent ANOVA

Introduction

A

> The ANOVA outcome indicates whether there is a difference between any of your conditions
t-tests are conducted following a significant ANOVA
If the ANOVA is not significant, t-tests are not conducted

This analysis tests all your conditions against each other and tells you whether it is likely that any of your comparisons are significant.

So if the ANOVA is significant, some or all of your comparisons are likely to be significant; BUT if the ANOVA is not significant, then your comparisons won’t be significant (once the number of comparisons is taken into account).

7
Q

The aim of stats tests

A

Remember, the aim of stats tests is to divide the effect by the error and produce a ratio that tells us whether the effect we have is bigger than the error.

The ANOVA does exactly this.

The ANOVA calculation takes the effect, which is the difference between the groups, and divides it by the error, which is the difference within the groups.

8
Q

Variance Recap:

A

It is the variability of the data, how spread out the data is around a certain point.

Calculated by determining how much each score differs from the mean average of the sample, squaring each value, then adding them all up and dividing by the number of scores (squaring accounts for there being both negative and positive values).

  • Dividing by n gives the variance of the data you have (appropriate when using the whole population)
  • Dividing by n-1 gives an estimate of the variance in the population when working with a sample of a population (the ‘mean square’)

It is difficult to see how variance values relate to the measure you have (the dependent variable), so you take the square root in order to get back to the units you started with before squaring everything - this is the standard deviation.
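
A short sketch of this recap in code, using a small set of made-up scores (hypothetical data, not from the lecture):

```python
import numpy as np

scores = np.array([4, 7, 6, 5, 8])                 # hypothetical scores
deviations = scores - scores.mean()                # how much each score differs from the mean
population_variance = (deviations ** 2).sum() / len(scores)      # divide by n
sample_estimate = (deviations ** 2).sum() / (len(scores) - 1)    # divide by n - 1
sd = np.sqrt(sample_estimate)                      # square root brings us back to the DV's units

# numpy gives the same answers (ddof=1 selects the n - 1 denominator)
assert np.isclose(population_variance, np.var(scores))
assert np.isclose(sample_estimate, np.var(scores, ddof=1))
assert np.isclose(sd, np.std(scores, ddof=1))
```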

9
Q

Independent One-Way ANOVA

Between groups variance

A

So with an ANOVA, variance is calculated in the same way but both between the groups and within the groups.

The ‘between groups variance’ is the difference between the groups, and we assume that this is due to the manipulation that we have applied to the groups (i.e. asking them to complete a task under different conditions).

We assume that any differences found here are due to the fact that Ps did different things in each of the groups

This is also known as the ‘treatment effect’
- the effect (difference in performance between groups) we expect to see as a result of manipulation of the IV/ factor

10
Q

REMEMBER…

Elements of error in between participants design

A

If you have a between-participants design, your between-groups variance contains the treatment effect, but it also contains individual differences (because you have different people in each group) and potential sampling error, which, remember, could be the result of sampling from an unintended sample or just of the fact that we use a sampling methodology.

11
Q

Between group variance

Calculation

A
  • You calculate the grand mean by working out the mean of the group means (add up the three group/condition means and divide by 3)
  • The between-groups variance is based on the difference between each group mean and the grand mean
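
A minimal sketch of the grand-mean step with made-up group means, assuming equal group sizes (which is what justifies simply averaging the three means):

```python
# Hypothetical group means for silence, classical and rock (equal group sizes assumed)
group_means = {"silence": 12.0, "classical": 9.0, "rock": 8.5}

grand_mean = sum(group_means.values()) / len(group_means)   # add the three means and divide by 3
for name, mean in group_means.items():
    print(f"{name:9s}: group mean {mean:4.1f}, deviation from grand mean {mean - grand_mean:+.2f}")
# These deviations (squared and weighted by group size) feed into the between-groups variance
```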

There is another form of variance in an ANOVA which is referred to as the within group variance.

12
Q

Within group variance

Calculation

A

There is another form of variance in an ANOVA which is referred to as the within group variance.
- This is the variance between each participant’s score and the mean of the group that participant is in.

In your within groups variance you also have individual differences and potential sampling error.

13
Q

The F-ratio for an Independent ANOVA

A

Calculation:
So in the case of our independent ANOVA we’ve got the treatment effect, individual differences and any sampling error in our between-groups variance.

And we’ve got individual differences and sampling error in our within-groups variance also.

We therefore have individual differences and error included in both sets of variance, but the between-groups variance also contains the treatment effect.

F = between-groups variance / within-groups variance

F = (treatment effect + individual differences + error variance) / (individual differences + error variance)

SO, the numerator should be greater than the denominator; if the value you get when you divide is less than one, there is more error than effect.

So, like the t-value in t-tests, if the F-value is less than one it is telling you that you have more error than effect.

All of these values can in fact be found on our output (“ANOVA”)
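
A compact sketch of the whole calculation with made-up scores (hypothetical data, equal group sizes for simplicity), checked against scipy’s built-in one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three independent groups
silence   = np.array([12, 14, 11, 13, 12])
classical = np.array([ 9, 10,  8, 11,  9])
rock      = np.array([ 8,  9,  7, 10,  8])
groups = [silence, classical, rock]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Between groups: squared deviation of each group mean from the grand mean, weighted by group size
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within groups ("error"): squared deviation of each score from its own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = len(all_scores) - len(groups)

f_ratio = (ss_between / df_between) / (ss_within / df_within)   # MS between / MS within

f_check, p_value = stats.f_oneway(silence, classical, rock)     # scipy agrees with the manual calculation
assert np.isclose(f_ratio, f_check)
print(f"F({df_between}, {df_within}) = {f_ratio:.2f}, p = {p_value:.4f}")
```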

14
Q

Reporting an ANOVA

A

F(between df, within df) = [F value], p = [p value]
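
For example, with values invented purely to illustrate the format: F(2, 27) = 5.32, p = .011, where 2 is the between-groups df and 27 is the within-groups df.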

15
Q

What is the 1-way Independent ANOVA doing?

A

> Tests the null hypothesis that all groups are the same - no difference between groups, no effect in population
Omnibus test (another name for it, because it’s a test of a lot of things at once)
- Is there an overall effect? (more effect than error)
- A significant result (low p-value, F-value greater than 1) indicates that there is a low probability that these differences would be observed if there were no effect in the population

Doesn’t tell us where the differences come from…

16
Q

The ANOVA tells us whether there’s a difference, it doesn’t tell us where this difference is though…

Where can we look?

A

We could look at our error bars

Imagine we’ve got identical mean values for the classical and rock groups but a considerably different mean for our silence group.

It looks likely that there would be a difference between our silence group and the other two groups but not between the rock and classical groups.

But in order to be sure we have to do some follow-up comparisons (e.g. t-tests) – “post hoc tests”.

Remember: if the ANOVA is not significant then no further action is required; if it is significant then follow-up tests should be conducted.

17
Q

Familywise Error Rate

A

Familywise error rate - the fact that running a number of tests increases the chance of error

Solution to this problem: Bonferroni correction
- Divide the alpha (cut off) by the number of comparisons
.05 / 3 ≈ .0167
- Use the new cut off to decide whether to reject the null hypothesis or not

e.g. We could run 3 t-tests to compare each pair of levels and Bonferroni-correct them at an alpha of .0167.
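
A hedged sketch of that follow-up step (hypothetical data; scipy’s independent-samples t-test stands in for whatever software the module uses):

```python
from itertools import combinations
from scipy import stats

# Hypothetical scores for the three levels of the music factor
groups = {
    "silence":   [12, 14, 11, 13, 12],
    "classical": [ 9, 10,  8, 11,  9],
    "rock":      [ 8,  9,  7, 10,  8],
}

corrected_alpha = 0.05 / 3   # Bonferroni correction for three pairwise comparisons

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(a, b)
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f} ({verdict} at alpha = {corrected_alpha:.4f})")
```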

18
Q

Levene’s Test for Equality/ Homogeneity of Variances

A

Levene’s test for equality of variances tells us whether the variances of the groups we are comparing differ significantly or not.

It’s part of the t-test output but NOT part of the ANOVA output.

We want Levene’s test to NOT BE SIGNIFICANT in order to assume that the variances/SDs of the groups are equal.

If Levene’s test is not significant then we read from the top row of the output (ideal); if it is significant then we read from the bottom row.

The p value for the classical vs rock comparison is 1.00 (not significant); this means a difference this size would be seen 100% of the time when there is no effect in the population (i.e. when the null hypothesis is true).

Our Bonferroni-corrected post hoc tests suggest that the comparisons between silence and both rock and classical music are significant (significantly better performance in silence compared to rock and classical music, but no difference in performance between rock and classical music).
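
A minimal sketch of the homogeneity check itself (hypothetical data; scipy’s Levene’s test, which centres on the median by default, stands in for the SPSS-style output described above):

```python
from scipy import stats

# Hypothetical scores for the three groups
silence   = [12, 14, 11, 13, 12]
classical = [ 9, 10,  8, 11,  9]
rock      = [ 8,  9,  7, 10,  8]

w, p = stats.levene(silence, classical, rock)
print(f"Levene's W = {w:.2f}, p = {p:.3f}")
# p > .05 -> variances can be treated as equal (assumption holds)
# p < .05 -> the homogeneity-of-variance assumption is violated
```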

19
Q

Points to note:

A
Assumptions of independent ANOVA:
> Continuous (Scale) dependent variable
> Normal distribution
> No outliers (extreme scores)
> *Equal variance*
20
Q

Assumptions: Equal Variance

Levene’s continued

A

The variances in each of the three groups will differ – the question is whether they differ enough to affect the outcome of the ANOVA.

As you now understand, the ‘error’ variance is calculated by working out the variance within each group, so if these values differ dramatically it will affect the accuracy of the calculation.

Consequently it is important that the variances are roughly equal, and Levene’s test will give you the answer!

…You must check the homogeneity of variances: check the significance value – you want a non-significant result, so that there is no significant difference in the variances (the spread) of the data between the groups.

If it’s significant then you have significant differences in variance between the groups, and so have violated one of the assumptions of the ANOVA test.

…If you do end up with a significant Levene’s test, you can run a non-parametric ANOVA.
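
One common non-parametric alternative for an independent one-way design is the Kruskal-Wallis test; a minimal sketch with the same hypothetical data as above:

```python
from scipy import stats

# Hypothetical scores for the three groups
silence   = [12, 14, 11, 13, 12]
classical = [ 9, 10,  8, 11,  9]
rock      = [ 8,  9,  7, 10,  8]

h, p = stats.kruskal(silence, classical, rock)   # rank-based alternative to the independent one-way ANOVA
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```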