7. F-Distribution - Assumptions of ANOVA


Flashcards in 7. F-Distribution - Assumptions of ANOVA Deck (53):

What does the Null hypothesis tell us?

the proposition that all treatment means are equal

i.e. the IV has no effect on the DV


what does the alternative (research) hypothesis tell us?

the proposition that at least one treatment mean differs from another treatment mean

i.e. the IV does have an effect on the DV


what is a sampling error

when sample means differ even though the null hypothesis is true (i.e. the samples were drawn from populations with identical means)


what should F equal when the null hypothesis is true?

approximately 1 (the between-groups and within-groups variances then both estimate only error variance); however this may not always hold due to sampling error


what does simulation study involve?

repeatedly drawing sets of random samples of the same n from a single population, then calculating F for each test (this is called a Monte Carlo simulation)
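This unit works in SPSS, but the simulation idea can be sketched in a few lines of Python (assuming NumPy and SciPy; the group size, population parameters and number of simulations below are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
k, n, n_sims = 3, 10, 5000          # 3 groups of n = 10, 5000 simulated "experiments"

# Draw all k samples from ONE population, so the null is true by construction,
# then compute the one-way ANOVA F for each simulated experiment.
f_values = []
for _ in range(n_sims):
    groups = [rng.normal(loc=100, scale=15, size=n) for _ in range(k)]
    f, _ = stats.f_oneway(*groups)
    f_values.append(f)
f_values = np.array(f_values)

# Under the null, F values cluster around 1 but spread out due to sampling error;
# roughly alpha (5%) of them exceed the critical value F(.95; k-1, k(n-1)).
f_crit = stats.f.ppf(0.95, k - 1, k * (n - 1))
type_i_rate = np.mean(f_values > f_crit)
```

Even with the null true in every simulated experiment, about 5% of the F values land beyond the critical value — which is exactly the Type I error rate discussed below.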


what is the alpha level?

the probability level we adopt for concluding that it is unlikely that sampling error caused the observed difference in means


what is the conventional alpha level

α = 5% (.05)


what do we conclude if we find that F_observed is greater than our F_critical?

then we conclude that there is a significant difference between at least two of the treatment means, and thus reject the null hypothesis
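The decision rule can be sketched in Python (SciPy assumed; the design, degrees of freedom and F_observed value here are hypothetical):

```python
from scipy import stats

# Hypothetical design: k = 3 groups, n = 10 per group -> df_between = 2, df_within = 27
df_between, df_within = 2, 27
alpha = 0.05

# F_critical cuts off the top alpha (5%) of the F-distribution for these df
f_crit = stats.f.ppf(1 - alpha, df_between, df_within)

# Decision rule: reject H0 if F_observed exceeds F_critical
f_observed = 4.21                   # made-up example value
reject_h0 = f_observed > f_crit
```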


what is the process of hypothesis testing based on?

probability, not certainty, and we will therefore sometimes make errors


how often will we incorrectly reject the null hypothesis?

α% of the time (e.g. 5% of the time when α = .05)


what is a type 1 error

when we reject the null hypothesis even though the null is in fact true


what is the type II error rate?

the probability of getting an F value less than the F critical when null is false.


how are type II errors signified?

β - beta


what happens when we lower the type I error rate?

we suffer an increase in the type II error rate (and vice versa)


what are the three assumptions of Individual groups ANOVAs?

1. independence of observations
2. normality of distributions
3. homogeneity of variance


what happens when we break assumptions of ANOVAs?

breaking an assumption will lead to either an inflated or deflated estimate of the true BG or WG variability, and will thus inflate or deflate the obtained F-value (F = BG/WG)


what does the independence assumption state?

that it is not possible to predict one score in the data from any other score.


how can this assumption be adequately met in a between groups experimental design?

- random assignment of participants to groups (levels of the IV)
- random selection of participants from the population of interest
- each participant contributes only one score to the analysis (which may be the mean of many observations)
- each participant's score is independent - i.e. not influenced by any other participant's score


what does the normality assumption state

the normality assumption states that the samples are drawn from normally distributed populations and that the error component is normally distributed within each treatment group (level of IV)


What is robust to breaches of the normality assumption?

the ANOVA F-test itself - it is fairly robust to breaches of normality, provided certain conditions are met

what are the conditions for an ANOVA to be robust to breaches of the normality assumption?

- there are similar numbers of participants in each condition
- there are at least 10-12 participants in each condition
- the departure from normality (skewness or kurtosis) is similar in each condition


how do we determine whether the assumption of normality is breached?

can either:
- inspect frequency histograms for each experimental condition
- calculate skewness statistics
- calculate kurtosis statistics
- use SPSS


what do measurements of skewness indicate about normality of a distribution?

0 = normally distributed data (or any symmetrically distributed data)
positive values = positively skewed distribution; negative values = negatively skewed distribution


what do measures of kurtosis indicate about normality of a distribution?

3 = normally distributed data (or any distribution that does not have more outliers than a normal distribution). Note that many statistics packages report excess kurtosis (kurtosis minus 3), for which a normal distribution = 0
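Both statistics are easy to compute outside SPSS too; a sketch with SciPy on a made-up, roughly normal sample (note SciPy's default is excess kurtosis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=50, scale=10, size=5000)   # made-up, roughly normal data

skew = stats.skew(sample)                          # ~0 for symmetric data

# scipy reports *excess* kurtosis by default (normal = 0);
# pass fisher=False to get the raw kurtosis where normal = 3, as on this card
kurt_raw = stats.kurtosis(sample, fisher=False)
```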


what is a less common method of fixing problems with normality?

transformation of the data


what is an outlier

an extreme score at one or both ends of our distribution


what can outliers do to our ANOVA?

they can influence our results, as ANOVA is based on a ratio of between- to within-group variance. Our variance measure is based on deviations from the mean (the SD), so an outlier can inflate our measure of variance, and it can also distort the mean itself


what are the solutions to problems of outliers

- remove from the data
- transform the data to remove the influence of the outliers
- use a non parametric test
- bootstrapping techniques
- run analysis with and without outliers and see if they affect results. If not report this and report ANOVA as usual (use with caution)


what is a way to transform the data to remove an outlier?

using the winsorized samples technique


what does the winsorized samples technique do?

it replaces extreme scores with the most extreme score left in the tail of the distribution


original data:

3, 7, 12, 15 ... 32, 33, 50, 75

winsorized data:

3, 7, 12, 15 ... 32, 33, 33, 33
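Assuming SciPy is available, the deck's example can be reproduced with `scipy.stats.mstats.winsorize` (limits=(0, 0.25) replaces the top two of these eight scores, leaving the lower tail untouched):

```python
import numpy as np
from scipy.stats.mstats import winsorize

data = np.array([3, 7, 12, 15, 32, 33, 50, 75])

# Replace the top 25% of scores (here the two outliers, 50 and 75) with the
# most extreme score left in the tail of the distribution (33)
wins = winsorize(data, limits=(0, 0.25))
```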


what is the homogeneity of variance assumption

As MS_within is a pooled error term, we need to ensure that the variance within each of the treatment conditions / groups is similar.


what is the rule of thumb for variance sizes?

the largest variance should be no more than 4 times the smallest variance


what makes breaches of the homogeneity of variance assumption worse?

very unequal cell sizes


what can breaches of homogeneity of variance assumptions impact?

type 1 error rate


what is the test for breaches of homogeneity of variance on SPSS?

Levene's Test
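SPSS reports Levene's test automatically with a one-way ANOVA; outside SPSS, SciPy offers the same test. A sketch with made-up groups, one of which has a much larger variance (center='mean' gives the classic Levene statistic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three hypothetical treatment groups; group 3's SD is 5x larger (variance 25x larger)
g1 = rng.normal(50, 5,  size=30)
g2 = rng.normal(50, 5,  size=30)
g3 = rng.normal(50, 25, size=30)

stat, p = stats.levene(g1, g2, g3, center='mean')
# a significant result (p < alpha) flags a breach of homogeneity of variance
```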


how can we deal with breaches of homogeneity of variance?

1. If you have equal cell sizes and breach is minor (i.e. largest group variance less than 4 × smallest), you can run an ANOVA as it is robust to minor breaches of homogeneity
2. Run the ANOVA but use a lower alpha level to control for the possible impact on the type I error rate
3. Use an alternate statistical test which does not have the homogeneity assumption (e.g. nonparametric test)
4. Transform the data to remove the heterogeneity and run the ANOVA on the transformed data
5. New computer intensive methods such as bootstrapping (not covered in this unit).


what does lowering the alpha level do?

it reduces the type I error rate, and thus the effect of the breach of homogeneity of variance can be reduced


what are parametric tests?

tests like t and F which make assumptions about the distribution of scores


what are nonparametric or distribution free tests?

tests that have less restrictive assumptions about the distributions used


how do most nonparametric tests work?

by converting each score to a rank, thus removing the distributional assumptions associated with parametric inferential statistics

the ranks are spread out evenly so the shape of the distribution will always be rectangular (=no normality assumption and no problems with outliers)

there are specific rank-order tests for various hypothesis-testing situations

in rank-order tests we are comparing mean ranks rather than mean scores to test the hypothesis that the samples are drawn from identical populations. Hence medians rather than means are more appropriate when describing group differences
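The rank conversion itself can be sketched with SciPy (the raw scores below are made up; tied scores share the average of the ranks they occupy):

```python
from scipy.stats import rankdata

scores = [23, 101, 47, 47, 12]      # made-up raw scores with one tie

# each score is replaced by its rank; the two 47s share ranks 3 and 4,
# so each receives the average rank 3.5
ranks = rankdata(scores)
```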


what statistic does the Kruskal-Wallis test use?

chi square


what does a significant chi square indicate?

a significant difference between at least two of the groups (because the obtained p-value is smaller than the significance level, α)
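A minimal Kruskal-Wallis sketch in SciPy, with three small made-up groups whose scores do not overlap at all (the H statistic is evaluated against a chi-square distribution with k − 1 df):

```python
from scipy import stats

# Three hypothetical groups with completely separated scores
g1, g2, g3 = [1, 2, 3], [10, 11, 12], [20, 21, 22]

h, p = stats.kruskal(g1, g2, g3)
# H is referred to a chi-square distribution with k - 1 = 2 df;
# the group difference is significant when p is smaller than alpha (.05)
```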


what does data transformation involve?

performing an identical mathematical operation on all the scores.


what do the transformations do to the distribution of scores

it changes the shape of the distribution of scores


what can a suitable transformation do?

reduce heterogeneity of variance
achieve normality


what are some types of common transformations?

- Logarithmic (strong skews, outliers and breaches of homogeneity)
- Square-root (positive skew)
- Reciprocal or reflect (negative skew)
- Trimmed samples (outliers or heavy-tailed kurtosis)


what do logarithmic transformations do to the size of values?

Log transformation compress large values but have less effect on small values.


what will logarithmic transformation do to the spread of scores

it will reduce the spread of scores, thus reducing skewness and decreasing the variability in the samples with large SDs
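This compression is easy to see on made-up, strongly positively skewed data (lognormal scores, as reaction times often are; NumPy/SciPy assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Lognormal data: strongly positively skewed, with some large scores
raw = rng.lognormal(mean=3.0, sigma=0.8, size=500)

logged = np.log(raw)                # all scores must be positive to take logs

skew_before = stats.skew(raw)       # large positive skew
skew_after = stats.skew(logged)     # near 0: the largest values are compressed most
```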


what are the steps in doing a transformation?

1. Identify problem (heterogeneity or skewness).
2. Find the transformation which minimises this problem. Check the assumption again on the transformed data.
3. Do not look for the transform which maximises F, but the one that minimises the assumption breach
4. Perform ANOVA using these transformed scores as the DV.
5. Transformed data is harder to interpret because it is no longer expressed in the original units of measurement.
6. Only do a transform when absolutely necessary.
7. You can run ANOVA on transformed and original data, and if you get the same result, report the original data as they are easier to interpret.


what are the advantages of data transformations?

can use parametric techniques on transformed scores


what are the disadvantages of data transformations?

• May not successfully normalize scores.
• Can distort meaning of data and result in loss of information.
• Risk of Type I and Type II errors (and hence power) is unclear.


what are the advantages of non parametric tests?

• Can use regardless of the shape of the original distributions.
• No parameter estimations so no assumptions.


what are the disadvantages of non parametric tests?

• Cannot be used for many complex situations.
• Logic of ranks does not always work when there are many "tied" scores.
• Can distort meaning of data and result in loss of information.
• Risk of Type I and Type II errors (and hence power) is unclear.