Week 2 Flashcards
(18 cards)
The t statistic
Comparing two means
The main purpose of a t-test is to test whether two group means are significantly (or meaningfully) different from one another
- paired samples
- independent samples
Independent sample t stat
- When there are two experimental conditions and different participants were assigned to each condition
- Otherwise called independent-measures, independent-means, between groups
Paired sample t stat
- When there are two experimental conditions and the same participants took part in both conditions of the experiment
- Otherwise called dependent-means, matched-pairs, repeated measures
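A minimal sketch of how the two variants might be run with SciPy; the scores and variable names below are invented for illustration, not taken from the course materials.

```python
import numpy as np
from scipy import stats

# Independent samples: different participants in each condition (made-up scores)
group_a = np.array([5.1, 6.3, 5.8, 7.0, 6.1])
group_b = np.array([4.2, 5.0, 4.8, 5.5, 4.9])
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Paired samples: the same participants measured in both conditions (made-up scores)
pre  = np.array([5.1, 6.3, 5.8, 7.0, 6.1])
post = np.array([5.9, 6.8, 6.0, 7.4, 6.5])
t_rel, p_rel = stats.ttest_rel(pre, post)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"paired:      t = {t_rel:.2f}, p = {p_rel:.3f}")
```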
Rationale of the t stat
- Two sample means are calculated
- Under the null hypothesis we expect those means to be roughly equal
- We compare the obtained mean difference against the null hypothesis (no difference)
- We use the standard error as a gauge of the random variability expected between sample means
- If the difference between sample means is larger than expected based on the standard error, there are two possible explanations:
- > There is no effect and this difference has occurred by chance (unlikely)
- > There is an effect and the means are meaningfully different (the more plausible explanation)
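A small numeric sketch of this logic, assuming equal group sizes and a pooled standard error; the scores are made up purely for illustration.

```python
import numpy as np

cond_1 = np.array([8.0, 7.5, 9.0, 8.5, 7.0])   # invented scores, condition 1
cond_2 = np.array([6.0, 6.5, 7.0, 5.5, 6.0])   # invented scores, condition 2

mean_diff = cond_1.mean() - cond_2.mean()

# Standard error of the difference: the gauge of random variability expected
# between sample means (pooled variance, equal n assumed)
n = len(cond_1)
pooled_var = (cond_1.var(ddof=1) + cond_2.var(ddof=1)) / 2
se_diff = np.sqrt(pooled_var * 2 / n)

# A large |t| means the observed difference exceeds what chance alone would predict
t = mean_diff / se_diff
print(f"mean difference = {mean_diff:.2f}, SE = {se_diff:.2f}, t = {t:.2f}")
```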
Assumptions of independent t stat
- Level of measurement (DV interval or ratio)
- Random sampling
- Normality
- Homogeneity of variance
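One hedged way to probe the normality and homogeneity-of-variance assumptions is with Shapiro-Wilk and Levene tests in SciPy; the data here are invented and the checks are only illustrative.

```python
import numpy as np
from scipy import stats

group_a = np.array([5.1, 6.3, 5.8, 7.0, 6.1, 5.5, 6.8])   # invented scores
group_b = np.array([4.2, 5.0, 4.8, 5.5, 4.9, 5.2, 4.6])   # invented scores

# Normality: check each group separately
print(stats.shapiro(group_a))
print(stats.shapiro(group_b))

# Homogeneity of variance across the two groups
print(stats.levene(group_a, group_b))
```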
Assumptions of repeated measures t stat
- Level of measurement (DV interval or ratio)
- Random sampling
- Normality
One-way ANOVA
- Comparing several means
- The main purpose of a one-way ANOVA is to compare the means of more than two conditions
E.g., low, medium, and high intensity exercise and mood
- One-way ANOVA tests the hypothesis that 3 or more means will be the same
One-way ANOVA and not multiple t tests
- Familywise error rate (FWER)
- For a single comparison using α = .05, the probability of a type 1 error is 5%
With the addition of another comparison, each using α = .05:
familywise α = 1 − (1 − α)^c, where c is the number of comparisons
             = 1 − (1 − .05)²
             = 1 − (.95)²
             = 1 − .9025
             = .0975
The probability of a type 1 error is almost 10%
- With 3 comparisons (approx. 14% chance of type 1 error)
- With 4 comparisons (approx. 19% chance of type 1 error)
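A minimal sketch that reproduces the familywise error rate figures above for 1 to 4 comparisons at α = .05.

```python
alpha = 0.05
for c in range(1, 5):
    fwer = 1 - (1 - alpha) ** c           # familywise alpha = 1 - (1 - alpha)^c
    print(f"{c} comparison(s): FWER = {fwer:.4f}")
# 1 -> 0.0500, 2 -> 0.0975, 3 -> 0.1426, 4 -> 0.1855
```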
One-way ANOVA details
- The ANOVA produces an F-statistic or an F-ratio
- The F-statistic represents the ratio of the model to its error
- ANOVA is an omnibus test
- > Tests for an overall experimental effect
- Significant F-statistic tells us that there is a difference somewhere between the groups but not where this difference lies
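A hedged sketch of the omnibus test using SciPy's f_oneway, with made-up mood scores for the low/medium/high exercise example; a significant F here would only say that some difference exists among the three means, not where it lies.

```python
import numpy as np
from scipy import stats

low    = np.array([4.0, 3.5, 4.2, 3.8, 4.1])   # invented mood scores
medium = np.array([5.0, 5.5, 4.8, 5.2, 5.1])
high   = np.array([6.1, 5.8, 6.4, 6.0, 6.2])

f_stat, p_value = stats.f_oneway(low, medium, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```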
F test
F = Variability between groups / Variability within groups
which is equal to
F = (Random Error + Treatment Effect) / Random Error
-> If the null hypothesis is true, the treatment effect will be 0, so F will be approximately 1
-> As the treatment effect increases, F increases as well
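A small simulation sketch of the first point above: when all groups really come from the same population (a treatment effect of 0), the F-ratio averages roughly 1. The group sizes, seed, and number of simulations are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
f_values = []
for _ in range(2000):
    # Three groups of 20 drawn from the same population: the null is true
    g1, g2, g3 = rng.normal(0, 1, size=(3, 20))
    f_values.append(stats.f_oneway(g1, g2, g3).statistic)

print(f"mean F under the null ≈ {np.mean(f_values):.2f}")   # close to 1
```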
Mean squares in a one-way ANOVA
Mean Squares
- Calculated to eliminate the bias associated with the number of scores used to calculate SS
MS_B = SS_B / df_B
MS_W = SS_W / df_W
F-ratio calc
F = MS_B / MS_W
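A minimal worked example of these two formulas; the sums of squares and degrees of freedom below are invented numbers, used only to show the arithmetic.

```python
SS_B, df_B = 30.0, 2     # between-groups sum of squares, df = k - 1 (k = 3 groups assumed)
SS_W, df_W = 54.0, 27    # within-groups sum of squares, df = N - k (N = 30 assumed)

MS_B = SS_B / df_B       # 15.0
MS_W = SS_W / df_W       # 2.0
F = MS_B / MS_W          # 7.5
print(f"MS_B = {MS_B}, MS_W = {MS_W}, F = {F}")
```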
One-way ANOVA assumptions
- Level of measurement
- Random sampling
- Independence of observations
- Normal distribution
- Homogeneity of variance
Level of measurement assumption
Dependent variable must be measured at the interval or ratio level
Random sampling assumption
Scores must be obtained using a random sample from the population of interest
Independence of observations assumption
- The observations that make up the data must be independent of one another
- Violation of this assumption is very serious as it dramatically increases the Type 1 error rate
Normal distribution assumption
- The populations from which the samples are taken are assumed to be normally distributed
- Need to check this for each group separately in one-way ANOVA
Homogeneity of variance assumption
- Samples are obtained from populations of equal variances
- ANOVA is fairly robust to violations of this assumption, provided the group sizes are reasonably similar