Final Exam Flashcards

1
Q

Why Analysis of Variance (ANOVA) is needed to examine differences between multiple group means

A

To control the family-wise error rate when comparing three or more group means.
Running multiple separate tests inflates the chance of a Type 1 error, i.e. rejecting the null hypothesis when it is actually true.

2
Q

Understand the philosophy underlying ANOVA

A

Can help to compare multiple treatments and measure effect size

3
Q

Including terminology such as between-subjects and familywise error rate

A

Between-Subjects = different groups of participants are exposed to different levels of the IV
Family-Wise Error Rate = the probability of making one or more Type 1 errors (rejecting the null when it is true) across a set of comparisons

4
Q

Know the assumptions that underpin ANOVA

A
  • Normality: checked with histograms, Q-Q plots, and skewness & kurtosis
  • Homogeneity of Variance
5
Q

Be able to interpret post-hoc testing in SPSS

A
  • Bonferroni adjustment
  • A priori (planned) comparisons
  • Calculate effect size
6
Q

t-tests

A
  • Answer questions comparing the means of two groups on a dependent variable
  • Is there a significant difference between two group means?
  • Independent samples test
    e.g. rehydration vs carbohydrates
    or any pair drawn from rehydration, carbohydrates, physio, or the combined treatment
7
Q

Problem with multiple comparisons (t-tests)

A
  • The error rate per comparison accumulates across tests, increasing the risk of a Type 1 error
  • That is, rejecting the null hypothesis when it is true
  • Familywise error rate:
  • The probability of making one or more Type 1 errors across the set of comparisons (see the sketch below)
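A minimal Python sketch (not from the original deck) of how the familywise error rate grows with the number of independent comparisons at alpha = .05:

    # FWER = 1 - (1 - alpha)^m for m independent comparisons at alpha = .05
    alpha = 0.05
    for m in (1, 3, 6, 10):
        fwer = 1 - (1 - alpha) ** m
        print(f"{m} comparisons -> familywise error rate = {fwer:.3f}")

With 6 comparisons (all pairs of 4 groups) the familywise error rate is already around .26, far above the intended .05.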
8
Q

Familywise Error Rate

A

The probability of making one or more Type 1 errors in a set of comparisons

9
Q

Type 1 error

A
  • Rejecting the null hypothesis when it is true
  • In other words, we claim something significant is happening when in fact the null hypothesis is true.
10
Q

Alpha Value

A

α = 0.05
* An acceptable level of error
* There is a 5% chance of making a Type 1 error
* Connected to the p value
* The p value is the probability of finding a difference of the observed magnitude (or larger) if the null is true

11
Q

Analysis of Variance

A
  • ANOVA
  • A widely used statistical procedure in psychology
  • Guards against familywise error
  • Can analyse differences between more than two means
  • A t-test can only compare two
12
Q

One way Between-Groups ANOVA

A
  • Tells you whether there are significant differences in DV means across 3 or more groups
  • Invented by Sir Ronald Fisher (hence the F statistic)
  • Post-hoc tests can be used to find where the differences are
13
Q

When to use One-Way Between-Groups ANOVA

A
  • 1 Dependent variable is continuous
  • 1 Independent variable has three or more levels
  • Different participants in each level of the IV
    e.g. different football players in different groups
14
Q

Dependent variable is continuous

A

A variable measured on a scale with many possible values (e.g. recovery time), rather than a small set of categories.

15
Q

Benefits of Between Groups Anova

A

It's possible to reduce the practice effect.
It's possible to reduce the carry-over effect.
e.g. physio could have carry-over effects, or repeated testing could change scores as subjects practise the same test

16
Q

Advantages of ANOVA

A

Can be used in a wide range of experimental designs:
* Independent groups
* Repeated measures
* Matched samples
* Designs involving mixtures of independent groups and repeated measures
* More than one IV can be evaluated at once

17
Q

Independent Groups Design

A

Between-Groups Design

18
Q

Repeated Measures

A

Within Groups Design
Same people in each level of the IV
e.g. Each person does physio, carbs and rehydrate

19
Q

Matched Samples

A
  • Control for confounding variables
  • Matching people on characteristics that might be important
    e.g. age and experience could influence recovery time from sport activities, so match participants on fitness and age to get fair comparisons
20
Q

Mixed Design ANOVA

A
  • Involves independent (between-subjects) groups
  • Involves repeated-measures samples
  • Combination of a between-unit ANOVA and a within-unit ANOVA
  • Requires a minimum of two IVs, called factors
  • At least one factor has to vary between units and at least one has to vary within units
21
Q

Adjusted Factorial ANOVA

A
  • More than one IV evaluated at a time
  • Much more sophisticated
    e.g. evaluating recovery method and a second IV in the same analysis
22
Q

4 Assumptions of Between-Groups ANOVA

A
  1. DV must be continuous
  2. Independence: each participant's scores must not be influenced by any other participant
  3. Normality: each group of scores should have a normal distribution (no extreme outliers)
  4. Homogeneity of Variance: approximately equal variability in each group
23
Q

How to Check for Normality

A
  • Kolmogorov-Smirnov/Shapiro-Wilk: p > 0.05
  • Skewness & Kurtosis
  • Histograms
  • Detrended Q-Q Plots
  • Normal QQ Plots
24
Q

Kolmogorov-Smirnov/Shapiro-Wilk

A
  • Shapiro-Wilk: small samples
  • Kolmogorov-Smirnov: larger samples
  • We want p > 0.05, i.e. a non-significant result showing little deviation from normality
  • A significant result (p < 0.05) suggests the data may need to be transformed (see the sketch below)
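A rough Python equivalent of these SPSS normality checks, sketched with scipy.stats on hypothetical scores:

    import numpy as np
    from scipy import stats

    scores = np.array([72, 75, 71, 78, 74, 76, 73, 77, 70, 79])  # hypothetical group scores

    # Shapiro-Wilk (suited to small samples); p > .05 suggests normality holds
    w_stat, p_sw = stats.shapiro(scores)

    # Kolmogorov-Smirnov against a normal with the sample mean and SD (larger samples)
    ks_stat, p_ks = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))

    print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_sw:.3f}")
    print(f"Kolmogorov-Smirnov: D = {ks_stat:.3f}, p = {p_ks:.3f}")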
25
Q

Skewness & Kurtosis

A
  • If the z score of the statistic is within ±1.96, the distribution is treated as normal
  • z score = Statistic / Std. Error (see the sketch below)
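A tiny Python illustration of the z-score check; the Statistic and Std. Error values here are hypothetical stand-ins for the SPSS output:

    # z = Statistic / Std. Error; values within +/-1.96 are treated as acceptably normal
    skew_statistic = 0.42   # hypothetical SPSS "Statistic" value for skewness
    skew_std_error = 0.34   # hypothetical SPSS "Std. Error" value

    z = skew_statistic / skew_std_error
    print(f"z = {z:.2f}, within +/-1.96: {abs(z) < 1.96}")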
26
Q

Histograms

A
  • Should follow the bell curve if the data are normal
  • Similar to a bar graph
  • Condenses data into a simple visual
  • Takes data points and groups them into logical ranges (bins)
27
Q

Detrended Q-Q Plots

A
  • Horizontal line representing what would be expected if the data were normally distributed
  • Normality is demonstrated by roughly equal numbers of dots above and below the line
28
Q

Normal Q-Q Plots

A
  • Plots data against expected normal distribution
  • Normality is demonstrated by dots hugging the line.
29
Q

Null Hypothesis

A
  • Nothing to see here.
  • no significant difference between the averages
  • Any deviation in our sample is due to sampling error or chance.
30
Q

Alternative Hypothesis - (H1)

A

(H1): At least one of the means is different from the rest.

31
Q

Testing The Assumptions

A
  1. Convert skewness and kurtosis into z scores; they should be within ±1.96
  2. z score = Statistic / Std. Error
  3. Confidence intervals: the accuracy of the estimate, give or take the upper and lower bounds
32
Q

Standard Error

A

Used to convert skewness and kurtosis into z scores (Statistic / Std. Error); the resulting z score should be within ±1.96

33
Q

5% trimmed mean

A
  • Not so important for ANOVA or the t-test
  • Removes outliers
  • If the 5% trimmed mean is different from the mean, the data contain influential scores or outliers
34
Q

Median

A
  • A better reflection of the average if the data are not normal
  • If it is the same as the mean and 5% trimmed mean, this indicates a unimodal distribution (single hump)
  • If different, it could indicate a multimodal distribution
35
Q

Uni Modal Distribution

A

When mean, 5% trimmed mean and median are the same number

36
Q

Variance

A
  • Gives an indication of variability in scores
  • Measure of the spread, or dispersion, of scores
  • Small variance indicates the scores are similar
  • Large variance indicates a larger spread around the mean
  • The standard deviation is calculated from the variance (its square root)
37
Q

Standard Deviation

A
  • Measure of variability reported alongside the mean
  • Gives an indication of the distribution of scores in each group
  • Used in our APA write-up
38
Q

Interquartile Range

A
  • The range between the 25th and 75th percentiles
  • Reported around the median
39
Q

z scores

A
  • Convert Skewness & Kurtosis into z scores
  • Divide the Statistic Result by the Std. Error result
  • Then compare to the critical value of ±1.96
40
Q

Testing Assumption of Normality

A
  • Kolmogorov-Smirnov - p > .05 - large samples
  • Also Shapiro-Wilk - p > .05 - small samples
    * SPSS reports the test statistic (W) and a Sig. value
  • The Sig. value is the p value
  • SPSS labels the p value as Sig.
41
Q

Transform Data

A

When data do not meet the assumption of normality, the data can be transformed (see the sketch below)
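A brief Python sketch of two common transformations (hypothetical positively skewed data; the deck does not prescribe a particular transform):

    import numpy as np

    scores = np.array([2.0, 3.0, 3.5, 4.0, 5.0, 9.0, 15.0, 30.0])  # positively skewed

    log_scores = np.log10(scores)   # log transform (requires positive values)
    sqrt_scores = np.sqrt(scores)   # milder square-root transform

    print(log_scores.round(2))
    print(sqrt_scores.round(2))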

42
Q

Outlier

A
  • A data point more than 3 standard deviations from the mean (see the sketch below)
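A minimal Python sketch of flagging scores more than 3 standard deviations from the mean (hypothetical data):

    import numpy as np

    scores = np.array([72, 75, 71, 78, 74, 76, 73, 77, 70, 110])

    z_scores = (scores - scores.mean()) / scores.std(ddof=1)
    print(scores[np.abs(z_scores) > 3])   # flagged outliers (can be empty in small samples)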
43
Q

Anova Example - Statistical Question

A

“Is there a significant difference among the average recovery rates of the four recovery methods? If so, what is the source of that significant difference?”

44
Q

Anova Example - Null Hypothesis

A
  • μ (mu) = the population mean of each group
  • If null is true there are no significant differences in the means of the groups
  • Any differences would be due to sampling error or chance
45
Q

Running an ANOVA

A

SPSS steps:
1. Analyse
2. Compare Means
3. One-way ANOVA
4. Place the DV in the Dependent List section
5. Place the IV in the Factor section
6. OK
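The same analysis sketched in Python rather than the SPSS menus, using scipy.stats.f_oneway on hypothetical group scores:

    from scipy import stats

    rehydration   = [73, 70, 75, 72, 74, 71]
    carbohydrates = [77, 79, 76, 78, 75, 80]
    combined      = [85, 88, 84, 87, 86, 89]

    f_stat, p_value = stats.f_oneway(rehydration, carbohydrates, combined)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")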

46
Q

Interpreting ANOVA

A
  • First thing is to find N
  • Then the mean for each group
  • Do differences happen due to sampling error?
  • Or are they indicative of the population?
  • Next find the standard deviation
  • Each mean and standard deviation is used in the APA write-up
47
Q

Homogeneity of Variance

A
  • Levene's test
  • Its null hypothesis is that all groups have equal variance
  • Part of the ANOVA output
  • We want p > .05 so that this null hypothesis is retained
  • ANOVA works on means, so use the top ('Based on Mean') row (see the sketch below)
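An equivalent Levene's test sketched in Python with scipy.stats.levene; center="mean" mirrors the 'Based on Mean' row, and the group scores are hypothetical:

    from scipy import stats

    rehydration   = [73, 70, 75, 72, 74, 71]
    carbohydrates = [77, 79, 76, 78, 75, 80]
    combined      = [85, 88, 84, 87, 86, 89]

    stat, p_value = stats.levene(rehydration, carbohydrates, combined, center="mean")
    print(f"Levene statistic = {stat:.2f}, p = {p_value:.3f}")  # p > .05 -> equal variances assumed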
48
Q

Anova Table

A
  • Sum of Squares
  • Divided by Degrees of Freedom
  • Gives us our Mean Square
  • Mean Square gives us the Average Variability
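A worked Python sketch of the table's arithmetic; the sums of squares are hypothetical but chosen to echo the deck's F(3, 96) example:

    ss_between, df_between = 4200.0, 3     # between-groups row
    ss_within,  df_within  = 2400.0, 96    # within-groups (error) row

    ms_between = ss_between / df_between   # Mean Square = Sum of Squares / df
    ms_within  = ss_within / df_within

    f_stat = ms_between / ms_within        # F = between-groups MS / within-groups MS
    print(f"MS_between = {ms_between:.1f}, MS_within = {ms_within:.1f}, F = {f_stat:.2f}")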
49
Q

The F Statistic

A
  • In and of itself is not all that meaningful
  • F ≈ 1 means the variability between groups is no larger than the variability within groups
  • We want more variability between the groups than within the groups
  • A high F is good, judged in relation to the degrees of freedom
  • The p value tells us whether F is significant; if so, reject the null
    * e.g. p < .001
50
Q

Omnibus Test

A
  • F Statistic is Omnibus Test
  • Tells you something significant is happening
  • Doesn’t tell you where the difference is
51
Q

Post-Hoc Tests

A
  • Tests carried out after finding a significant overall ANOVA
  • Locate the source of the significant F
  • Are exploratory (cf. planned comparisons)
  • The overall ANOVA must be significant to justify post-hoc tests
52
Q

Follow up Significant ANOVA

A

Use post-hoc tests to control familywise error rate

53
Q

Bonferroni Adjustment α/n

A
  • Adjust the significance level for each comparison
  • Alpha divided by the number of comparisons: α/n
  • The acceptable p value for each comparison becomes α/n (see the sketch below)
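A one-line Python illustration of the adjustment (the number of comparisons is just an example):

    alpha = 0.05
    n_comparisons = 6                      # e.g. all pairs of 4 groups: 4*3/2 = 6
    adjusted_alpha = alpha / n_comparisons
    print(f"Each comparison must reach p < {adjusted_alpha:.4f}")  # 0.0083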
54
Q

Some post-hoc tests for a significant ANOVA

A
  • Tukey’s Honestly Significant Difference Test (Tukey’s HSD)
  • Bonferroni’s test
  • Scheffe test
  • Newman-Keuls test
  • Fisher’s Least Significant Difference Test (Fisher’s LSD)
55
Q

Two most common post-hoc Tests

A
  • Tukey’s Honestly Significant Difference Test (Tukey’s HSD)
  • Bonferroni’s test
56
Q

Multiple Comparisons

A
  • Bonferroni post-hoc tests
  • Look in the Sig. column for significant differences
  • Significant at less than 0.05
  • p < .05
  • In this example, rehydration vs carbohydrates is not significant
57
Q

Following up Significant ANOVA

A
  • Return to Descriptives Box
  • Interested in Standard Deviation and Mean values to tell the story of significance
58
Q

Calculate Effect Size

A
  • Psychology is moving away from focusing on the p value
  • Heading towards confidence intervals and effect sizes
  • The effect size for a one-way ANOVA is called eta-squared (η²)
  • Eta-squared is similar to the r value (correlation coefficient); see the sketch below
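A quick Python sketch of the eta-squared calculation, reusing the hypothetical sums of squares from the ANOVA-table sketch above:

    ss_between = 4200.0
    ss_within  = 2400.0
    eta_squared = ss_between / (ss_between + ss_within)   # SS_between / SS_total
    print(f"eta-squared = {eta_squared:.2f}")             # about .64, a large effect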
59
Q

Write up ANOVA in APA Format

A
  • A one-way between-subjects ANOVA revealed an overall significant difference in recovery rates between the four types of recovery, F(3, 96) = 56.20, p < .001, η² = .64.
  • Post-hoc analyses using the Bonferroni adjustment found that the combined intervention had significantly higher recovery rates compared to all other interventions (all p < .001).
  • The physio group had significantly lower recovery rates compared to the other interventions (all p < .001).
  • There was no significant difference between the rehydration (M = 73.04, SD = 7.29) and the carbohydrates group (M = 77.36, SD = 7.09), p = .174.
  • Means and standard errors are presented in Figure 1.
60
Q

Write up ANOVA in APA Format- step Guide

A
  • Write what test you have done – one-way between-subjects ANOVA
    F statistic to two decimals
    p value to three decimals (give the exact p value)
    eta-squared statistic (η²) to two decimals
  • Write what post-hoc test you have done – Bonferroni adjustment
  • Report all p values, both the significant ones and the non-significant ones
  • State the direction of the difference and the group means (higher or lower)
  • Present a figure or table
61
Q

Assumptions to Meet - Between Groups Anova

A
  • Test for Normality
  • Observe Q-Q plots, Histograms
  • Skewness & Kurtosis
  • Homogeneity of Variance - Levene's test
62
Q

Between-groups Design

A
  • Different participants exposed to each level of the IV
  • Limits practice effects, carry-over effects and fatigue, and minimises attrition
  • Associated with issues of individual variability
  • More than two groups
63
Q

Repeated Measures

A
  • Within-subjects
  • Each participant is exposed to all the treatments
64
Q

One-way ANOVA

A
  • Tells whether there are differences in mean scores on the DV across 3 or more groups
  • Invented by Sir Ronald Fisher - F statistic
  • Post-hoc tests can be used to find out where the differences are
65
Q

Null Hypothesis

A
  • Usually denoted by letter H with subscript ‘0’
  • There is no significant difference between the means of various groups
66
Q

Alternative Hypothesis

A

At least one of the means is different from the rest

67
Q

Factor

A

The Independent Variable
One-way = single-factor = One independent variable
e.g. the type of treatment

68
Q

Between-Subjects

A
  • Independent groups
  • Each group contains different participants from the other groups
  • e.g. comparing male, female and intersex participants
69
Q

Within Subjects Group

A
  • Dependent groups
  • One group of participants exposed to all levels of the independent variable
70
Q

Examine and Compare

A
  • Indicates there will be a t-test or an ANOVA
  • Two groups = t-test
  • 3 or more groups = ANOVA
71
Q

Familywise Error

A
  • The more t-tests we do the greater the risk of error
  • ANOVAs guard against familywise errors
72
Q

Repeated-measures ANOVA

A
  • Can analyse differences between means from same group of participants
  • If overall F is significant then run post-hoc analyses
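A repeated-measures ANOVA can also be sketched in Python with statsmodels' AnovaRM; the long-format data and column names below are hypothetical:

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # One row per participant per recovery method
    data = pd.DataFrame({
        "subject":  [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "method":   ["rehydration", "carbs", "combined"] * 4,
        "recovery": [70, 75, 85, 72, 78, 88, 71, 76, 84, 74, 79, 87],
    })

    result = AnovaRM(data, depvar="recovery", subject="subject", within=["method"]).fit()
    print(result.anova_table)   # F value, degrees of freedom and p for the within-subjects factor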
73
Q

Statistical Question

A
  • Is there a statistically significant difference among the group means?
  • Different treatments completed by the same group of subjects
74
Q

Benefits of Repeated Measures

A
  1. Sensitivity
  2. Economy
75
Q

Repeated Measure - Sensitivity

A
  • A source of error is removed
  • There are no individual differences between conditions when the same subjects are in each group
  • By removing this variance, the analysis becomes more powerful at identifying experimental effects
76
Q

Repeated Measure - Economy

A
  • Research often constrained by time and budget
  • Fewer subjects required to get the same data
77
Q

Problems with Repeated Measures

A
  1. Drop-out
  2. Practice/Order/Carry-over Effects
78
Q

Repeated Measures Drop Out

A
  • Participants may withdraw for many different reasons
  • If we miss even one score all data for that subject has to be removed
  • Researchers should state what the drop out rate is
79
Q

Repeated Measures Practice/Order/Carry-Over Effects

A
  • Receiving one type of treatment can make subsequent treatments easier
  • May cause varied performance
  • What happens at beginning might affect what happens at the end of research
  • We can use counterbalancing to get around this
80
Q

Assumptions - Tests for Sphericity

A
  • With independent t-tests and between-subjects ANOVAs we look for homogeneity of variance
  • With paired-samples t-tests and within-subjects ANOVAs we want the variances of the differences between conditions to be equal
  • Mauchly’s Test for Sphericity
  • Sphericity = equality in the variances of the differences
81
Q

Mauchly’s Test for Sphericity

A
  • Equality in variances of differences
  • Tested using Mauchly’s Test
  • p < .05, assumption of sphericity has been violated
  • p > .05, assumption of sphericity has been met
82
Q

Looking at Dataset

A

Each row is a participant and each column is the IV Condition

83
Q

SPSS - Repeated Measures Within ANOVA

A

1. Analyse
2. General Linear Model
3. Repeated Measures

4. Type the factor (IV) name (Recovery_Methods) and number of levels (3)
5. Add
6. Define

84
Q

SPSS Within-subjects ANOVA

A
  1. Replace ? marks!
    * One at a time drag each group to Within-Subjects Variables window
    * Place them in order
    * Click EM Means
85
Q

EM Means

A

8. EM Means
* Drag the IV (recovery method) into the “Display Means For” window
9. Continue
10. OK

86
Q

Descriptive Statistics - Within Subjects Anova

A
  • Sample Sizes for each of the conditions
  • Standard Deviations
  • Means
87
Q

Tests for Sphericity

A
  • Mauchly’s Test of Sphericity
  • No need to report df or other statistics
  • Only state the test used and the p value
  • Say whether the assumption was violated or not
  • p > .05 means the assumption has been met
88
Q

Interpret Within-Subjects Effects

A
  • If sphericity has been met we use the first row (Sphericity Assumed)
  • If sphericity is violated we can use the Greenhouse-Geisser or Huynh-Feldt corrections
89
Q

Within-Subjects ANOVA Degrees of Freedom

A
  • k = the number of conditions (3)
  • N = the sample size (40)
    df(within factor) = k - 1 = 2
    df(error) = (k - 1)(N - 1) = 2 × 39 = 78 (see the sketch below)
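The same arithmetic in a few lines of Python, using the deck's example numbers:

    k, n = 3, 40                        # number of conditions, sample size
    df_within_factor = k - 1            # 2
    df_error = (k - 1) * (n - 1)        # 2 * 39 = 78
    print(df_within_factor, df_error)   # reported as F(2, 78)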
90
Q

Multivariate Test

A
  • A way to get around the sphericity assumption
  • Wilks’ Lambda
  • Does not rely on the sphericity assumption
  • Is less powerful but is another option
91
Q

Post-Hoc Tests

A

1. Analyse
2. General Linear Model
3. Repeated Measures
4. EM Means

5. Tick compare main effects
6. Select Bonferroni under “CI adjustment”
7. Continue
8. Ok

92
Q

Pairwise Comparisons - Bonferroni

A
  • Inspect the p-values to see which treatments are significantly different
93
Q

APA Write up for Within-subjects ANOVA

A
  • A one-way within-subjects ANOVA revealed that there was an overall significant difference between the mean recovery rates of athletes across the three recovery methods, F(2, 78) = 12.38, p < .001. The assumption of sphericity was met, p = .75. Post-hoc analyses using the Bonferroni adjustment showed the combined method resulted in significantly higher recovery rates than both the carbohydrates (p = .03) and the rehydration (p < .001) methods. There was, however, no significant difference between the carbohydrates and rehydration techniques (p = .09).
94
Q

Write up APA Format if Sphericity is violated

A

Report Wilks’ Lambda instead; the degrees of freedom and the F value will be different.