L9 - One-Way ANOVA Assumptions and Designs, Error Rate Control Flashcards

1
Q

What do the Brown-Forsythe and Welch tests do?

A

The Brown-Forsythe and Welch tests are used when the homogeneity of variances assumption is not met in a one-way between-subjects ANOVA.

They adjust the observed F value and degrees of freedom so that the actual false rejection error rate stays close to the nominal rate set by alpha.

2
Q

What are the three statistical assumptions for a one-way between-subjects ANOVA?

A
  • homogeneity of variances
  • independent observations
  • normality of population scores in each group
3
Q

What should be done if there is a discrepancy between inferences from the Brown-Forsythe and Welch tests?

A

If inferences from F* (Brown-Forsythe) and W (Welch) disagree when sample sizes differ AND variances are heterogeneous, we should not reject the omnibus null hypothesis, if one is being used.

4
Q

What happens if there is a violation of homogeneity of variances, but sample sizes are the same?

A

The homogeneity of group variances assumption operates in a very similar way to that in the independent two-group design: if the sample sizes of the groups are the same, or almost the same, then inferences for both the omnibus test and planned comparisons are robust to mild-to-moderate violation of this assumption.

5
Q

If there is evidence that there is a violation of homogeneity of variance, what will the outcome be for the one-way design if it is a) balanced and b) unbalanced?

A

➢ Balanced: The Fcontrast/tcontrast tests for planned comparisons will generally still be robust when using MSwithin.
➢ Unbalanced: The use of MSwithin in testing planned comparisons will not be robust → need to use ROBUST TEST STATISTICS, using separate variances for the standard errors in each planned comparison.

6
Q

What is an important assumption for within-subjects ANOVA using repeated measures?

A

The INTERVAL between TWO ADJACENT MEASUREMENTS must be the same for all people.

7
Q

What is the SS within subjects further decomposed into?

A

SS within subjects = SS between occasions + SS individual x occasion interaction

occasion = variable/time point measurements
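This decomposition can be verified numerically. A minimal Python sketch, using a made-up 3 × 4 table of scores (rows = subjects, columns = occasions):

```python
# Hypothetical scores: rows = subjects (n = 3), columns = occasions (k = 4)
scores = [
    [5.0, 6.0, 8.0, 9.0],
    [3.0, 4.0, 4.0, 7.0],
    [4.0, 6.0, 7.0, 9.0],
]
n, k = len(scores), len(scores[0])
grand = sum(x for row in scores for x in row) / (n * k)
subj_means = [sum(row) / k for row in scores]
occ_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

# SS within subjects: deviation of each score from that subject's own mean
ss_within = sum((scores[i][j] - subj_means[i]) ** 2
                for i in range(n) for j in range(k))
# SS between occasions: occasion means around the grand mean
ss_occ = n * sum((m - grand) ** 2 for m in occ_means)
# SS individual x occasion interaction (the error term)
ss_inter = sum((scores[i][j] - subj_means[i] - occ_means[j] + grand) ** 2
               for i in range(n) for j in range(k))

# The decomposition holds exactly: SS within subjects = SS occasions + SS interaction
assert abs(ss_within - (ss_occ + ss_inter)) < 1e-9
```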

8
Q

What is the df for SS individual x occasion interaction?

A

(n-1)(k-1)

9
Q

What is the df for SS occasions?

A

k-1

10
Q

What is the df for SS within?

A

N - n (N = total number of observations, n = number of subjects)

11
Q

What is the df for SS between?

A

n-1
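The df values on these cards partition the total degrees of freedom. A quick numeric check in Python (the design sizes are hypothetical):

```python
# Hypothetical within-subjects design: n = 20 subjects, k = 4 occasions
n, k = 20, 4
N = n * k                     # total number of observations

df_total = N - 1              # 79
df_between = n - 1            # between subjects: 19
df_within = N - n             # within subjects: 60
df_occ = k - 1                # between occasions: 3
df_inter = (n - 1) * (k - 1)  # individual x occasion interaction: 57

# The partitions add up exactly
assert df_total == df_between + df_within
assert df_within == df_occ + df_inter
```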

12
Q

How do we calculate the observed F statistic in within subjects ANOVA?

A

F = MS occasions / MS error, where the error term is the MS for the individual x occasion interaction.
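A minimal numeric sketch (the sums of squares and design sizes are hypothetical):

```python
# Hypothetical values for n = 3 subjects measured on k = 4 occasions
n, k = 3, 4
ss_occ, ss_error = 30.0, 2.0               # SS occasions, SS individual x occasion

ms_occ = ss_occ / (k - 1)                  # 30 / 3 = 10.0
ms_error = ss_error / ((n - 1) * (k - 1))  # 2 / 6 ≈ 0.333
f_obs = ms_occ / ms_error                  # 10.0 / (1/3) = 30.0
```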

13
Q

What are the two options for testing a null hypothesis using within-subjects ANOVA?

A

➢ The multivariate test – which does not assume sphericity of the covariance matrix among the dependent measures (this will not be covered in this subject); or
➢ The univariate test – which does assume sphericity.

14
Q

What kind of within subjects ANOVA does not assume sphericity?

A

Multivariate test

15
Q

What is sphericity?

A

Recall that the two important features of the dependent-samples t test were:
➢ The use of difference scores between the set of paired measurements of each participant; and
➢ The strength of the correlation between the two sets of scores for reducing the size of the standard error.

The assumption of sphericity requires, at the population level, that:
➢ The variances of the difference scores between any two levels are the same; and
➢ The covariances between all sets of difference scores are also the same.

A sufficient (but not necessary) condition for sphericity to hold is that the covariance matrix of the observed scores on all levels of the factor shows compound symmetry.

16
Q

What is compound symmetry?

A

A covariance matrix demonstrates compound symmetry if all variances along the diagonal of the matrix are the same value and if all off-diagonal covariances are also the same value (which implies that all correlations among the variables are equal).
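Both properties can be checked numerically. A Python sketch for a hypothetical 3 × 3 covariance matrix: compound symmetry is checked directly, and sphericity via the variances of pairwise difference scores, using Var(Xa - Xb) = Var(Xa) + Var(Xb) - 2 Cov(Xa, Xb):

```python
from itertools import combinations

# Hypothetical covariance matrix with compound symmetry:
# equal variances (4.0) on the diagonal, equal covariances (1.5) off it
cov = [
    [4.0, 1.5, 1.5],
    [1.5, 4.0, 1.5],
    [1.5, 1.5, 4.0],
]
k = len(cov)

diag = [cov[i][i] for i in range(k)]
off = [cov[i][j] for i in range(k) for j in range(k) if i != j]
compound_symmetry = len(set(diag)) == 1 and len(set(off)) == 1

# Sphericity: Var(Xa - Xb) must be equal for every pair of levels
diff_vars = [cov[a][a] + cov[b][b] - 2 * cov[a][b]
             for a, b in combinations(range(k), 2)]
sphericity = len(set(diff_vars)) == 1

# Compound symmetry is sufficient (but not necessary) for sphericity
assert compound_symmetry and sphericity
```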

17
Q

What is Mauchly’s test of sphericity?

A

➢ The null hypothesis for Mauchly’s test is that the covariance matrix demonstrates sphericity, therefore we do not want evidence to reject this null hypothesis (i.e. a small obtained p value is undesirable).
➢ The test itself can be affected by mild departures from normality in the DV scores, so a significant result can reflect that occurrence rather than a departure from sphericity.

18
Q

What is epsilon?

A

Epsilon is a sample statistic measuring the degree of departure from sphericity.

If sphericity holds, the population epsilon equals 1.
Its lowest possible value is 1/(k - 1); this is the lower-bound value in the third estimate provided, and it can be ignored.

The Greenhouse-Geisser estimate is a more conservative estimate of the population epsilon that keeps ER ≤ α.

The Huynh-Feldt estimate is more liberal.
If the Huynh-Feldt estimate equals 1, then the obtained p value for Mauchly's test will be > .05.

Note: If in doubt, use Greenhouse-Geisser, as it is the middle estimate.

19
Q

Why is it important to examine the epsilon estimates?

A

The relevance of these estimates of epsilon is that violation of the assumption of sphericity makes the obtained F test no longer robust for the omnibus null hypothesis in within-subjects ANOVA, or for planned comparisons; it becomes very liberal in these circumstances.

The epsilon estimates provide a way to adjust the degrees of freedom of the F test so that the false rejection error rate stays close to alpha when the omnibus null hypothesis is true.

20
Q

How do we correct the degrees of freedom using epsilon?

A

By multiplying both degrees of freedom for the omnibus F test by the relevant epsilon estimate.
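A minimal sketch of the correction, with a hypothetical design and epsilon estimate:

```python
# Hypothetical within-subjects design: n = 20 subjects, k = 4 occasions
n, k = 20, 4
epsilon = 0.75                 # e.g. a Greenhouse-Geisser estimate

df1 = k - 1                    # uncorrected numerator df: 3
df2 = (n - 1) * (k - 1)        # uncorrected denominator df: 57

# Multiply BOTH df by epsilon; the F statistic itself is unchanged,
# only the reference distribution (and hence the p value) shifts
df1_corr = epsilon * df1       # 2.25
df2_corr = epsilon * df2       # 42.75
```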

21
Q

What are orthogonal polynomial planned comparisons?

A

These are a set of planned contrast weights used in one-way within-subjects ANOVA to see what kind of change might be occurring across time.

Is it linear? Quadratic? Cubic? Quartic? etc.

There can be k-1 sets of planned orthogonal polynomial comparison weights.

→ A set of orthogonal polynomial contrast weights completely accounts for all possible change because they are ORTHOGONAL.

→ They are prepackaged by SPSS, so you don't need to work them out.
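For example, the standard orthogonal polynomial weights for k = 4 occasions (as tabulated in most texts and used by SPSS) can be checked for validity and orthogonality in Python:

```python
from itertools import combinations

# Standard orthogonal polynomial contrast weights for k = 4 levels
trends = {
    "linear":    [-3, -1,  1, 3],
    "quadratic": [ 1, -1, -1, 1],
    "cubic":     [-1,  3, -3, 1],
}

# Each set of weights sums to zero (a valid contrast)...
for w in trends.values():
    assert sum(w) == 0
# ...and every pair has a zero dot product (orthogonal)
for a, b in combinations(trends.values(), 2):
    assert sum(x * y for x, y in zip(a, b)) == 0
```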

22
Q

How can we visually identify what ‘order’ polynomials may fit the data well?

A

By counting the number of 'bends' in the graph; the order of the polynomial is one more than the number of bends.

2 bends → cubic

23
Q

What are the two different kinds of standardised mean contrasts for within-subject designs?

A
  • Standardised group differences
  • Standardised individual change

XECI will calculate point and interval estimates for standardised group differences, but only a point estimate for standardised individual change.

Use BONETT'S DELTA or the STANDARDISED D for individual change, and Hedges' g for group differences!

24
Q

What's the difference between the alpha value and the false rejection error rate?

A
  • ALPHA is set BEFORE undertaking the null hypothesis test
  • The FALSE REJECTION ERROR RATE applies AFTER deciding to reject the null hypothesis

25
Q

What is alpha in notational form?

A

Pr(Rejecting H0 | H0 is true)

26
Q

What is the per-comparison alpha value?

A

The probability of rejecting a null hypothesis, given that it is true, for each planned comparison.

This sets an upper bound on making false rejection errors of either kind OVER THE LONG RUN when using 2 or more planned comparisons.

27
Q

What is the family-wise false rejection error rate?

A

False rejection error rate amongst j planned comparisons over the long run.

ERfw = 1 - (1 - ERpc)^j (assuming the j comparisons are independent)
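A minimal sketch of this formula in Python (assuming independent comparisons, with ERpc = .05 for illustration):

```python
def er_fw(er_pc: float, j: int) -> float:
    """Family-wise false rejection error rate for j independent comparisons."""
    return 1 - (1 - er_pc) ** j

# With ERpc = .05, the family-wise rate grows quickly with j:
assert abs(er_fw(0.05, 1) - 0.05) < 1e-12       # one comparison: just alpha
assert abs(er_fw(0.05, 3) - 0.142625) < 1e-9    # three comparisons: ~.14
assert er_fw(0.05, 100) > 0.99                  # 100 comparisons: near-certain error
```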

28
Q

Under what circumstances is ERfw expected to be larger than ERpc?

A

ERfw is much higher than ERpc when many null hypothesis tests are undertaken.

When 100 tests are undertaken and the null hypotheses are true, the probability of making at least one false rejection is almost 1.

29
Q

Why is using the explicit calculation for ER fw undesirable?

A

The explicit calculation requires the comparisons to be independent, but in practice comparisons are usually not independent.

30
Q

When are comparisons not independent?

A

When:
- the weights for the two contrasts are NON-ORTHOGONAL; or
- the SAME MSwithin value is being used to calculate their respective standard errors.

The same MSwithin value is often used in practice, because of the homogeneity of variances assumption.

31
Q

How should ERfw be calculated when two or more comparisons are not independent?

A

Using the Bonferroni inequality, which gives an upper bound: ERfw ≤ j × alpha pc (the sum of the per-comparison alpha values).

32
Q

What is the Bonferroni correction method?

A

A method for setting an upper bound on the family-wise false rejection error rate when undertaking multiple planned comparisons.

It implies that ERfw will be no larger than the sum of all the per-comparison error rates.

alpha pc = 0.05 / number of planned comparisons
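A minimal numeric sketch of the correction (j = 4 planned comparisons is hypothetical):

```python
alpha_fw = 0.05     # desired family-wise upper bound
j = 4               # hypothetical number of planned comparisons

alpha_pc = alpha_fw / j     # Bonferroni-corrected per-comparison alpha: .0125
bound = j * alpha_pc        # ERfw can be no larger than this sum: .05

# Even in the best case (independent comparisons), the exact family-wise
# rate stays at or under the Bonferroni bound:
assert 1 - (1 - alpha_pc) ** j <= bound
```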

33
Q

What are the implications of violating the assumption of sphericity for planned comparisons in a within-subjects ANOVA?

A

The implication of violating the assumption of sphericity for planned comparisons in a within-subjects ANOVA is that the statistical tests associated with the planned comparisons are likely to have false rejection rates higher than the nominal levels set by the defined α for each comparison. This means that some contrast null hypotheses will be rejected more often, when they are true, than the rate defined by the α value set for each comparison.