Flashcards in 9. Repeated Measures Designs Deck (94):
what is the process of research design and data analysis?
review previous research
operationalise IV and DV
choose appropriate design
Determine sample size for adequate power
analyze data and report findings
what are the options for an experimental design when choosing what design would be the most appropriate?
repeated measures (within groups) design or independent groups (between groups) design
what are things to consider when designing an experiment?
nature of the IV
expense of project or availability of participants
when considering the control of order effects, what happens when we can't control them?
then you will have to use an independent groups design
what is the definition of a repeated measures design
all participants contribute a score at each level of the IV
what is repeated measure design also known as?
dependent groups or within groups design
what are the two categories of levels of IV related to time?
with intervention (pre and post therapy)
natural change (changes in cognitive ability in children over time)
what are levels of IV not related to time
IV is exposure to categorical elements (e.g. light intensity)
what are the advantages of RM designs?
economy of participants
sensitivity is enhanced by separating individual differences from experimental error
what are the disadvantages of RM designs?
can't use with all IVs (e.g. ethnicity)
order effects (practice, fatigue, carry-over)
define precision matched
where each participant is directly matched with others in the other levels of the IV
what are common issues with RM designs?
what is maturation
changes naturally occurring with time, e.g. learning
what is history?
an uncontrolled event occurs between testing conditions
what is attrition or mortality?
participants drop out of study
what are common order effects?
what are practice effects
performance improves from one level to the next with practice
what are fatigue effects
performance declines on repeated testing
what are carry-over effects?
one level of IV affects another level
what are remedies of practice and fatigue effects?
can be controlled by counterbalancing or randomisation, and by prior exposure
what is counterbalancing or randomising?
randomisation of the order of treatments across participants
what is prior exposure
prior exposure to the measurement before the experimental condition may reduce practice effects
how does one control a carryover effect?
carry-over effects can rarely be controlled, but a long delay between testing each level of the IV may help prevent them. Use a BG (between groups) design if you suspect carry-over effects will operate with the IV you are using
what does counterbalancing aim to do?
it seeks to diminish order effects
what is the process of randomisation?
each participant is exposed to each level of the IV in a random order
what is the process of counterbalancing?
each condition appears in each position of the testing order an equal number of times
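The two order-control schemes above can be sketched in Python; the condition labels, and the check that full counterbalancing puts each condition in each serial position equally often, are illustrative assumptions rather than part of the deck.

```python
# Randomisation vs. full counterbalancing for a = 3 hypothetical conditions.
import itertools
import random

conditions = ["A", "B", "C"]

# Randomisation: each participant receives the levels in a random order.
def random_order(levels):
    order = list(levels)
    random.shuffle(order)
    return order

# Full counterbalancing: use every possible order (3! = 6); across the set,
# each condition appears in each serial position an equal number of times.
all_orders = list(itertools.permutations(conditions))

for pos in range(len(conditions)):
    counts = {c: sum(1 for o in all_orders if o[pos] == c) for c in conditions}
    assert all(v == 2 for v in counts.values())  # 6 orders / 3 positions = 2 each
```

Full counterbalancing needs a! orders, so with many conditions a Latin square subset is typically used instead.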
what do we compare in independent groups analyses?
we compare groups to each other
what contributes to error in independent groups analyses?
individual differences between participants contribute to error
what does RM analyses allow for control of?
individual differences that can contribute to error
why do RM analyses control individual differences?
because we compare each participant across conditions
what does RM analyses statistically do to reduce error?
removes variability due to individual differences
what does RM analyses allow further partitioning of?
SS_total (index of variability)
how does SS_total (total variability) partition?
total variability partitions into BG variability (SS_between OR SS_A) and WG variability (SS_within)
how does WG variability (SS_within) partition further?
WG variability (SS_within) partitions to:
Participant variability (SS_participant or subject)
Error variability (SS_error or residual OR SS_AxS)
what is N?
the number of scores (not participants)
what is n?
number of participants
what is the equation for df_total?
N-1 or an-1
what is the equation for df_between (or df_treatment or df_A)?
a-1
what is the equation for df_within?
N-a or a(n-1)
what is the equation for df_participants?
n-1
what is the equation for df_error?
(a-1)(n-1)
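The df formulas above can be checked numerically; a = 3 conditions and n = 10 participants are hypothetical values chosen for illustration.

```python
# Degrees of freedom for a one-way repeated measures ANOVA.
# a (number of conditions) and n (number of participants) are hypothetical.
a, n = 3, 10
N = a * n                      # number of scores, not participants

df_total = N - 1               # an - 1 = 29
df_between = a - 1             # 2
df_within = N - a              # a(n - 1) = 27
df_participants = n - 1        # 9
df_error = (a - 1) * (n - 1)   # 18

# The df partition additively, mirroring the SS decomposition:
assert df_total == df_between + df_within
assert df_within == df_participants + df_error
```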
how does one calculate subjects variability (SS_subjects or participants)
aΣ(M_s - GM)^2
(participant mean - grand mean)
where a=number of conditions
how does one calculate treatment (or between) variability (SS_between)
condition mean - grand mean
nΣ(M_j - GM)^2
where n = number of participants
how does one calculate total variability (SS_total)?
each score - grand mean
Σ(X_ij - GM)^2
compare every single score obtained to the grand mean
how does one calculate error variability (AKA SS_residual or SS_AxS)?
SS_error = SS_total - (SS_between + SS_subjects)
what does 'a' represent here?
number of conditions
what is SS_total?
SS_total = SS_between + SS_participants/subjects + SS_error
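A minimal numerical check of the SS partitioning above, using a small hypothetical data set (rows = participants, columns = conditions):

```python
# Check that SS_total = SS_between + SS_subjects + SS_error on made-up data
# with n = 4 participants and a = 3 conditions.
import numpy as np

scores = np.array([[3.0, 5.0, 6.0],
                   [2.0, 4.0, 5.0],
                   [4.0, 6.0, 8.0],
                   [3.0, 5.0, 7.0]])
n, a = scores.shape
GM = scores.mean()                                          # grand mean

SS_total = ((scores - GM) ** 2).sum()                       # Σ(X_ij - GM)^2
SS_between = n * ((scores.mean(axis=0) - GM) ** 2).sum()    # nΣ(M_j - GM)^2
SS_subjects = a * ((scores.mean(axis=1) - GM) ** 2).sum()   # aΣ(M_s - GM)^2
SS_error = SS_total - (SS_between + SS_subjects)

# The residual computed directly agrees with the subtraction formula:
resid = scores - scores.mean(axis=0) - scores.mean(axis=1, keepdims=True) + GM
assert np.isclose((resid ** 2).sum(), SS_error)
```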
what is the equation of MS?
MS = SS / df (e.g. MS_between = SS_between / df_between)
what is the equation of F for repeated measures?
F = MS_Between / MS_error
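A sketch of the full F calculation, combining the SS, df, and MS formulas above; the data (n = 4 participants, a = 3 conditions) are hypothetical.

```python
# The repeated measures F ratio on hypothetical data.
import numpy as np
from scipy import stats

scores = np.array([[3.0, 5.0, 6.0],
                   [2.0, 4.0, 5.0],
                   [4.0, 6.0, 8.0],
                   [3.0, 5.0, 7.0]])
n, a = scores.shape
GM = scores.mean()

SS_between = n * ((scores.mean(axis=0) - GM) ** 2).sum()
SS_subjects = a * ((scores.mean(axis=1) - GM) ** 2).sum()
SS_error = ((scores - GM) ** 2).sum() - SS_between - SS_subjects

MS_between = SS_between / (a - 1)              # MS = SS / df
MS_error = SS_error / ((a - 1) * (n - 1))
F = MS_between / MS_error
p = stats.f.sf(F, a - 1, (a - 1) * (n - 1))    # high F -> low p
```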
what does a high F ratio mean with regard to the p value?
a high F ratio means a low p value
when will an F ratio be larger with regard to repeated measures design and independent groups design?
the F ratio will generally be larger in a RM design
why will the error term of a RM design be smaller?
because we are removing all the variability due to individual differences from the error term
what are the assumptions of RM analyses?
normality, independence, and sphericity
what is the assumption of normality in RM designs?
normality is required, just as in the independent groups case
what is the assumption of independence?
it is not a problem: although scores are not independent in a RM design (the same participants take part in each condition), these participant effects have been eliminated from the error term
which assumption is specific to RM designs?
what is sphericity?
refers to homogeneity across conditions and participants; that is, homogeneity of the variance-covariance matrix (equal variances of the differences between each pair of conditions)
for within-subjects factors with more than 2 levels, what can violations of the sphericity assumption cause?
serious inflation of Type I error
why is sphericity often breached?
because it is a very restrictive assumption
what are the two ways to deal with the restrictiveness of the sphericity assumption?
the traditional method and multivariate method
what is the traditional test of sphericity that SPSS uses?
Mauchly's sphericity test
what does it mean when Mauchly's sphericity test is significant?
the sphericity assumption is breached
when is Mauchly's sphericity test significant?
when p is LESS THAN .05
why can't we use the normal F distribution when sphericity is breached?
because it assumes that we have already met the sphericity assumption
what is the process of correcting breaches of sphericity?
adjust df in line with the magnitude of the breach of sphericity to account for Type I error
if sphericity breached, use these adjusted df to test F ratio
what are epsilon values?
different formulas for adjusting our df to compensate for breaches of sphericity
what is the range for epsilon values?
1 down to 0
where 1 means sphericity is perfectly met
and values approaching 0 indicate extreme violation of sphericity
what do epsilon values do?
adjust df (reduce them) based on the severity of the violation
what are the epsilon value adjusted df used for
used to find the critical value of F to which the F_observed is compared
what happens when you use the epsilon-adjusted df to find the F_critical and compare it to the F_observed?
results in larger critical value & more conservative test
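The adjustment can be sketched by hand. Below is a minimal example, assuming the standard double-centred-covariance formula for the Greenhouse-Geisser epsilon; in practice SPSS reports epsilon for you, and the data here are randomly generated with unequal condition variances so sphericity is likely violated.

```python
# Greenhouse-Geisser epsilon and the epsilon-adjusted critical F (a sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, a = 8, 3   # hypothetical: 8 participants, 3 conditions
scores = rng.normal(size=(n, 1)) + rng.normal(scale=[1.0, 1.5, 3.0], size=(n, a))

S = np.cov(scores, rowvar=False)   # a x a variance-covariance matrix

# Double-centre S, then apply the Greenhouse-Geisser formula.
D = S - S.mean(axis=1, keepdims=True) - S.mean(axis=0, keepdims=True) + S.mean()
eps = np.trace(D) ** 2 / ((a - 1) * np.sum(D ** 2))
# eps lies between 1/(a-1) (extreme violation) and 1 (sphericity met).

df1 = eps * (a - 1)                # adjusted treatment df
df2 = eps * (a - 1) * (n - 1)      # adjusted error df
F_crit = stats.f.ppf(0.95, df1, df2)
F_crit_plain = stats.f.ppf(0.95, a - 1, (a - 1) * (n - 1))
assert F_crit >= F_crit_plain      # larger critical value -> more conservative
```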
if Mauchly's test of sphericity is significant...
the assumption has been breached
what does it mean if the significance value is above .05?
there is no significant violation of sphericity
what do the epsilon values indicate?
how badly the sphericity assumption has been breached
which is the most commonly used epsilon figure?
the Greenhouse-Geisser correction
on an SPSS output, which rows do we have to look at?
the Greenhouse-Geisser rows for both the IV and the error term
what row on SPSS output would you look at if you think there is no breach in sphericity?
sphericity assumed for both IV and error
if sphericity is violated but F isn't significant, what do you do to the H0?
retain H0 (no correction is needed, since the correction would only make the test more conservative)
if sphericity is violated but F is significant what do you do to the H0?
apply epsilon correction
if the F is significant for the epsilon corrected df what do you do to the H0?
reject H0
if the F is not significant for the epsilon corrected df what do you do to the H0?
retain H0
what is the multivariate approach to RM ANOVA?
extends the difference scores analysis we used in RM t-tests to within-subjects factors with 3 or more levels
what is a RM t-test based on?
analysis of difference scores
what happens when we analyse difference scores?
we remove or "partial out" the consistency in scores for each person from one level of the IV to another
what is the multivariate approach to RM based on?
analysis of difference scores
why does the analysis become more complicated when there are more than two conditions?
because there will be multiple difference scores
what does the multivariate approach do?
it treats each set of difference scores as a separate dependent variable
what does the multivariate approach use to analyse the data?
separate error terms for each pair of conditions rather than a pooled error term, which means we don't have to worry about sphericity
what are identical when there are only 2 levels of the IV?
the multivariate approach and traditional approach
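This equivalence can be checked directly: with 2 levels, the RM ANOVA F equals the square of the paired t statistic, so the difference-score (multivariate) and traditional approaches give the same result. The pre/post scores below are hypothetical.

```python
# With a = 2 levels, RM ANOVA F = (paired t)^2.
import numpy as np
from scipy import stats

pre = np.array([10.0, 12.0, 9.0, 14.0, 11.0])
post = np.array([12.0, 15.0, 11.0, 15.0, 14.0])
scores = np.column_stack([pre, post])
n, a = scores.shape
GM = scores.mean()

SS_between = n * ((scores.mean(axis=0) - GM) ** 2).sum()
SS_subjects = a * ((scores.mean(axis=1) - GM) ** 2).sum()
SS_error = ((scores - GM) ** 2).sum() - SS_between - SS_subjects
F = (SS_between / (a - 1)) / (SS_error / ((a - 1) * (n - 1)))

t, p = stats.ttest_rel(post, pre)  # RM t-test on the difference scores
assert np.isclose(t ** 2, F)       # identical tests when a = 2
```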
how does power and effect size theoretically differ with RM ANOVA and between groups ANOVA?
they dont, they are identical
what is the difference between the error term in RM ANOVA and BG ANOVA?
the error term is smaller for RM ANOVA
because MS_error (MS_residual) is smaller than MS_within, as variance due to individual differences is partitioned out
why is power greater in RM design than in BG design even though they have the same effect size?
because the RM design uses MS_residual, which partitions out individual differences, hence higher power
how do we do post hoc and planned comparisons on RM designs?
we can do these using the dependent samples t-test procedure with a Bonferroni adjustment to maintain a good Type I error rate
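A sketch of this procedure with scipy's dependent samples t-test (`ttest_rel`); the three conditions and the scores for n = 6 participants are hypothetical.

```python
# Pairwise dependent samples t-tests with a Bonferroni adjustment.
import numpy as np
from itertools import combinations
from scipy import stats

scores = {
    "cond1": np.array([4.0, 5.0, 3.0, 6.0, 5.0, 4.0]),
    "cond2": np.array([6.0, 7.0, 5.0, 8.0, 6.0, 6.0]),
    "cond3": np.array([7.0, 8.0, 6.0, 9.0, 8.0, 7.0]),
}

pairs = list(combinations(scores, 2))   # 3 pairwise comparisons
alpha_adj = 0.05 / len(pairs)           # Bonferroni: alpha / number of tests

for c1, c2 in pairs:
    t, p = stats.ttest_rel(scores[c1], scores[c2])
    print(f"{c1} vs {c2}: t = {t:.2f}, p = {p:.4f}, significant = {p < alpha_adj}")
```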
when looking at the multivariate approach output, what value is usually used?
Wilks' Lambda