Stats Flashcards
what are factorial designs
one dependent variable
two or more independent variables
an example of a two way factorial design
1 DV, 2 IVs
e.g. DV - time taken to get to work; IV - time of day; IV - mode of transport
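a minimal sketch (Python, with invented numbers and made-up column names) of what data for this 2 x 2 design could look like in long format:

```python
import pandas as pd

# Hypothetical long-format data for a 2x2 factorial design:
# one DV (commute time in minutes), two IVs (time of day, mode of transport).
commute = pd.DataFrame({
    "time_of_day": ["morning", "morning", "evening", "evening"] * 2,
    "transport":   ["car", "bus"] * 4,
    "minutes":     [25, 40, 35, 30, 28, 42, 33, 29],  # invented values
})
print(commute)
```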
an example of a three way factorial design
1 DV, 3 IVs
e.g. DV - proportion recognised; IV - diagnosis; IV - season; IV - stimuli
factorial designs
why are more complex designs with 3 or more factors unusual
complicated to interpret
require large n (between-subjects)
take too long per participant (within-subjects)
when are factorial designs needed
more than one IV contributes to a DV
what do factorial designs tell us
allows us to explore complicated relationships between IVs and DVs
- main effects (how IVs individually affect the DV)
- interactions (how IVs combine to affect the DV)
interpreting factorial design results - main effects
the most straightforward result
summarise the data at the level of individual IVs
marginal means
(try add picture from notes)
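a sketch of computing marginal means in Python (pandas) on invented cell values; margins=True adds the marginal means as an "All" row/column:

```python
import pandas as pd

# Hypothetical 2x2 cell data: DV = commute minutes, IVs = time of day, transport.
commute = pd.DataFrame({
    "time_of_day": ["morning", "morning", "evening", "evening"],
    "transport":   ["car", "bus", "car", "bus"],
    "minutes":     [25, 40, 35, 30],
})
# margins=True appends the marginal means ("All") for each IV.
print(commute.pivot_table(values="minutes", index="time_of_day",
                          columns="transport", margins=True))
```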
problem with main effects
can be misleading
main effects can suggest what we might assume are the two optimal conditions, but those two combined may not be the optimal condition
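a made-up numerical example of this: the marginal means favour a1 and b1, yet the a1/b1 cell is far from the best cell:

```python
import pandas as pd

# Invented cell means where the main effects are misleading.
# Marginal means favour level a1 of IV A and level b1 of IV B,
# yet the a1/b1 cell (5) is far below the best cell (a2/b1 = 10).
cells = pd.DataFrame({"b1": [5, 10], "b2": [9, 0]}, index=["a1", "a2"])
print(cells)
print("marginal means of A:", cells.mean(axis=1).to_dict())  # a1: 7.0, a2: 5.0
print("marginal means of B:", cells.mean(axis=0).to_dict())  # b1: 7.5, b2: 4.5
```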
interpreting factorial design results - interactions
we look at them in line charts
no interaction = parallel lines
interaction = non-parallel (crooked) lines
special case = crossover interactions - the effect of one IV on the DV reverses depending on the level of the other IV
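a sketch of the line-chart check (matplotlib), using invented cell means for two 2 x 2 patterns, one roughly parallel and one crossover:

```python
import matplotlib.pyplot as plt

# Invented cell means for a 2x2 design, plotted as interaction charts:
# roughly parallel lines suggest no interaction; crossing lines suggest
# a crossover interaction.
levels_a = ["a1", "a2"]
no_interaction = {"b1": [4, 8], "b2": [6, 10]}   # parallel lines
crossover      = {"b1": [4, 8], "b2": [8, 4]}    # lines cross

fig, axes = plt.subplots(1, 2, sharey=True)
for ax, (title, cells) in zip(axes, [("no interaction", no_interaction),
                                     ("crossover interaction", crossover)]):
    for b_level, means in cells.items():
        ax.plot(levels_a, means, marker="o", label=b_level)
    ax.set_title(title)
    ax.set_xlabel("IV A")
axes[0].set_ylabel("mean DV")
axes[0].legend(title="IV B")
plt.show()
```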
what are the three types of factorial anova
between-subjects factorial anova
within-subjects factorial anova
mixed factorial anova (covered in PS2002)
assumptions in factorial anova
interval/ ratio (scale in SPSS)
normally distributed - examine with histogram
homogeneity of variance (for between-subjects) - eyeball SDs, Levene’s test
sphericity of covariance (for within-subjects) - Mauchly's test
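a rough sketch of checking two of these assumptions in Python with scipy on invented group scores: Levene's test for homogeneity of variance, plus a Shapiro-Wilk test as an optional extra normality check alongside the histogram; Mauchly's test is normally read from SPSS output rather than run in scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented scores for three between-subjects groups.
g1, g2, g3 = (rng.normal(10, 2, 30), rng.normal(12, 2, 30), rng.normal(11, 2, 30))

# Homogeneity of variance: Levene's test (null hypothesis = equal variances).
print(stats.levene(g1, g2, g3))

# Normality: eyeball a histogram and/or run Shapiro-Wilk per group.
print(stats.shapiro(g1))
```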
what tests for homogeneity of variance
Levene's test
what tests for sphericity of covariance
Mauchly's test
what happens if assumptions are violated for factorial anova
they can withstand some violation
so proceed with caution, report which assumptions have been violated, and give corrected anova results where possible
F-values
how many of these values can there be
one-way anova = 1 F-value
two-way factorial anova = 3 F-values (main effect a, main effect b, interaction axb)
three-way factorial anova = 7 F-values (main effect a, b, c, interactions axb, axc, bxc, axbxc)
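a sketch of where the three F-values of a two-way anova come from, using statsmodels on invented balanced data (the names a, b and dv are made up):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
# Invented balanced 2x2 between-subjects data (20 scores per cell, 80 in total).
df = pd.DataFrame({
    "a": np.repeat(["a1", "a2"], 40),
    "b": np.tile(np.repeat(["b1", "b2"], 20), 2),
    "dv": rng.normal(10, 2, 80),
})

# The ANOVA table has one F per term: main effect of a, main effect of b, a x b.
model = smf.ols("dv ~ C(a) * C(b)", data=df).fit()
print(anova_lm(model, typ=2))
```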
how to report multiple F-values
F(between-groups df, within-groups or error df) = F-value, p = probability; one F is reported for each main effect and interaction
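for example, with invented numbers and assuming the hypothetical 2 x 2 between-subjects design with 80 participants sketched above (effect df = 1, error df = 76), a main effect might be reported as F(1, 76) = 6.42, p = .013 and the interaction as F(1, 76) = 0.85, p = .36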
central tendency
a single score that represents the data, e.g. the mean
dispersion / spread
a measure of variability in the data
standard deviation: s = sqrt( sum((x - mean)^2) / (N - 1) )
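the same formula in numpy on invented scores; ddof=1 gives the N - 1 denominator:

```python
import numpy as np

scores = np.array([4, 7, 6, 9, 5, 8])           # invented data
sd_manual = np.sqrt(((scores - scores.mean()) ** 2).sum() / (len(scores) - 1))
sd_numpy = scores.std(ddof=1)                   # ddof=1 -> divide by N - 1
print(sd_manual, sd_numpy)                      # both come out to about 1.87
```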
using means and standard deviations
we can compare a range of measurements using z (standard) scores
z = (score - mean)/SD
we can express how many SD units a point on the normal curve is from the mean using z scores
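a quick sketch of converting invented scores to z scores, by hand and with scipy:

```python
import numpy as np
from scipy import stats

scores = np.array([52, 60, 45, 70])    # invented test scores
# z = (score - mean) / SD; scipy's zscore does the same thing in one call.
z_manual = (scores - scores.mean()) / scores.std(ddof=1)
print(z_manual)
print(stats.zscore(scores, ddof=1))    # matches the manual calculation
```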
why do we use sampling
we cannot test everyone
we make assumptions about how our sample relates to the population based on what we know about sampling theory
what is a population
every single possible observation
fortunately we know populations tend to be normally distributed
explain the central limit theorem
if samples are representative of the population
1 the distribution of all the sample means will approach a normal distribution
2 whilst individual sample means may deviate from the population mean, the mean of all sample means will equal the population mean
3 as the sample size increases, standard deviation of the sampling distribution decreases
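a small simulation sketch of points 1-3: draw many samples from a skewed (exponential) population and look at the distribution of the sample means:

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)   # a skewed "population"

for n in (5, 30, 100):
    # Draw 10,000 samples of size n and keep each sample's mean.
    sample_means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    print(f"n={n:>3}  mean of sample means={sample_means.mean():.3f}  "
          f"SD of sampling distribution={sample_means.std(ddof=1):.3f}")
# The mean of the sample means stays near the population mean (about 2.0),
# and the SD of the sampling distribution shrinks as n grows.
```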
as sample size increases…
we can say with more certainty what the population mean is
standard error of the mean or standard error
SE=SD/sqrt(N)
the SE is the SD of the sampling distribution; it tells us how confident we can be that our sample mean represents the population mean
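the same calculation in numpy on invented scores:

```python
import numpy as np

scores = np.array([4, 7, 6, 9, 5, 8])               # invented sample
se = scores.std(ddof=1) / np.sqrt(len(scores))      # SE = SD / sqrt(N)
print(round(se, 3))                                 # about 0.764
```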