Stats Final Flashcards
correlation levels of measurement
IV and DV are at the interval or ratio level
Pearson product-moment correlation
requires interval/ratio data
requires a normal distribution
symbolized by r
strength of correlation
0: no relationship (null)
+1.0: perfect positive
-1.0: perfect negative
assumptions for correlation
scores represent the population
normal distribution
each subject has both an X and a Y score
X and Y are independent measures
X and Y are observed (not manipulated)
linear relationship
interpretation of correlation
< .25 little to no
.25-.50 low to fair
.50-.75 moderate to good
> .75 strong relationship
limitations of correlation
describes only two variables at a time
detects only linear relationships
does not establish cause and effect
does not measure agreement (a high r does not mean the two sets of scores match)
influenced by the range of scores (a restricted range lowers r)
average values can suppress variation
coefficient of determination (r^2)
the square of the correlation coefficient
the percentage of variance in Y that is explained by X
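A minimal sketch of computing r and r^2 in Python (scipy.stats and the sample data are illustrative assumptions, not from these cards):

```python
# Pearson r and coefficient of determination (r^2) -- hypothetical data.
from scipy import stats

x = [2, 4, 5, 7, 9, 11, 12, 14]   # interval/ratio scores
y = [1, 3, 5, 6, 8, 10, 13, 15]

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")   # direction and strength of the linear relationship
print(f"r^2 = {r**2:.3f}")           # proportion of variance in y explained by x
```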
significance of the correlation coefficient
the p-value is very sensitive to sample size (see the sketch after the effect-size card)
conventional (Cohen's) effect sizes for r
small: .10
medium: .30
large: .50
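A sketch of that sensitivity, using the standard t-test for a correlation, t = r*sqrt(n-2)/sqrt(1-r^2); the fixed r and the sample sizes are illustrative assumptions:

```python
# Same medium-sized r, very different p-values as n grows.
from math import sqrt
from scipy import stats

r = 0.30                                   # Cohen's "medium" effect
for n in (20, 100, 500):
    t = r * sqrt(n - 2) / sqrt(1 - r**2)   # t statistic for H0: rho = 0
    p = 2 * stats.t.sf(abs(t), df=n - 2)   # two-tailed p-value
    print(f"n = {n:3d}: t = {t:.2f}, p = {p:.4f}")
# Non-significant at n = 20, highly significant at n = 500.
```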
what are nonparametric statistics based on?
comparisons of rank scores
comparisons of counts or signs of scores
when do you use nonparametric tests?
when more than two parametric assumptions are violated
what are the advantages of nonparametrics?
appropriate for a wide range of situations
can be used with categorical data
simple computations
outliers have less effect
disadvantages of nonparametrics
they waste information (data are collapsed into ranks or categories)
less power (roughly 65-95% of their parametric counterparts)
if outliers are not errors, effects may be underestimated
nonparametric analog of the unpaired t-test
Mann-Whitney U test
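A minimal sketch with scipy.stats (the two independent groups are assumed example data):

```python
# Mann-Whitney U: nonparametric analog of the unpaired t-test.
from scipy import stats

group_a = [12, 15, 11, 18, 20, 14]
group_b = [22, 25, 19, 28, 24, 21]

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")   # compares the ranks of the two groups
```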
nonparametric analogs of the paired t-test
sign test
~ scores converted to signs (+/-)
Wilcoxon signed-ranks test (more common)
~ accounts for the magnitude of change
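A sketch of both paired analogs; scipy.stats and the before/after scores are assumptions (the sign test is built here from a binomial test on the signs of change):

```python
from scipy import stats

before = [10, 12, 9, 15, 11, 13, 8, 14]
after  = [12, 15, 10, 14, 13, 16, 11, 17]

# Sign test: keep only the direction of each change, ignore magnitude.
diffs = [a - b for b, a in zip(before, after)]
n_pos = sum(d > 0 for d in diffs)
n_nonzero = sum(d != 0 for d in diffs)
sign_p = stats.binomtest(n_pos, n_nonzero, p=0.5).pvalue
print(f"sign test: {n_pos}/{n_nonzero} positive changes, p = {sign_p:.4f}")

# Wilcoxon signed-ranks: also uses the rank (magnitude) of each change.
w, p = stats.wilcoxon(before, after)
print(f"Wilcoxon W = {w}, p = {p:.4f}")
```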
nonparametric analog of the independent-groups ANOVA
Kruskal-Wallis ANOVA
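A minimal sketch (scipy.stats and the three groups are assumed):

```python
# Kruskal-Wallis: nonparametric analog of the independent-groups ANOVA.
from scipy import stats

g1 = [7, 9, 6, 8, 10]
g2 = [12, 14, 11, 15, 13]
g3 = [9, 11, 8, 12, 10]

h, p = stats.kruskal(g1, g2, g3)
print(f"H = {h:.2f}, p = {p:.4f}")   # ranks all scores, compares groups' mean ranks
```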
nonparametric analog of the repeated-measures ANOVA
Friedman's ANOVA
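A minimal sketch (scipy.stats and the data are assumed; each list is one condition measured on the same subjects):

```python
# Friedman's ANOVA: nonparametric analog of the repeated-measures ANOVA.
from scipy import stats

cond1 = [10, 12, 9, 14, 11]
cond2 = [13, 15, 11, 16, 14]
cond3 = [11, 13, 10, 15, 12]

chi2, p = stats.friedmanchisquare(cond1, cond2, cond3)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # ranks scores within each subject
```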
how to rank ties
assign each tied score the average of the ranks they would have occupied
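A quick check of that rule; scipy.stats.rankdata averages tied ranks by default:

```python
from scipy import stats

scores = [3, 5, 5, 7, 9]
print(stats.rankdata(scores))   # [1.  2.5 2.5 4.  5. ] -- the tied 5s share (2+3)/2
```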
Spearman rank (rho) correlation coefficient
nonparametric analog of Pearson r
used when at least one variable is ordinal
or when interval/ratio data are not normally distributed
can be used with curvilinear (monotonic) relationships
Spearman value
like any correlation, it ranges from -1 to +1
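A sketch showing rho on a curvilinear but monotonic relationship, where it outperforms Pearson r (the data are an illustrative assumption):

```python
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [1, 4, 9, 16, 25, 36]   # y = x^2: curvilinear but perfectly monotonic

rho, _ = stats.spearmanr(x, y)
r, _ = stats.pearsonr(x, y)
print(f"rho = {rho:.3f}, Pearson r = {r:.3f}")   # rho = 1.000, r < 1
```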
chi-square
analyzes frequencies of categorical (nominal) variables
goodness-of-fit chi-square
compares observed frequencies of one variable to expected (e.g., uniform) frequencies
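A minimal sketch; scipy.stats.chisquare defaults to uniform expected frequencies (the counts are assumed example data):

```python
from scipy import stats

observed = [18, 30, 12]               # counts in three categories of one variable
chi2, p = stats.chisquare(observed)   # expected defaults to [20, 20, 20]
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```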
test of association (chi-square)
much more common
compares observed frequencies of one variable across the levels of another variable (a contingency table)
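A minimal sketch on an assumed 2x2 contingency table:

```python
from scipy import stats

table = [[30, 10],   # rows: levels of one categorical variable
         [15, 25]]   # columns: levels of the other

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```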
assumptions for chi-square
frequencies represent counts of individual subjects
each subject can belong to only one category
no subject is counted twice (not appropriate for paired/repeated measures)