Lecture 7 Flashcards
(15 cards)
1
Q
type I error
A
- rejecting H0 when it is actually true
- corresponds to the percentage of the null distribution that falls beyond the critical cutoff
- happens 5% of the time (or whatever your α is) when H0 is true
2
Q
power
A
- When H0 is false, power is the ability to detect a significant outcome.
- essentially the percentage of the alternative distribution that falls beyond the critical cutoff
- want a minimum of 80%
3
Q
types of power
A
a priori
post hoc
4
Q
a priori power
A
- calculated before collecting data.
- typically involves figuring out the expected effect size and the sample size needed for 80% power (see the sketch after this card)
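A minimal sketch of an a priori power analysis, assuming an independent-samples t-test design and an expected medium effect size (Cohen's d = 0.5, an illustrative value); statsmodels is used here as one common tool, not necessarily the one from lecture.

```python
# A priori power analysis: solve for the per-group sample size that gives
# 80% power, given an expected effect size and alpha. Assumes an
# independent-samples t-test design; d = 0.5 is an illustrative guess.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # expected Cohen's d
                                   alpha=0.05,       # type I error rate
                                   power=0.80)       # desired power
print(f"Need about {n_per_group:.0f} participants per group")  # roughly 64
```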
5
Q
post hoc power
A
- calculated after the statistical result, usually if it is not significant.
- involves figuring out how likely we were to reject H0 if it were actually false.
- from here, we usually assess whether our sample size was adequate (see the sketch after this card).
- usually not evaluated if the results are significant because, by default, we can consider it to be equal to 1
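A minimal sketch of a post hoc power calculation under the same independent-samples assumptions; the observed effect size and per-group n are made-up illustrative values.

```python
# Post hoc power: given the observed effect size and the sample size we
# actually collected, how likely were we to reject H0 if it were really false?
from statsmodels.stats.power import TTestIndPower

achieved_power = TTestIndPower().power(effect_size=0.35,  # observed Cohen's d
                                        nobs1=25,          # n per group actually run
                                        alpha=0.05)
print(f"Achieved power: {achieved_power:.2f}")  # well below the 0.80 target
```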
6
Q
ways to increase power
A
- decrease spread
- increase group differences
- increase sample size
- increase alpha (cheating)
7
Q
parametric tests
A
tests whose values can be generalized to the population.
8
Q
why can we use effect size to determine power
A
- power is a three-way relationship between the magnitude of the difference between your means, the noise (variance), and the sample size
- so we can use effect size because it already contains two of the three elements of power: the magnitude of the difference between means and the noise/variance (see the sketch after this card)
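A minimal sketch showing how Cohen's d packages two of the three power ingredients (the mean difference and the noise); the group scores are illustrative, and the pooled-SD formula shown assumes equal group sizes.

```python
# Cohen's d bundles two of the three power ingredients: the mean difference
# (numerator) and the noise/variance (pooled SD in the denominator).
# Only sample size is left to choose in the power analysis.
import numpy as np

group_a = np.array([5.1, 6.3, 5.8, 7.0, 6.1])   # illustrative scores
group_b = np.array([4.2, 5.0, 4.8, 5.5, 4.9])

diff = group_a.mean() - group_b.mean()            # magnitude of the difference
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)  # noise
cohens_d = diff / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```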
9
Q
speed-accuracy tradeoff
A
the faster one performs a task, the less accurate they tend to be
10
Q
what to do when an assumption is violated?
A
- transformation, which modifies the data to try to fit the assumption (e.g., squaring or square-rooting it); controversial because you are technically changing your data
- applying corrections to the degrees of freedom is the best and easiest method (see the sketch after this card)
- can also artificially increase sample size (jackknifing & bootstrapping)
- use nonparametric tests (as they don’t rely on parametric assumptions)
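A minimal sketch of two of these remedies, assuming skewed data with unequal variances: a square-root transformation and Welch's degrees-of-freedom correction via scipy. Both are illustrative choices, not necessarily the exact ones from lecture.

```python
# Two of the remedies above: (1) transform skewed data, (2) apply a
# degrees-of-freedom correction (Welch's t-test) when variances differ.
# The generated data are purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.exponential(scale=2.0, size=30)   # positively skewed data
group_b = rng.exponential(scale=3.0, size=30)

# (1) square-root transform to reduce the skew before a standard t-test
t_transformed = stats.ttest_ind(np.sqrt(group_a), np.sqrt(group_b))

# (2) Welch's correction: do not assume equal variances (adjusts the df)
t_welch = stats.ttest_ind(group_a, group_b, equal_var=False)

print(t_transformed)
print(t_welch)
```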
11
Q
bootstrapping
A
repeatedly resampling from your data (with replacement) to create a new, artificially enlarged sample (e.g., 10,000 samples of n – 1); see the sketch after this card
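A minimal bootstrap sketch, assuming the common resample size of n (the card's n – 1 variant would only change the `size=` argument); the data values are illustrative.

```python
# Bootstrap sketch: resample the data with replacement many times and use
# the distribution of resampled means for inference (e.g., a 95% CI).
import numpy as np

rng = np.random.default_rng(42)
data = np.array([3.1, 4.5, 2.8, 5.0, 3.9, 4.2, 3.5, 4.8])

boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(10_000)          # 10,000 bootstrap samples, as on the card
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: [{ci_low:.2f}, {ci_high:.2f}]")
```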
12
Q
jackknifing
A
involves increasing the sample size by taking all combinations of n – m samples (usually m = 1); see the sketch after this card
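A minimal jackknife sketch with m = 1 (all leave-one-out subsamples), using the standard jackknife standard-error formula as one typical use; the data values are illustrative.

```python
# Jackknife sketch: form every leave-one-out (n - m, with m = 1) subsample,
# recompute the statistic on each, and use the spread of those estimates
# (here, to estimate the standard error of the mean).
import numpy as np

data = np.array([3.1, 4.5, 2.8, 5.0, 3.9, 4.2, 3.5, 4.8])
n = len(data)

# all n leave-one-out subsamples of size n - 1
jack_means = np.array([np.delete(data, i).mean() for i in range(n)])

# jackknife estimate of the standard error of the mean
se_jack = np.sqrt((n - 1) / n * np.sum((jack_means - jack_means.mean()) ** 2))
print(f"Jackknife SE of the mean: {se_jack:.3f}")
```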
13
Q
nonparametric tests
A
- Values from these tests are not expected to generalize to population parameters.
- That does not mean that the results or inferences cannot generalize.
- These tests rely on non-continuous data (ordinal or nominal); continuous data that violates an assumption is transformed to non-continuous data.
14
Q
Mann-Whitney-U
A
- equivalent of the independent-samples t test, using ordinal rankings
- where continuous data is no longer valid/generalizable, it transforms the data into categorical ranks
- this removes effects of outliers, nonhomogeneous variances, issues of normality, and other issues that can affect the spread of the data (see the sketch after this card)
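A minimal sketch of a Mann-Whitney U test, using scipy as one common implementation; the group scores are illustrative.

```python
# Mann-Whitney U: rank-based equivalent of the independent-samples t-test.
from scipy.stats import mannwhitneyu

group_a = [12, 15, 14, 10, 18, 13]   # illustrative scores
group_b = [22, 25, 17, 30, 21, 19]

u_stat, p_value = mannwhitneyu(group_a, group_b, alternative='two-sided')
print(f"U = {u_stat}, p = {p_value:.3f}")
```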
15
Q
Wilcoxon Signed Rank
A
- equivalent of the paired-samples t test, using ordinal rankings
- works very similarly to the MWU; it involves ranking the difference scores (see the sketch after this card)
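A minimal sketch of a Wilcoxon signed-rank test on paired scores, again using scipy as one common implementation; the pre/post values are illustrative.

```python
# Wilcoxon signed-rank: rank-based equivalent of the paired-samples t-test,
# built on the ranked difference scores between the two conditions.
from scipy.stats import wilcoxon

pre  = [10, 12, 9, 14, 11, 13, 8, 15]   # illustrative paired scores
post = [12, 15, 10, 17, 13, 16, 9, 18]

w_stat, p_value = wilcoxon(pre, post)   # internally ranks the post - pre differences
print(f"W = {w_stat}, p = {p_value:.3f}")
```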