Chapter 11 Flashcards
What is the problem in analyzing experimental data?
If the independent variable has an effect on the dependent variable, the means for the experimental conditions should differ
Total variance = systematic variance + error variance
However, error variance can also cause the means to differ
- So, the condition means could differ even if the independent variable had no effect
Then, how do we know whether the difference among means is caused by the independent variable or by error variance?
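The problem is easy to see in a simulation: draw two groups from the same population (so the independent variable has no effect) and the sample means still differ, purely because of error variance. A minimal sketch with made-up numbers, assuming numpy is available:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two groups drawn from the SAME population, so the IV has no effect:
    # any difference between the condition means here is pure error variance.
    control = rng.normal(loc=50, scale=10, size=20)
    treatment = rng.normal(loc=50, scale=10, size=20)

    print(control.mean(), treatment.mean())  # the sample means still differ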
sources of error variance
individual differences
transient states
environmental factors
differential treatment
measurement error
individual differences
pre-existing differences between people; this is the most common source of error variance
transient states
at the time of the experiment, participants differ in how they feel (e.g., mood, health, fatigue, interest)
environmental factors
differences in the environment in which the study is conducted (e.g., noise, time of day, temperature)
differential treatment
despite their best efforts, experimenters do not always treat all participants exactly the same
measurement error
unreliable measures increase error variance
3 ways to evaluate whether the difference between means is meaningful
significance testing
effect size
confidence intervals
significance testing
estimates the probability that the difference between the groups is due to error variance
Estimate how much the means should differ if the independent variable has no effect
If the observed mean difference exceeds this amount, then the independent variable may be having an effect
We cannot be certain that the difference was caused by the independent variable, but we can know the probability that error variance alone would produce a difference this large
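For illustration, a two-sample t test performs exactly this estimate. A minimal sketch with hypothetical data, assuming scipy is installed:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(50, 10, 30)     # hypothetical control scores
    treatment = rng.normal(55, 10, 30)   # hypothetical 5-point true effect

    # p estimates how likely a difference this large would be if only
    # error variance were operating (i.e., if the null hypothesis were true)
    t, p = stats.ttest_ind(treatment, control)
    print(f"t = {t:.2f}, p = {p:.4f}")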
effect size
indicates whether the size of the difference between the groups is noteworthy
confidence intervals
expresses the difference between the groups relative to the precision of the data
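Both ideas can be computed directly for the same hypothetical groups. A sketch assuming equal group sizes of 30 (Cohen's d with a pooled standard deviation; a 95% confidence interval for the mean difference via the t distribution):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(50, 10, 30)
    treatment = rng.normal(55, 10, 30)

    diff = treatment.mean() - control.mean()

    # Cohen's d: the mean difference in pooled-standard-deviation units
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    d = diff / pooled_sd

    # 95% confidence interval for the mean difference
    se = pooled_sd * np.sqrt(1/30 + 1/30)
    t_crit = stats.t.ppf(0.975, df=58)   # df = n1 + n2 - 2
    print(f"d = {d:.2f}, 95% CI = ({diff - t_crit*se:.2f}, {diff + t_crit*se:.2f})")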
null hypothesis statistical testing
used to determine whether differences between the means of the experimental conditions are greater than expected on the basis of error variance alone
null hypothesis
the independent variable did not have an effect on the dependent variable
experimental hypothesis
the independent variable did have an effect on the dependent variable
Although we are really interested in the experimental hypothesis, inferential statistics test the null hypothesis
problems with null hypothesis statistical testing
p-hacking
the information it provides is less precise and informative than other approaches
p-hacking
p-value fishing
when researchers overanalyze their data in search of significance (needed for publication)
Performing many unplanned analyses increases the likelihood of finding effects on the basis of Type I error alone
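The inflation is easy to demonstrate by simulation: when the null is true for every test, the chance that at least one of ten unplanned analyses comes out "significant" is roughly 1 - .95^10, about .40 rather than .05. A sketch assuming independent tests:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    fished = 0
    for _ in range(2000):   # 2000 simulated studies, null true throughout
        # 10 unplanned analyses per study; no real effects exist anywhere
        ps = [stats.ttest_ind(rng.normal(0, 1, 20), rng.normal(0, 1, 20)).pvalue
              for _ in range(10)]
        if min(ps) < .05:   # "found something" by fishing
            fished += 1
    print(fished / 2000)    # far above the nominal .05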
p-value
the probability that the obtained difference between the condition means is due to error variance
Ranges from 0 to 1.
The closer the p-value is to 1, the more likely it is that the difference between the means is what one would expect based on error variance alone
- A p-value of 0.05 means the probability of getting our difference on the basis of error variance is 0.05 or 5%
- A p-value of 0.01 means that the probability of getting our difference on the basis of error variance is .01 or 1 in 100
- A p-value of 0.001 means the probability of getting our difference on the basis of error variance is only 1 in 1000
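This interpretation can be read off a simulation: generate many experiments in which only error variance operates and count how often a difference at least as large as the observed one appears. A rough sketch with invented numbers (population mean 50, SD 10, n = 20 per group, hypothetical observed difference of 5):

    import numpy as np

    rng = np.random.default_rng(3)
    observed_diff = 5.0   # hypothetical observed difference between condition means

    # 10,000 experiments in which the IV has no effect (same population twice)
    means_a = rng.normal(50, 10, size=(10_000, 20)).mean(axis=1)
    means_b = rng.normal(50, 10, size=(10_000, 20)).mean(axis=1)
    null_diffs = means_a - means_b

    # Two-tailed: how often does error variance alone produce a difference
    # at least as large as the one observed?
    print(np.mean(np.abs(null_diffs) >= observed_diff))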
Rejecting the null hypothesis
researcher concludes that the null hypothesis is wrong and that the independent variable did have an effect (there is a group difference!)
Failing to reject the null hypothesis
researcher concludes that there is no evidence that the independent variable had an effect (no group difference was detected!)
We cannot say that we “accept” the null hypothesis, because the null hypothesis can never be proven as there may be several other explanations
Statistically significant
the difference has a low probability of occurring as a result of error variance alone; when we reject the null hypothesis, we do so with a low probability of making a Type I error
type I error
a researcher rejects the null hypothesis when it is true (i.e., reporting a significant finding when in fact there was no real difference between the groups)
alpha
false positive
example:
- finding that a drug is effective when in fact it is not
- sending an innocent person to jail
alpha
the probability of making a Type I error (and erroneously believing that an effect was due to IV when it was actually due to error variance)
false positive
a test comes back positive when in fact there is no virus
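A quick simulation check of the definition: test at alpha = .05 across many studies in which the null is true, and about 5% of them come out falsely "significant". A sketch assuming scipy:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    false_positives = sum(
        stats.ttest_ind(rng.normal(0, 1, 25), rng.normal(0, 1, 25)).pvalue < .05
        for _ in range(5000)   # 5000 studies in which the null is TRUE
    )
    print(false_positives / 5000)   # close to alpha = .05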
type II error
a researcher fails to reject the null hypothesis when it is false
beta
false negative
example:
- failing to find that a drug works when in fact it does
- failing to send a guilty person to jail
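Beta can be estimated the same way: simulate studies in which the effect is real and count how often the test fails to reach significance. A sketch assuming a half-standard-deviation true effect and n = 20 per group:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    misses = sum(
        # the null is FALSE here: a real half-SD effect exists
        stats.ttest_ind(rng.normal(0.5, 1, 20), rng.normal(0, 1, 20)).pvalue >= .05
        for _ in range(5000)
    )
    print(misses / 5000)   # estimated beta; power = 1 - beta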