Week 4 - Lecture 4 Flashcards
what is statistical power?
The probability that a test will detect an effect if one exists (i.e., correctly rejecting a false null hypothesis)
- formula: power = 1 - β, where β is the Type II error rate
- higher power means a greater chance of detecting a true effect
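A minimal sketch of how power can be computed, assuming Python with `statsmodels` available; the effect size, group size, and alpha below are illustrative, not from the lecture:

```python
# Analytic power for an independent-samples t-test (sketch).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5,  # Cohen's d (medium effect, assumed)
                       nobs1=64,         # participants per group (assumed)
                       alpha=0.05)       # Type I error rate
print(f"power = {power:.2f}, beta = {1 - power:.2f}")  # power ~ 0.80
```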
what does the null hypothesis (H0) state in a study?
That there is no effect or no difference
what does the alternative hypothesis (H1) state?
That there is an effect or a difference exists
What is a Type I error?
Saying there is an effect when there isn’t one (false positive)
What is the symbol used to represent a Type I error?
Alpha (α)
What is the common value set for alpha (α) in research?
0.05 (or 5% chance of making a Type I error)
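A quick Monte Carlo sketch (illustrative, assuming Python with NumPy and SciPy) of what α = 0.05 means: when H0 is true, about 5% of tests still come out significant.

```python
# Simulate many studies where H0 is true (both groups from the same
# population) and count how often p < 0.05 (a Type I error).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10_000
false_positives = sum(
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue < 0.05
    for _ in range(n_studies)
)
print(false_positives / n_studies)  # ~0.05, i.e. alpha
```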
Give an example of a Type I error
A pregnancy test says someone **is pregnant** when they are **not**
What is a Type II error?
Saying there is no effect when there actually is one (false negative)
What is the symbol for a Type II error?
Beta (β)
Give an example of a Type II error
A pregnancy test says someone is not pregnant when they actually **are**
What does **power** refer to in significance testing?
The ability to correctly detect an effect when it exists (1 - β).
What happens to power when beta increases?
Power decreases (you’re more likely to miss real effects)
In terms of errors, why do we not always reduce alpha to 0.01?
Because it would increase beta and make us more likely to miss real effects (Type II errors)
What does it mean if power = 0.80?
There is an 80% chance of correctly finding a real effect if it exists
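A minimal sketch of turning the 0.80 convention into a required sample size, again assuming `statsmodels`; the medium effect size d = 0.5 is an assumption for illustration:

```python
# Solve for the per-group n that gives 80% power at alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # assumed d
                                          alpha=0.05,
                                          power=0.80)
print(round(n_per_group))  # ~64 participants per group
```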
Complete the significance-testing decision table:

| Decision | H0 is true (no real effect) | H0 is false (real effect) |
| --- | --- | --- |
| Reject H0 | Type I error (α, false positive) | Correct decision (power = 1 - β) |
| Fail to reject H0 | Correct decision | Type II error (β, false negative) |
Does statistical significance always mean the result is practically important?
No. A result can be statistically significant but still be **too small to matter in real life**
How does a large sample size affect statistical significance?
It can make tiny effects appear **statistically significant**
What can happen with a small sample size?
It might miss real effects because there’s not enough data
What is an example of a study with a statistically significant but practically unimportant result?
Facebook’s 2014 emotional contagion study - the massive sample made the result statistically significant even though the effect size was only d = 0.02, which is very small.
What happens to power if you lower alpha (e.g., from 0.05 to 0.01)?
Power decreases (but you reduce the chance of a Type I error).
What happens to power if you increase alpha (e.g., from 0.05 to 0.10)?
Power increases (but so does the chance of a Type I error)
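A short sketch tying the last two cards together (same assumed design as above: d = 0.5, n = 64 per group): a stricter alpha lowers power, a looser alpha raises it.

```python
# Power of the same t-test design at three alpha levels (sketch).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.01, 0.05, 0.10):
    p = analysis.power(effect_size=0.5, nobs1=64, alpha=alpha)
    print(f"alpha = {alpha:.2f} -> power = {p:.2f}")
# alpha = 0.01 gives the lowest power; alpha = 0.10 gives the highest.
```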
What does “effect size” tell us?
How big the difference or relationship is, measured in standard deviation units (e.g., Cohen’s d)
What are Cohen’s effect size benchmarks for small, medium and large effects?
- small = 0.2
- medium = 0.5
- large = 0.8
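A minimal sketch of computing Cohen’s d from two samples using the pooled standard deviation; the data here are simulated for illustration:

```python
# Cohen's d = (mean1 - mean2) / pooled SD.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

rng = np.random.default_rng(1)
treatment = rng.normal(0.5, 1, 100)  # simulated scores, true d = 0.5
control = rng.normal(0.0, 1, 100)
print(f"d = {cohens_d(treatment, control):.2f}")  # near 0.5 (medium)
```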
How does sample size affect power?
A larger sample size gives more power by reducing the standard error
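A closing sketch (same assumed design: d = 0.5, alpha = 0.05) showing power climbing as the per-group sample size grows:

```python
# Power at increasing sample sizes (sketch, assuming statsmodels).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (20, 40, 80, 160):
    p = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n per group = {n:3d} -> power = {p:.2f}")
# Larger n shrinks the standard error, so real effects are easier to detect.
```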