Module 10, Errors in Hypothesis Testing & Statistical Power Flashcards

1
Q

Type 1 Error

A

the null hypothesis is true, but we reject it and conclude that an effect exists
- example: we conclude that there was a significant difference in the number of concussions between teams who practice tackling with helmets and those who do not BUT there is no difference between the two populations
* in this example, the Type I error would be concluding that teams wearing helmets had more concussions when that is not actually true

2
Q

Probability of Making a Type I error / Not

A

probability of making a Type I error:
p(Type I error) = α (the probability of rejecting the null hypothesis when it is actually true)

p(not making Type I error) = 1 – α
- 95% chance of not making a Type I error (if you set alpha to 5%, there is a 95% chance that our sample lines up with our population and 5% that we made an error)
- we do not actually know when we have made a Type I error; we can only be confident that there is a 95% chance we got it right
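The claim that p(Type I error) = α can be checked by simulation. A minimal sketch (not from the module): draw two groups from the *same* population, so H0 is true by construction, and count how often an approximate two-sample z-test rejects anyway. The group size of 50 and the 10,000 repetitions are arbitrary assumed values.

```python
import math
import random
import statistics

random.seed(42)

# Simulate many experiments where H0 is TRUE (both groups come from the
# same population). With alpha = .05, roughly 5% of experiments should
# still reject H0 -- each of those rejections is a Type I error.
def one_experiment(n=50):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # two-tailed critical value for alpha = .05

rejections = sum(one_experiment() for _ in range(10_000))
print(rejections / 10_000)  # close to alpha = 0.05
```

The false alarms occur purely through sampling fluctuation, which is exactly the "why does Type I error occur?" answer below.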

3
Q

Why is Type I Error a Concern?

A
  • saying that an effect exists when it does not communicates false information (the sample was not representative of the population so it leads you to a false conclusion)
  • imagine researchers say there was a positive effect of a drug when in fact there was none
4
Q

Why does Type I Error Occur?

A

due to chance factors and fluctuations in sampling

5
Q

Type II Error

A

has to do with the alternative hypothesis
the null hypothesis is false, but we fail to reject it and conclude that no effect exists
- example: we conclude that there was no significant difference in the number of concussions between teams who practice tackling with helmets and those who do not BUT there is a difference between the two populations
- alternative hypothesis is true in the population but the sample failed to detect it
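Type II errors can be simulated the same way as Type I errors: make H1 true by construction and count the misses. A sketch under assumed values (a true group difference of 0.3, n = 30 per group, α = .05; none of these numbers come from the module):

```python
import math
import random
import statistics

random.seed(0)

# Simulate experiments where H1 is TRUE (the populations really differ
# by 0.3). Every experiment that fails to reject H0 is a Type II error.
def fails_to_reject(n=30, true_diff=0.3):
    a = [random.gauss(0.0, 1) for _ in range(n)]
    b = [random.gauss(true_diff, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return abs(z) <= 1.96  # test statistic stayed outside the rejection region

misses = sum(fails_to_reject() for _ in range(5_000))
print(misses / 5_000)  # an estimate of beta, the Type II error rate
```

With this small effect and sample, most experiments miss the real difference: the samples fail to detect what is true in the population.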

6
Q

Probability of Making a Type II Error / Not (Statistical Power)

A

p(Type II error) = β
- β is difficult to determine, because we frequently don’t know true population values
- figuring out probabilities for the alternative hypothesis is trickier, so instead we talk about it in terms of power

probability of not making a Type II error (use this):
p(not making a Type II error) = 1 - β
- statistical power = 1 - β
statistical power is the ability to detect an effect when it does exist in the population
- β itself is hard to determine, so we work with its complement (1 − β), whereas for Type I error we can use α directly
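For simple cases, power = 1 − β can be computed directly. A sketch assuming a one-sample two-tailed z-test with hypothetical numbers (true mean 0.5, σ = 1, n = 25, α = .05):

```python
from statistics import NormalDist

# power = 1 - beta: the probability the test statistic lands in the
# rejection region when H1 is true. All default values are assumed examples.
def power(mu_true=0.5, sigma=1.0, n=25, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    shift = mu_true / (sigma / n ** 0.5)          # distance of H1 from H0, in SE units
    upper = 1 - NormalDist().cdf(z_crit - shift)  # reject in the upper tail
    lower = NormalDist().cdf(-z_crit - shift)     # reject in the lower tail
    return upper + lower

print(round(power(), 3))  # roughly 0.7 for these assumed numbers, so beta is roughly 0.3
```

Note the calculation needs the true population mean as an input, which is exactly why β is "difficult to determine" in real research: we rarely know that value.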

7
Q

Why is Type II Error a Concern?

A

it may result in the non-communication of correct information
- researcher believes no effect exists, they may be less likely to attempt to publish the research (also less likely to get the findings published)
- example: drug B is better than drug A, but we fail to pick up on that, so patients miss out on the better treatment

8
Q

Why Does Type II Error Occur?

A
  • random or chance fluctuations
  • small effects that are difficult to detect (need more power to detect)
  • research design issues
9
Q

Controlling Type I Error (1)

A

make it more difficult to reject H0
- lower the value of α from .05 to .01 (make the probability lower)
◦ harder for test statistics to fall in the region of rejection because those regions shrink, which makes it harder to reject the null hypothesis
- BUT a more stringent α also increases the probability of making a Type II error
◦ the more difficult we make it to reject H0, the more likely we are to miss effects that actually exist
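The α-vs-β trade-off can be made concrete with a z-test sketch (the effect of 2.5 standard errors is an assumed example, not from the module): lowering α pushes the critical value outward, shrinking the rejection region, and power falls with it.

```python
from statistics import NormalDist

def power(shift, alpha):
    # Probability of rejecting H0 when the true effect sits `shift`
    # standard errors away from H0 (two-tailed z-test sketch)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return (1 - NormalDist().cdf(z_crit - shift)) + NormalDist().cdf(-z_crit - shift)

shift = 2.5  # assumed example effect, in standard-error units
for alpha in (0.05, 0.01):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"alpha={alpha}: critical z = {z_crit:.2f}, power = {power(shift, alpha):.3f}")
```

With these assumed numbers, moving α from .05 to .01 cuts power from roughly .7 to roughly .47, i.e. β rises by about the same amount that Type I risk falls.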

10
Q

Controlling Type II Error (5)

A

make it more reasonable to reject H0 (pick up on effect as much as we can)
1) increase sample size (the more likely it is representative of the population)
- larger sample size yields lower critical values
◦ however, with very large samples even trivial effects can become statistically significant

2) raise α
- however, this increases the probability of Type I error (do not really use this)

3) use a directional alternative hypothesis (H1)
- a directional hypothesis examines statistics in only one tail of the distribution
◦ thereby consolidating α (the region of rejection) into a single tail
- not best practice, as it can look like the two-tailed test was not significant so you switched to a one-tailed test after the fact

4) increase between-group variability
- maximize the differences (effect size) between groups by using effective experimental manipulations
- effects need to be large enough to actually pick up on; otherwise far more power is required
- this affects the numerator of the test statistic: the more the group means differ from one another, the larger the t statistic, making it more likely to fall in the region of rejection

5) decrease within-group variability
- minimize the error variance (variance that cannot be explained or accounted for) within a group by increasing the sample size and standardizing testing procedures
- reduce noise within each group so that scores within a group vary as little as possible
- this affects the denominator of the test statistic, making it smaller and the t statistic larger
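Levers 1, 4, and 5 above all act through the same ratio: effect size (numerator) over standard error (denominator). A sketch with assumed example numbers, again using a one-sample two-tailed z-test:

```python
from statistics import NormalDist

def power(effect, sigma, n, alpha=0.05):
    # Power of a one-sample two-tailed z-test (all inputs are assumed examples)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    shift = effect / (sigma / n ** 0.5)  # between-group signal over within-group noise
    return (1 - NormalDist().cdf(z_crit - shift)) + NormalDist().cdf(-z_crit - shift)

baseline = power(0.5, 1.0, 25)
print(round(baseline, 2))             # starting point
print(round(power(0.5, 1.0, 50), 2))  # 1) larger sample            -> higher power
print(round(power(0.8, 1.0, 25), 2))  # 4) larger effect size       -> higher power
print(round(power(0.5, 0.7, 25), 2))  # 5) less within-group noise  -> higher power
```

Increasing n, widening the gap between group means, or shrinking within-group variability each raise power without touching α, which is why these levers are preferred over simply raising α.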
