Type 1 / Type 2 Errors Flashcards

(29 cards)

1
Q

● What is a Type I error?

A

When the researcher wrongly accepts the alternative hypothesis and rejects the null hypothesis

2
Q

● What is a Type II error?

A

When the researcher wrongly accepts the null hypothesis and rejects the alternative hypothesis

3
Q

● Why do psychologists use a 5% significance level?

A

To balance the risk of making Type I and II errors

4
Q

● What does p < 0.05 indicate?

A

Less than a 5% probability that the results are due to chance

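A quick worked example of the idea on card 4 (the scenario and numbers are made up purely for illustration): the sketch below computes the chance of getting 9 or more heads in 10 flips of a fair coin; because that chance is below 5%, the result would count as significant at p<0.05.

```python
from math import comb

# Probability of observing 9 or more heads in 10 flips of a fair coin,
# i.e. the chance this "result" happens by chance alone (illustrative only).
n = 10
p_value = sum(comb(n, k) for k in range(9, n + 1)) / 2 ** n

print(f"p = {p_value:.4f}")  # p = 0.0107
print("significant at p < 0.05" if p_value < 0.05 else "not significant")
```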
5
Q

● What is meant by a lenient p value?

A

A higher probability value, e.g. p<0.10

6
Q

● What is meant by a stringent p value?

A

A lower probability value, e.g. p<0.01

7
Q

● How do you check for a Type I error?

A

Compare the calculated value to the critical value at a more stringent p value (e.g. p<0.01)

8
Q

● How do you check for a Type II error?

A

Compare the calculated value to the critical value at a more lenient p value (e.g. p<0.10)

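Cards 7 and 8 describe a simple decision rule, sketched below in Python. This assumes a test (such as chi-squared) where the calculated value must be equal to or greater than the critical value to be significant; the numbers passed in are hypothetical, not real table entries.

```python
def check_type_I(calculated, critical_stringent):
    # Type I check: re-test against the critical value at a more stringent
    # p value (e.g. p < 0.01). Still significant -> no Type I error.
    if calculated >= critical_stringent:
        return "still significant: a Type I error has not been made"
    return "no longer significant: risk of a Type I error"

def check_type_II(calculated, critical_lenient):
    # Type II check: re-test against the critical value at a more lenient
    # p value (e.g. p < 0.10). Still not significant -> no Type II error.
    if calculated >= critical_lenient:
        return "now significant: risk of a Type II error"
    return "still not significant: a Type II error has not been made"

# Hypothetical calculated and critical values, for illustration only.
print(check_type_I(calculated=7.2, critical_stringent=6.63))
print(check_type_II(calculated=2.1, critical_lenient=2.71))
```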
9
Q

● What is another name for a Type I error?

A

False positive

10
Q

● What is another name for a Type II error?

A

False negative

11
Q

● What is the probability that something is due to chance with p<0.10?

A

Less than 10%

12
Q

● What is the probability that something is due to chance with p<0.01?

A

Less than 1%

13
Q

● What can you conclude if results are still significant at p<0.01?

A

You can be more than 99% sure the results are significant and not due to chance

14
Q

● What happens if results are not significant at p<0.01?

A

There is a risk of a Type I error

15
Q

● What happens if results are significant at a more lenient p value?

A

There is a risk of a Type II error

16
Q

▲ Why is it important to use the critical value table for checking errors?

A

To see if significance changes with different p values

17
Q

▲ How does using a more stringent p value affect confidence?

A

Increases confidence if results are still significant and not due to chance

18
Q

▲ What is the risk if the results are not significant at a stringent level?

A

Possible Type I error

19
Q

▲ What is the risk if the results are significant at a more lenient level?

A

Possible Type II error

20
Q

▲ What should you state when writing up an error check?

A

Which error you checked for and your level of confidence

21
Q

▲ How does the choice of p value affect the risk of errors?

A

A lenient p value increases the risk of a Type I error; a stringent p value increases the risk of a Type II error
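Card 21's claim can be checked by simulation. A rough sketch, assuming a two-tailed one-sample z-test on normally distributed data with a known standard deviation of 1; the effect size, sample size, and number of simulated studies are arbitrary choices made for illustration.

```python
import random
from math import erf, sqrt

def p_value(sample):
    # Two-tailed one-sample z-test against a mean of 0 (sd known to be 1).
    n = len(sample)
    z = abs(sum(sample) / n) * sqrt(n)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def significant_rate(true_mean, alpha, runs=5000, n=25):
    # Fraction of simulated studies declared significant at this p level.
    hits = sum(
        p_value([random.gauss(true_mean, 1) for _ in range(n)]) < alpha
        for _ in range(runs)
    )
    return hits / runs

for alpha in (0.10, 0.05, 0.01):
    type_I = significant_rate(true_mean=0.0, alpha=alpha)       # null is true
    type_II = 1 - significant_rate(true_mean=0.4, alpha=alpha)   # real effect missed
    print(f"p<{alpha:.2f}: Type I rate ~{type_I:.3f}, Type II rate ~{type_II:.3f}")
```

Running it shows the Type I rate tracking the chosen p level (higher when lenient) while the Type II rate rises as the p level gets more stringent.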

22
Q

▲ When do you have more than 99% confidence in results?

A

When results are still significant at p<0.01

23
Q

▲ Why might a drugs trial use a p value of 0.01?

A

Because people’s lives are at risk

24
Q

▲ Why does p<0.05 strike a balance?

A

Balances risks of Type I and II errors

25
✪ Discuss what a Type I error is.
A Type I error is when the researcher has used a lenient p value. The researcher thinks the results are significant when they are actually due to chance/error, so they wrongly accept the alternative/experimental hypothesis and wrongly reject the null hypothesis.
26
✪ Discuss what a Type II error is.
A Type II error is when the researcher has used a stringent p value. They think their results are not significant (due to chance/error) when they could actually be significant, so they wrongly accept the null hypothesis and wrongly reject the alternative/experimental hypothesis.
27
✪ Explain the difference between Type I and II errors.
A Type I error occurs when a lenient p value is used and the null hypothesis is wrongly rejected (a false positive), whereas a Type II error occurs when a stringent p value is used and the null hypothesis is wrongly accepted (a false negative).
28
✪ Explain how to check for a Type I error.
To check for a Type I error, compare the calculated/observed value to the critical value at a more stringent p value. If the results are still significant, then a Type I error has NOT been made. If the results are now not significant, then there is a chance a Type I error has been made.
29
✪ Explain how to check for a Type II error.
To check for a Type II error, compare the calculated/observed value to the critical value at a more lenient p value (e.g. p<0.05). If the results are still not significant, then a Type II error has not been made. If the results are NOW significant at the more lenient level, then there is a risk that a Type II error has been made.
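Pulling cards 20, 28 and 29 together, here is a sketch of a full error check plus the kind of write-up sentence card 20 asks for. The calculated value and critical values are hypothetical placeholders, and it again assumes a test where significance requires the calculated value to reach or exceed the critical value.

```python
# Hypothetical critical values at three p levels for a made-up test.
CRITICAL = {0.10: 2.71, 0.05: 3.84, 0.01: 6.63}
calculated = 4.20   # hypothetical calculated/observed value

significant_at = {p: calculated >= crit for p, crit in CRITICAL.items()}

if significant_at[0.05]:
    # Significant at p < 0.05, so check for a Type I error
    # by re-testing at the more stringent p < 0.01.
    if significant_at[0.01]:
        print("Checked for a Type I error: still significant at p < 0.01, "
              "so I am more than 99% confident the results are not due to chance.")
    else:
        print("Checked for a Type I error: no longer significant at p < 0.01, "
              "so there is a risk a Type I error has been made.")
else:
    # Not significant at p < 0.05, so check for a Type II error
    # by re-testing at the more lenient p < 0.10.
    if significant_at[0.10]:
        print("Checked for a Type II error: now significant at p < 0.10, "
              "so there is a risk a Type II error has been made.")
    else:
        print("Checked for a Type II error: still not significant at p < 0.10, "
              "so a Type II error has not been made.")
```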