Lecture 6 Flashcards

(14 cards)

1
Q

Two more issues need to be addressed before we ‘turn the page’ and move on to experimental designs:

A

Statistical power

Treatment of missing values

2
Q

white swans/black swans example: how many observations, and in how many different places, do we need to make?

Would you trust me if I said “I checked out one lake and one pond, and I only found 3 white swans”?

Or if I said “I visited 27 lakes and saw 54 swans, but the light was poor for about a third of these visits and I couldn’t really see colour very well”?

A

(too few observations in too few places > low power)

(colour could not be recorded on a third of the visits > missing data)

3
Q

Statistical Power

A

Power is the probability of not making a Type II error

4
Q

Four outcomes of hypothesis testing:

Ho is true/do not reject Ho =
Ho is true/reject Ho =
Ho is false/do not reject Ho =
Ho is false/reject Ho =

We can correctly or incorrectly accept or reject the null hypothesis (Ho).

What affects the accuracy of our decision-making?

A

correct decision
incorrect decision - Type I error (alpha)
incorrect decision - Type II error (beta)
correct decision

Statistical power—the more statistical power in the design, the better our decision-making (i.e., more power reduces the probability of committing a Type II error)
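
Stated compactly, these are the standard definitions behind the four outcomes above (a restatement in conventional notation, not extra lecture content):

```latex
\begin{aligned}
\alpha &= P(\text{reject } H_0 \mid H_0 \text{ true})          && \text{Type I error rate}\\
\beta  &= P(\text{do not reject } H_0 \mid H_0 \text{ false})  && \text{Type II error rate}\\
\text{power} &= 1 - \beta = P(\text{reject } H_0 \mid H_0 \text{ false})
\end{aligned}
```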

5
Q

Values of Power
statistical power is the probability of…

Values of power run from 1 (perfect ability to avoid this error) to 0 (totally wrong all of the time).

A

NOT committing a Type II error (the probability of committing one is called beta, so power = 1 - beta).

want power to be on the high side (around .80).

6
Q

Not rejecting Ho when it is false:

Ho is false =
Do not reject Ho =

The error we are making here is to NOT reject the null hypothesis when in the real world it is actually false.

A

in the world there is a real difference between these two means (i.e., if we tested this difference 100 times, we would find a statistically significant difference almost all of the time)
this test is one of the few times it didn't come out as different

we run a statistical test, e.g., a t-test, and the p-value is greater than .05, so we conclude there is no difference in means (i.e., we accept the null hypothesis)
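
A minimal Python simulation sketch of this situation (illustrative numbers, not the lecture's data): when a real mean difference exists, a large-sample t-test rejects Ho almost every time, while a small-sample test frequently fails to reject it, i.e., commits a Type II error.

```python
# Simulate "testing this difference 100 times" at two sample sizes.
# true_diff, sd and the group sizes are made-up illustrative values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_diff, sd, alpha = 0.2, 0.7, 0.05

for n_per_group in (282, 12):
    rejections = 0
    for _ in range(100):
        a = rng.normal(0.0, sd, n_per_group)        # group 1
        b = rng.normal(true_diff, sd, n_per_group)  # group 2: Ho really is false
        p = stats.ttest_ind(a, b).pvalue
        rejections += p < alpha
    print(f"n per group = {n_per_group:3d}: rejected Ho in {rejections}/100 tests")
```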

7
Q

EXAMPLE:
sample of 564 emerging adults
do females report more stress than males?

Means: 2.1335 (female) – 1.9159 (male) = 0.2176

p-value = .002

Observed Power = .861

Reduce Sample Size (N = 54)
Means: 2.297 (female) – 1.920 (male) = 0.377
p-value = .051
Observed Power = .502

Reduce Sample Size (N = 24)
p-value = .286
Observed Power = .182

What is the pattern developing here?

Conclusion:
Corollary:

A

absolute size of the difference is not very large.

p-value is less than .05 > reject the null hypothesis. This finding is consistent with a statistically significant mean group difference for stress: females > males.

analysis performed with good power.

the difference is non-significant and power is lower

With smaller sample sizes, power is reduced, and the same mean group difference is no longer statistically significant.

larger samples confer greater statistical power, enabling one to identify mean group differences more reliably.
findings obtained with small samples are not reliable (i.e., unlikely to be replicated).
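
To reproduce this pattern outside SPSS, here is a rough sketch using statsmodels' a priori power calculator for an independent-samples t-test. The effect size d = 0.3 is an assumed, illustrative value, not one computed from the lecture's data; only the qualitative pattern (power falls as N falls) is the point.

```python
# A priori power of a two-group t-test at decreasing sample sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (282, 27, 12):   # roughly N = 564, 54, 24 split into two groups
    power = analysis.solve_power(effect_size=0.3, nobs1=n_per_group,
                                 alpha=0.05, ratio=1.0)
    print(f"n per group = {n_per_group:3d}  ->  power = {power:.2f}")
```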

8
Q

Power is…
A Type II error is…

Translation: in the world, there is a…
so mistakenly do…
The reason for this failure to reject the null hypothesis is that the…
When we tested it with…

A

the probability of not making a Type II error
“not rejecting the null hypothesis when it is actually false”

true difference but the statistical test yielded a p-value greater than .05 (p = .29)
NOT reject the null hypothesis.
test was ‘underpowered’ (sample size too small).
an ample sample size, we readily found that p < .05.

9
Q

Take-home message about power:

Larger samples confer…

A

greater statistical power for one’s tests. Therefore, avoid samples that are too small.

10
Q

Missing Values:

  1. Ignore the missing values and…
  2. Impute the missing values with…
A

allow SPSS to perform listwise or pairwise deletions

a proper method

11
Q

Listwise Deletion:

an analysis drops only those participants that have a…

A

missing value on any of the variables involved in the analysis

12
Q

Pairwise Deletion:
If you conduct correlations on a variety of variables that are missing different values, what do you get?

A

each correlation is computed using only the cases that have complete data on that particular pair of variables, so the correlations in one analysis end up based on different Ns. This is bad.
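
A tiny pandas sketch of the same problem (made-up variable names and values, not the lecture's dataset): listwise deletion keeps only the complete cases, while pairwise correlations each rest on a different N.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "stress":  [2.1, 1.9, np.nan, 2.4, 2.0, 1.8],
    "support": [3.0, np.nan, 2.5, 2.8, 3.1, 2.9],
    "sleep":   [6.5, 7.0, 6.0, np.nan, 7.5, 6.8],
})

# Listwise deletion: drop any row with a missing value on any analysis variable.
print("listwise N =", len(df.dropna()))   # 3 complete cases out of 6

# Pairwise deletion: pandas computes each correlation from the cases complete
# on that pair, so each cell of the matrix can rest on a different N.
print(df.corr())
print(df.notna().astype(int).T @ df.notna().astype(int))  # the differing Ns
```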
13
Q
Imputing missing values in SPSS:
Click on...
Select...
Type... 
SPSS will generate a...
A

Missing Value Analysis
variables and EM
50 iterations
whole new dataset which has all of the imputed values (except for gender).
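
SPSS's EM routine itself isn't available outside SPSS, but the same idea (estimate each missing value from the other variables, leaving a categorical variable like gender alone) can be sketched in Python with scikit-learn's IterativeImputer. This is an analogous model-based imputer, not SPSS's EM algorithm, and the variable names are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer

df = pd.DataFrame({
    "gender":  [0, 1, 0, 1, 1],
    "stress":  [2.1, np.nan, 1.9, 2.4, np.nan],
    "support": [3.0, 2.8, np.nan, 2.9, 3.1],
})

numeric = ["stress", "support"]
imputer = IterativeImputer(max_iter=50, random_state=0)   # 50 iterations, as in the slides
df[numeric] = imputer.fit_transform(df[numeric])          # gender is left untouched
print(df)   # a complete dataset for the imputed variables
```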

14
Q

Why is imputation good?
increases the number of…

Why is more power good?
decreases the chance that…

statistical power and imputing missing values are…

A

participants who have complete data, so your sample size reaches its maximum: it increases power!

you’ll mistakenly accept the null hypothesis.

valuable because they maximise the value of your statistical analyses.
