Week 4 Flashcards

(13 cards)

1
Q

Statistical power

A
  • The probability that a study will detect an effect when there is a true effect to be detected
2
Q

Type 1 and Type 2 errors

A

Type I - there is no real effect, but you conclude that there is one (a false positive)

Type II - you are blind to a real difference and conclude that there is none (a false negative)

3
Q

Alpha

A
  • Alpha is the probability that we will reject the null hypothesis when we shouldn’t
  • That we say there is an effect when there isn’t one
  • This is our Type I Error!
  • Conventionally set at less than 5% (α = .05)
4
Q

Why not use a tiny alpha value

A
  • Because there is a relationship between alpha and beta
  • If we choose a tiny alpha value, we will make it difficult to reject the null hypothesis (Type II errors very common!)
  • If we choose a larger alpha value, Type II errors will be less common
  • Sensitivity vs Specificity
  • The risk of a false positive (Type I) vs the risk of a false negative (Type II); a numeric sketch follows below
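A minimal numeric sketch of this trade-off (not taken from the lecture): it uses a normal approximation to the power of a two-sided, two-sample test, and the effect size (d = 0.5) and group size (n = 30) are arbitrary values chosen only for illustration.

from scipy.stats import norm

def approx_power(d, n_per_group, alpha):
    """Normal-approximation power of a two-sided, two-sample test."""
    z_crit = norm.ppf(1 - alpha / 2)                 # critical z for the chosen alpha
    noncentrality = d * (n_per_group / 2) ** 0.5     # expected z when the effect is real
    return 1 - norm.cdf(z_crit - noncentrality)      # P(reject H0 | H0 is false)

for alpha in (0.05, 0.01, 0.001):                    # shrinking alpha...
    power = approx_power(d=0.5, n_per_group=30, alpha=alpha)
    print(f"alpha = {alpha:<5}  power = {power:.2f}  beta = {1 - power:.2f}")

With these made-up numbers, cutting alpha from .05 to .001 drops power from roughly .49 to .09, so beta (the Type II error rate) climbs from about .51 to .91.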
5
Q

Power

A
  • Statistical power is the likelihood that a study will detect an effect when there is a true effect to be detected
  • It is the probability of correctly rejecting the null hypothesis when it is false
Power = 1 – Probability of a false negative (Type II error)
Power = 1 – β
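As a quick worked example (the .80/.20 split is the common convention for an acceptable Type II error rate, not something stated on this card): if β = .20, then

Power = 1 – β = 1 – .20 = .80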
6
Q

Factors affecting statistical power

A

Alpha level
Error variance
Sample size
Effect size

7
Q

Sample size and statistical power

A
  • Sample size works in the same way as error variance does
  • As we test more people, we are better able to describe the underlying distribution
  • Our hypothetical distributions (based on our samples) get narrower, as the sketch below shows
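A minimal sketch (not from the lecture) of power rising with sample size, using the same normal approximation as in the alpha/beta card above; d = 0.5, alpha = .05, and the n values are arbitrary choices for illustration.

from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided, two-sample test."""
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - d * (n_per_group / 2) ** 0.5)

# Larger samples -> narrower sampling distributions -> more power for the same effect
for n in (10, 30, 60, 100, 200):
    print(f"n per group = {n:>3}  power = {approx_power(0.5, n):.2f}")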
8
Q

Effect size

A
  • Effect size is the relative distance between our null and true distributions
  • This distance is measured in standard deviation units
  • An effect size of 0 (zero) would mean no difference between groups (a “perfect” null result)
  • Effect size increases as two or more groups become “more” different from each other
  • This can help tell us if differences are practically meaningful
9
Q

Effect size measurements

A

Main Effect (ANOVA)

  • Eta Squared
  • Omega Squared

Multiple Comparisons (Planned contrast or Post-hoc)

  • r
  • Cohen’s d
10
Q

Eta squared

A
  • Used for main effect
  • Small (.01); Medium (.09); Large (.25)

\eta^2 = \frac{SS_{Between}}{SS_{Total}}
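A minimal sketch (not from the lecture) of computing eta squared by hand for a one-way design; the three groups of scores are made-up numbers used only to exercise the formula.

import numpy as np

# Hypothetical scores for three groups (illustrative data only)
groups = [np.array([4.0, 5.0, 6.0, 5.5]),
          np.array([6.0, 7.0, 6.5, 8.0]),
          np.array([8.0, 9.0, 7.5, 9.5])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# SS_Between: spread of the group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SS_Total: spread of every individual score around the grand mean
ss_total = ((all_scores - grand_mean) ** 2).sum()

eta_squared = ss_between / ss_total
print(f"eta^2 = {eta_squared:.3f}")  # proportion of total variance that is between groups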

11
Q

Omega squared

A
  • Used for main effect
  • Most accurate measure of effect size for main effect
  • Small (.01); Medium (.06); Large (.14)

\omega^2 = \frac{SS_B - df_B \cdot MS_W}{SS_T + MS_W}
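A minimal sketch (not from the lecture) of plugging ANOVA summary values into the omega-squared formula; the SS and df numbers below are hypothetical, and MS_W is recovered from SS_W = SS_T - SS_B, which holds for a one-way design.

# Hypothetical one-way ANOVA summary values (illustrative only)
ss_between = 24.0    # SS_B
ss_total = 60.0      # SS_T
df_between = 2       # df_B = k - 1 groups
df_within = 27       # df_W = N - k

ms_within = (ss_total - ss_between) / df_within   # MS_W = SS_W / df_W

omega_sq = (ss_between - df_between * ms_within) / (ss_total + ms_within)
print(f"omega^2 = {omega_sq:.3f}")   # slightly smaller than eta^2 = SS_B / SS_T = 0.400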

12
Q

Effect size for planned contrasts

A
  • r
  • Used for follow-up tests
  • Particularly useful for planned contrasts
  • Small (.10); Medium (.30); Large (.50)

r = \sqrt{\frac{t^2}{t^2 + df}}
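A minimal sketch (not from the lecture) of turning a contrast's t statistic into r; the two sets of scores are made-up, and scipy's independent-samples t-test stands in for whatever contrast the follow-up actually uses.

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical scores for two conditions (illustrative data only)
a = np.array([5.0, 6.0, 7.0, 5.5, 6.5, 7.5])
b = np.array([7.0, 8.0, 8.5, 7.5, 9.0, 8.0])

t, p = ttest_ind(a, b)        # independent-samples t-test for the contrast
df = len(a) + len(b) - 2      # degrees of freedom for this comparison

r = np.sqrt(t**2 / (t**2 + df))
print(f"t = {t:.2f}, df = {df}, r = {r:.2f}")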

13
Q

Effect size for post-hoc tests

A
  • Cohen’s d
  • Used for follow-up tests
  • Can be used for Tukey’s post-hoc tests
  • Small (.20); Medium (.50); Large (.80)

Step 1:
s_{pooled} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2}}

Step 2:
d = \frac{\bar{X}_1 - \bar{X}_2}{s_{pooled}}
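A minimal sketch (not from the lecture) of the two steps above, using made-up scores for two groups. It pools with n1 + n2 in the denominator exactly as written on this card; some texts pool with n1 + n2 - 2 instead.

import numpy as np

# Hypothetical scores for two groups (illustrative data only)
x1 = np.array([5.0, 6.0, 7.0, 5.5, 6.5, 7.5])
x2 = np.array([7.0, 8.0, 8.5, 7.5, 9.0, 8.0])
n1, n2 = len(x1), len(x2)

# Step 1: pooled standard deviation (n1 + n2 denominator, as on this card)
s_pooled = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2))

# Step 2: standardised mean difference
d = (x1.mean() - x2.mean()) / s_pooled
print(f"Cohen's d = {d:.2f}")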
