Power And Effect Size Flashcards

1
Q

With F-ratios that exceed F-critical we…

A
  • reject the null hypothesis
  • conclude that the independent variable(s) influence(s) the dependent variable
  • the effect is statistically significant.
2
Q

When a finding is not significant at the alpha level (p ≥ 0.05) we…

A
  • fail to reject the null hypothesis:
  • H0 (all means are equal) is retained, i.e. no evidence of an effect of the treatment
  • no evidence of a statistical difference.
3
Q

“no statistical difference” does not…

A
  • prove the null hypothesis.
  • We simply do not have evidence to reject it.
  • A failure to find a significant effect does not necessarily mean the means are equal.
4
Q

So it is difficult to have confidence in the null hypothesis:

A

Perhaps an effect exists, but our data is too noisy to demonstrate it.

5
Q

Sometimes we will incorrectly fail to reject the null hypothesis –

A
  • a Type II error.
  • There really is an effect, but we did not find it.

6
Q

Statistical power is the probability of…

A

detecting a real effect

7
Q

Power is given by:

A

1 − β,
where β is the probability of making a Type II error.
  • In other words, power is the probability of not making a Type II error.
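The quantity 1 − β can be computed directly. A minimal sketch, assuming a two-sided, two-sample z-test with equal group sizes; the helper name and the normal approximation are mine, not from any particular package:

```python
from statistics import NormalDist

def ztest_power(d, n, alpha=0.05):
    """Approximate power (1 - beta) of a two-sided, two-sample z-test.

    d: standardized effect size; n: subjects per group.
    Normal approximation only -- a hypothetical helper for illustration.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for this alpha
    shift = d * (n / 2) ** 0.5                     # how far a real effect moves the statistic
    # 1 - beta: chance the statistic lands beyond z_crit when the effect is real
    # (the tiny opposite tail is ignored here)
    return 1 - NormalDist().cdf(z_crit - shift)

power = ztest_power(d=0.5, n=64)   # medium effect, 64 per group: roughly 0.80
```

The same helper also exposes the determinants of power covered in the cards below: a stricter alpha, a smaller n, or a smaller d each lowers the returned value.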

8
Q

Power is your ability to find a …

A

difference when a real difference exists.

9
Q

The power of a study is determined by three factors:

A
  • Alpha level.
  • Sample size.
  • Effect size:
    • Association between DV and IV.
    • Separation of means relative to error variance.
10
Q

Power and alpha

By making alpha less strict, we can…

A

increase power
(e.g. p < 0.05 instead of p < 0.01)

However, we increase the chance of a Type I error.

11
Q

Low N’s have very little…

A

Power

12
Q

Power saturates with many…

A

Subjects

13
Q

Power and sample size

One of the most useful aspects of power analysis is the estimation of the

A

sample size required for a particular study
  • Too small a sample and an effect may be missed.
  • Too large a sample makes the study needlessly expensive.

14
Q

Different formulae/tables for calculating sample size are required according to

A

Experimental design

15
Q

Power and effect size

As the separation between two means increases, the power…

A

Also increases

16
Q

Power and effect size

As the variability about a mean decreases, power…

A

Also increases

17
Q

Measures of effect size for ANOVA

A
  • Measures of association:
    • Eta-squared (η²)
    • R-squared (R²)
    • Omega-squared (ω²)
  • Measures of difference:
    • d
    • f
18
Q

Eta squared is the proportion of the total variance that…

A

Is attributed to an effect

19
Q

ETA squared equation

A

η² = SStreatment / SStotal
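The ratio is easy to check numerically. A minimal sketch with made-up sums of squares (the helper and numbers are hypothetical, chosen only so the arithmetic is obvious):

```python
def eta_squared(ss_treatment, ss_total):
    # Proportion of the total variance attributed to the effect
    return ss_treatment / ss_total

# Hypothetical one-way ANOVA decomposition: SS_total = SS_treatment + SS_error
ss_treatment, ss_error = 40.0, 120.0
eta2 = eta_squared(ss_treatment, ss_treatment + ss_error)   # 40/160 = 0.25
```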

20
Q

Partial eta-squared is the

A

proportion of the effect + error variance that is attributable to the effect

21
Q

Partial eta-squared equation

A

η²p = SStreatment / (SStreatment + SSerror)
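In a one-way design η² and partial η² coincide; they diverge once other effects claim part of SStotal. A small sketch with hypothetical two-way ANOVA sums of squares:

```python
def partial_eta_squared(ss_effect, ss_error):
    # Denominator holds only the effect's own variance plus error
    return ss_effect / (ss_effect + ss_error)

# Hypothetical two-way decomposition: SS_total = SS_A + SS_B + SS_error
ss_a, ss_b, ss_error = 40.0, 60.0, 100.0
eta2_a = ss_a / (ss_a + ss_b + ss_error)           # 40/200 = 0.20 (SS_B dilutes it)
partial_a = partial_eta_squared(ss_a, ss_error)    # 40/140, about 0.286
```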

22
Q

Measures of association

ETA squared and partial ETA squared are both kinds of

A

Measures of association of the sample

23
Q

Measures of association- R squared

In general R2 is the proportion of…

A

variance explained by the model

  • Each ANOVA can be thought of as a regression-like model in which each IV, and each interaction between IVs, acts as a predictor variable.
  • In general, R² is given by:
24
Q

R squared equation

A

R² = SSmodel / SStotal

25
Q

Measures of association: Omega squared is an estimate of the…

A

dependent variable population variability accounted for by the independent variable.
26
Q

Measures of difference – d: When there are only two groups, d is the…

A

standardised difference between the two groups.
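A minimal sketch of that computation, using a pooled standard deviation; the helper name and the two tiny samples are made up for illustration:

```python
from statistics import mean, variance

def cohens_d(group1, group2):
    """Standardised difference between two group means (pooled SD)."""
    n1, n2 = len(group1), len(group2)
    # Pool the two sample variances, weighted by their degrees of freedom
    pooled_var = ((n1 - 1) * variance(group1)
                  + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / pooled_var ** 0.5

d = cohens_d([2, 4, 6], [1, 3, 5])   # means 4 vs 3, pooled SD 2 -> d = 0.5
```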
27
Q

Measures of difference – f: Cohen's (1988) f for the one-way between-groups analysis of variance can be calculated as follows

A

f = √(ω² / (1 − ω²))
It is an averaged standardised difference between the 3 or more levels of the IV (even though the formula doesn't look like that).
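The formula is easy to sanity-check in code; the ω² values below are hypothetical, chosen to land on Cohen's benchmarks:

```python
def cohens_f(omega_squared):
    # f = sqrt(omega^2 / (1 - omega^2))
    return (omega_squared / (1 - omega_squared)) ** 0.5

# Inverting the formula: omega^2 = f^2 / (1 + f^2), so these round-trip
f_medium = cohens_f(0.0625 / 1.0625)   # recovers f = 0.25
f_large = cohens_f(0.16 / 1.16)        # recovers f = 0.40
```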
28
Q

Measures of difference: Cohen's f – small, medium, and large effects

A

Small effect: f = 0.10; medium effect: f = 0.25; large effect: f = 0.40.
29
Q

What can the simple power analysis program available on the web, called GPower, do?

A

It can be used to calculate the sample size required for different effect sizes and specific levels of statistical power, for a variety of different tests and designs.
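The same idea can be sketched without GPower, using a normal approximation to a two-group comparison. This is a rough stand-in for what such a program does; the helper name and the approximation are assumptions of this sketch, not GPower's method:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, target_power=0.80):
    """Smallest per-group n reaching target power, two-sided two-sample z-test."""
    z = NormalDist()
    n = 2
    while True:
        # Power at this n under the normal approximation
        power = 1 - z.cdf(z.inv_cdf(1 - alpha / 2) - d * (n / 2) ** 0.5)
        if power >= target_power:
            return n
        n += 1

n_medium = n_per_group(d=0.5)   # about 63 per group for 80% power
```

Larger effects need fewer subjects, and higher target power needs more, which is exactly the trade-off a power program lets you explore before running the study.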
30
Q

There are two ways to decide what effect size is being aimed for:

A
  • On the basis of previous research:
    • Meta-analysis: reviewing the previous literature and calculating the previously observed effect size (in the same and/or similar situations).
  • On the basis of theoretical importance:
    • Deciding whether a small, medium or large effect is required.
The former strategy is preferable, but the latter may be the only strategy available.
31
Q

Calculating f on the basis of previous research

A

This example is based on a study by Foa, Rothbaum, Riggs, and Murdock (1991, Journal of Consulting and Clinical Psychology).
  • The subjects were 48 trauma victims who were randomly assigned to one of four groups:
    • 1) Stress Inoculation Therapy (SIT), in which subjects were taught a variety of coping skills;
    • 2) Prolonged Exposure (PE), in which subjects went over the traumatic event in their mind repeatedly for seven sessions;
    • 3) Supportive Counseling (SC), a standard therapy control group;
    • 4) a Waiting List (WL) control.
  • The dependent variable was PTSD severity.
32
Q

What should we report?

A
  • Practically any effect size measure is better than none, particularly when there is a non-significant result.
  • SPSS provides some measures of effect size (though not f).
  • Meta-analysis (e.g. the estimation of effect sizes over several trials) requires effect size measures.
  • Calculating sample sizes for future studies requires effect size information.
33
Q

Things to be avoided if possible

A
  • "Canned" effect sizes:
    • The degree of measurement accuracy is ignored by using fixed estimates of effect size.
  • Retrospective justification:
    • Saying that a non-significant result means there is no effect because the power was high.
    • Saying that there is a non-significant result because the statistical power was low.
34
Q

What are canned effect sizes?

A

Fixed estimates of effect size that ignore the degree of measurement accuracy.
35
Q

What is retrospective justification?

A
  • Saying that a non-significant result means there is no effect because the power was high.
  • Saying that there is a non-significant result because the statistical power was low.