Lecture 9 Flashcards

Stat Significance, Standard Error, Effect Size, CIs (45 cards)

1
Q

Three common tools/indices used to assess the ‘meaningfulness’ of a statistical analysis?

A
  • Statistical Significance (p-value)
  • Confidence Intervals (95% or 99%)
  • Magnitude of the effect (effect size)
2
Q

Tools/indices for meaningfulness also tell us:

A

How well SAMPLE statistics generalize to the larger TARGET population

3
Q

Inferential stats

A

Making inferences about a larger population based on a sample; we should see the same results with a different sample

4
Q

Statistical significance definition

A

The probability that a statistic from the sample represents a genuine phenomenon in the population: what we see in the sample we should also see in the population

5
Q

Statistical significance elements

A
  • Null Hypothesis Significance Testing
  • Systematic and Unsystematic Variation
  • Comparing signal to noise
6
Q

Null Hypothesis Significance Testing

A

We test the null hypothesis (H0) because it is the simpler claim. The question of interest is reduced to a choice between two competing claims (or hypotheses): the null hypothesis and the alternative hypothesis. Special consideration is given to the null hypothesis (e.g., H0: there is no difference in symptoms for those receiving the new drug (Tx) compared to the current drug)

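The decision rule above can be sketched in a few lines of code; the symptom scores and the use of an independent-samples t-test here are illustrative assumptions, not taken from the lecture:

```
# Hypothetical example: two-sample t-test of H0 "no difference in symptoms"
from scipy import stats

new_drug = [4, 5, 3, 6, 5, 4, 7, 5]      # symptom scores, new drug (made-up data)
current_drug = [7, 8, 6, 9, 7, 8, 6, 7]  # symptom scores, current drug (made-up data)

t_stat, p_value = stats.ttest_ind(new_drug, current_drug)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0")
```
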
7
Q

Systematic Variation

A

Variation that is explained by the model (SIGNAL)

8
Q

Unsystematic Variation

A

Variation that cannot be explained by the model (NOISE)

9
Q

Comparing Signal:Noise

A

We want the Effect (signal) > Error (noise)

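As a generic worked equation (a sketch of the idea, not a formula quoted from the lecture), a test statistic is the ratio of signal to noise:

```
\text{test statistic} = \frac{\text{systematic variation (effect, signal)}}{\text{unsystematic variation (error, noise)}},
\qquad \text{e.g. } t = \frac{\bar{x}_1 - \bar{x}_2}{SE_{\bar{x}_1 - \bar{x}_2}}
```
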
10
Q

What are the two possible conclusions of a hypothesis test with regard to the null hypothesis?

A

Reject

Fail to reject

11
Q

What are the four possible outcomes of a hypothesis test?

A
  • REJECT NULL THAT IS TRUE (Type I error, Incorrect Decision)
  • REJECT NULL THAT IS FALSE (Correct Decision)
  • ACCEPT NULL THAT IS TRUE (Correct Decision)
  • ACCEPT NULL THAT IS FALSE (Type II error, Incorrect Decision)

12
Q

Null Hypothesis

A
  • H0
  • “Simpler” and given priority over a more “complicated” theory
  • We either REJECT or FAIL TO REJECT
13
Q

Alternative Hypothesis

A
  • H1
  • A statement of what a statistical hypothesis test is set up to establish

14
Q

Type I error

A

Rejecting the Null when it is true

  • Group differences were found when no actual differences exist
  • Alpha
15
Q

Which is the more serious error: Type I or Type II?

A

A Type I error is more serious and therefore more important to avoid.
The test procedure is therefore adjusted so there is a guaranteed “low” probability of making a Type I error

16
Q

The probability of a Type I error can be precisely computed:

A

Probability of a Type I error = alpha (the significance level); a result is declared statistically significant when its p-value falls below alpha

17
Q

The Probability Value (p-value)

A

The probability of getting a value of the test statistic as extreme as or more extreme than that observed by chance alone, if the null is true

18
Q

What do small p-values suggest?

A

The null is unlikely to be true

19
Q

Common values for significance

A

.05, .01, or .001

20
Q

What happens when you decrease the chance of a Type I error?

A

The chances of a Type II error increase!

21
Q

Type II error

A

Accepting the null when it is false

  • No group differences were found when group differences do actually exist
  • Beta
22
Q

What is a Type II error frequently due to?

A

Sample size being too small
  • If we accept the null, it may still be false, as the sample might not be big enough to detect the differences between groups

23
Q

What is the exact probability of a Type II error?

A

We don’t know!

24
Q

Power

A

The probability of correctly rejecting a false null

  • Finding an effect when it exists
  • In other words, the probability of NOT committing a Type II error
  • AKA: 1 - beta
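
A short sketch of how power relates to sample size, using statsmodels' power solver; the effect size, alpha, and power values below are assumptions chosen for illustration:

```
# Hypothetical example: per-group n needed for 80% power in a two-group t-test
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,          # assumed medium effect (Cohen's d)
                                   alpha=0.05,               # Type I error rate
                                   power=0.80,               # 1 - beta
                                   alternative='two-sided')
print(round(n_per_group))  # approximate sample size needed in each group (about 64)
```
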
25
Q

Max and Min values of Power

A
  • Max: 1
  • Min: 0
26
Q

Reasons for low power

A
  • Sample sizes too small
  • Use of unreliable measures
27
Q

What is the Power cutoff social scientists often use?

A

.80; there should be at least an 80% chance of NOT making a Type II error
28
Q

Why is the .80 power criterion more lenient than the .05 level used in significance testing?

A

Because greater care should be taken in asserting that a relationship exists than in failing to conclude that a relationship exists
29
Q

One-Tailed Test of Significance

A

The researcher has ruled out interest in one of the directions, so the test gives the probability of getting a result as strong or stronger in only ONE direction
30
Q

Two-Tailed Test of Significance

A

Tests the probability of getting a result as strong as or stronger than the observed result in either direction
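
In symbols (a generic sketch, where T is the test statistic under the null and t_obs is the observed value):

```
p_{\text{one-tailed}} = P(T \geq t_{\text{obs}} \mid H_0)
\qquad
p_{\text{two-tailed}} = P(|T| \geq |t_{\text{obs}}| \mid H_0)
```
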
31
Q

Sampling Distribution

A

A theoretical distribution of a sample statistic, used as a model of what would happen if the experiment were repeated infinitely
32
Q

Standard Error of Measurement (SE)

A
  • The standard deviation of the sampling distribution of a given statistic
  • AKA: a measure of how much RANDOM variation there is between observed scores and expected scores
33
Q

Standard Error of the Mean

A

The average difference between the population mean and the individual sample mean
  • How much error can we expect
  • How confident we can be that the sample represents the population
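
The usual formula behind this card (the standard form, consistent with cards 34-37 below):

```
SE_{\bar{x}} = \frac{s}{\sqrt{n}}
\qquad \text{where } s = \text{sample standard deviation and } n = \text{sample size}
```
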
34
Q

What characteristics must be examined for the Standard Error of the Mean?

A
  • How large is the sample?
  • The standard deviation of the sample
35
Q

How does sample size affect the Standard Error of the Mean?

A
  • Small sample size is related to Type II error (not big enough to detect differences)
  • The larger the sample, the less error we should have in the estimate about the population (smaller standard error)
36
Q

How does the standard deviation of the sample affect the standard error of the mean?

A
  • If the scores in my sample are very diverse (i.e., a lot of variation, a large SD), we can assume the scores in the population are also diverse
  • The larger the sample SD = the greater the assumed variation of scores in the population = the larger the standard error of the mean
37
Q

Small samples with large SDs produce large standard errors. Why?

A

These characteristics make it more difficult to have confidence that the sample accurately represents the population
  • Conversely, a large sample with a small SD will produce a small standard error
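
A quick numeric illustration of this point; the sample sizes and SDs are made-up values:

```
# Hypothetical example: SE = s / sqrt(n) under the two scenarios described above
import math

def standard_error(s, n):
    """Standard error of the mean from sample SD (s) and sample size (n)."""
    return s / math.sqrt(n)

print(standard_error(s=15, n=10))   # small sample, large SD  -> large SE (about 4.74)
print(standard_error(s=5, n=400))   # large sample, small SD  -> small SE (0.25)
```
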
38
Q

Effect size

A
  • A measure of the strength or magnitude of an experimental effect
  • A way of expressing the difference between conditions using a common metric
39
Q

Why do we use effect size rather than relying on significance testing alone?

A
  • When examining effects using small sample sizes, significance testing can be misleading because it's subject to Type II errors
  • When examining effects using large samples, significance testing can be misleading because even small or trivial effects are likely to produce statistically significant results
40
Q

Formulas for effect size

A
  • Cohen's d
  • Confidence Intervals
41
Q

Cohen's d

A

The mean difference divided by the pooled standard deviation; the effect size that expresses the difference between two means in standard deviation units
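
Written out, using the standard pooled-SD form (assumed here, not quoted from the lecture):

```
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```
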
42
Q

Cohen's d cut-offs

A
  • .2 = small effect
  • .5 = medium effect
  • .8 = large effect
43
Q

Confidence Interval

A

A range of values within which the true difference between groups is likely to lie
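
A minimal sketch of a 95% CI for a single sample mean, combining the standard error with the t distribution; the data values are invented for illustration:

```
# Hypothetical example: 95% confidence interval for a sample mean
import statistics
from scipy import stats

sample = [12, 15, 14, 10, 13, 16, 11, 14]   # made-up scores
n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / n ** 0.5    # standard error of the mean

t_crit = stats.t.ppf(0.975, df=n - 1)       # critical t for a 95% CI
print(mean - t_crit * se, mean + t_crit * se)
```
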
44
Q

What p-values do a 95% CI and a 99% CI correspond to?

A
  • 95% CI = .05
  • 99% CI = .01
45
Q

Degrees of Freedom

A
  • The minimum amount of data needed to calculate a statistic
  • Determines the exact form of the probability distribution
  • N - 1