Chapter 11 Flashcards

(44 cards)

1
Q

what is the problem in analyzing experimental data

A

If the independent variable has an effect on the dependent variable, the means for the experimental conditions should differ

Total variance = systematic variance + error variance

However, error variance can also cause the means to differ
- So, the condition means could be different even if the independent variable had no effect

Then, how do we know whether the difference among means is caused by the independent variable or by error variance?
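
As an illustration, here is a minimal Python sketch (the numbers and variable names are invented, not taken from the chapter): two groups are drawn from the same population, so the independent variable has no effect, yet their condition means still differ because of error variance alone.

    import numpy as np

    rng = np.random.default_rng(0)
    # Both groups come from the same population (mean 100, SD 15),
    # i.e., the "independent variable" has no effect at all.
    group_a = rng.normal(loc=100, scale=15, size=20)
    group_b = rng.normal(loc=100, scale=15, size=20)

    # The two condition means still differ, purely because of error variance.
    print(group_a.mean(), group_b.mean())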

2
Q

sources of error variance

A

individual differences
transient states
environmental factors
differential treatment
measurement error

3
Q

individual differences

A

pre-existing differences between people; this is the most common source of error variance

4
Q

transient states

A

at the time of the experiment, participants differ in how they feel (e.g., mood, health, fatigue, interest)

5
Q

environmental factors

A

differences in the environment in which the study is conducted (e.g., noise, time of day, temperature)

6
Q

differential treatment

A

despite their best efforts, experimenters do not always treat all participants exactly the same

7
Q

measurement error

A

unreliable measures increase error variance

8
Q

3 ways to determine whether the difference between means is meaningful

A

significance testing
effect size
confidence intervals

9
Q

significance testing

A

the probability that the difference between the groups is due to error variance

Estimate how much the means should differ if the independent variable has no effect

If the observed mean difference exceeds this amount, then the independent variable may be having an effect

We cannot be certain that the difference was caused by the independent variable, but we can estimate the probability that a difference this large would have occurred on the basis of error variance alone
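
A hedged sketch of what a significance test looks like in practice, using an independent-samples t-test on made-up scores (the data and variable names are illustrative assumptions, not from the chapter):

    import numpy as np
    from scipy import stats

    treatment = np.array([12, 15, 14, 16, 18, 13, 17, 15])
    control = np.array([10, 12, 11, 13, 12, 11, 14, 12])

    # The p-value estimates how likely a mean difference this large would be
    # if only error variance (and no effect of the IV) were operating.
    res = stats.ttest_ind(treatment, control)
    print(res.statistic, res.pvalue)  # p below the chosen alpha -> reject the null hypothesis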

10
Q

effect size

A

whether the size of the difference between the groups is noteworthy or not

11
Q

confidence intervals

A

the difference between the groups relative to the precision of the data

12
Q

null hypothesis statistical testing

A

used to determine whether differences between the means of the experimental conditions are greater than expected on the basis of error variance alone

13
Q

null hypothesis

A

the independent variable did not have an effect on the dependent variable

14
Q

experimental hypothesis

A

the independent variable did have an effect on the dependent variable

Although we are really interested in the experimental hypothesis, inferential statistics test the null hypothesis

15
Q

problems with null hypothesis

A

p-hacking

the information it provides is not as precise or informative as that provided by other approaches

16
Q

p-hacking

A

p-value fishing

when researchers over-analyze their data in search of significance (needed for publication)

Performing many unplanned analyses increases the likelihood of finding effects on the basis of Type I error alone
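
To make the risk concrete, here is a small simulation (the setup is entirely my own assumption): twenty unplanned comparisons are run on data where the null hypothesis is true for every one, and with alpha = .05 roughly one of them is expected to come out "significant" by Type I error alone.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    false_positives = 0
    for _ in range(20):
        a = rng.normal(size=30)  # no real difference between a and b
        b = rng.normal(size=30)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1

    # On average about 20 * 0.05 = 1 test is "significant" purely by chance.
    print(false_positives)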

17
Q

p-value

A

the probability that the obtained difference between the condition means is due to error variance

Ranges from 0 to 1.

The closer the p-value is to 1, the more likely it is that the difference between the means is exactly what one would expect based on the amount of error variance

  • A p-value of 0.05 means the probability of getting our difference on the basis of error variance is 0.05 or 5%
  • A p-value of 0.01 means that the probability of getting our difference on the basis of error variance is .01, or 1 in 100
  • A p-value of 0.001 means the probability of getting our difference on the basis of error variance is only 1 in 1000
18
Q

Rejecting the null hypothesis

A

researcher concludes that the null hypothesis is wrong and that the independent variable did have an effect (there is a group difference!)

19
Q

Failing to reject the null hypothesis

A

researcher concludes that the null hypothesis is correct and that the independent variable did not have an effect (there is no group difference!)

We cannot say that we “accept” the null hypothesis, because the null hypothesis can never be proven as there may be several other explanations

20
Q

Statistically significant

A

the difference has a low probability of occurring as a result of error variance alone; we reject the null hypothesis with a low probability of making a Type I error

21
Q

type I error

A

a researcher rejects the null hypothesis when it is true (i.e., reporting a significant finding when in fact there was no real difference between the groups)

alpha
false positive

example:
- finding that a drug is effective when in fact it is not
- sending an innocent person to jail

22
Q

alpha

A

the probability of making a Type I error (and erroneously believing that an effect was due to IV when it was actually due to error variance)

23
Q

false positive

A

a test shows a positive result when in fact there is no virus

24
Q

type II error

A

a researcher fails to reject the null hypothesis when it is false

beta
false negative

example:
- failing to find that a drug works when in fact it does
- failing to send a guilty person to jail

25
Q

type I vs type II error

A

You have a choice: you can set either alpha (Type I error) or beta (Type II error) low, but when you set alpha low, you increase your Type II error.
- Set the probability of committing a Type I error (the alpha level) low
- e.g., pick an alpha of 0.001 instead of 0.05
- The probability of committing a Type II error goes up

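A rough numerical illustration of this trade-off (the effect size and sample size are assumed values, and the statsmodels power routine is just one convenient option): for the same true effect and the same n, lowering alpha lowers power, which means beta, the probability of a Type II error, goes up.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for alpha in (0.05, 0.001):
        # power of a two-group t-test for a medium effect (d = 0.5), n = 30 per group
        power = analysis.power(effect_size=0.5, nobs1=30, alpha=alpha)
        print(alpha, round(power, 2), "beta =", round(1 - power, 2))
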
26
Q

typically in behavioral science (alpha)

A

We set the alpha level at 0.05, 0.01, or 0.001

We aim for power of 0.80 (i.e., 80%), which corresponds to a beta level (the probability of committing a Type II error) of 0.20

27
Q

power

A

probability that a study will reject the null hypothesis when it is false; the probability that a test will reject the null hypothesis when the alternative hypothesis is true

The ability of your design to show that a drug is effective when in fact it is

Power = 1 - Beta, where Beta is the probability of a Type II error

Power increases with the sample size

28
Q

power analysis

A

used to decide how many participants are needed to detect a significant effect

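A minimal power-analysis sketch (all numbers assumed for illustration, using statsmodels as one convenient tool): solving for the number of participants per group needed to detect a medium effect (Cohen's d = 0.5) with 80% power at alpha = .05 in a two-group design.

    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80, alpha=0.05)
    print(round(n_per_group))  # roughly 64 participants per group
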
29
Q

effect size

A

magnitude of the effect of the IV on the DV, or the strength of the relation between two variables

30
Q

effect size helps determine real-world relevance:

A

An effect can be statistically significant but too small to matter practically

An effect can be non-significant but potentially impactful if the study is underpowered

31
Q

effect size indicators

A

r-squared or eta-squared or omega-squared
Cohen's d
odds ratio (OR)

32
Q

r-squared or eta-squared or omega-squared

A

the proportion of variance in the dependent variable accounted for by the independent variable

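A small worked example (scores invented) of computing eta-squared by hand for a two-condition experiment, as the between-groups sum of squares divided by the total sum of squares:

    import numpy as np

    group_a = np.array([3.0, 4.0, 5.0, 4.0, 4.0])
    group_b = np.array([6.0, 7.0, 6.0, 8.0, 7.0])
    scores = np.concatenate([group_a, group_b])

    grand_mean = scores.mean()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_between = (len(group_a) * (group_a.mean() - grand_mean) ** 2
                  + len(group_b) * (group_b.mean() - grand_mean) ** 2)

    # Proportion of variance in the DV accounted for by the IV (condition).
    eta_squared = ss_between / ss_total
    print(eta_squared)
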
33
Q

Cohen's d

A

the difference between the two means relative to the size of the standard deviation of the scores

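A brief sketch (values hypothetical) of computing Cohen's d as the mean difference divided by the pooled standard deviation of the scores:

    import numpy as np

    treatment = np.array([14.0, 16.0, 15.0, 17.0, 18.0, 15.0])
    control = np.array([12.0, 13.0, 11.0, 14.0, 12.0, 13.0])

    n1, n2 = len(treatment), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1)
                         + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))

    d = (treatment.mean() - control.mean()) / pooled_sd
    print(d)  # common rough benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large
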
34
Q

odds ratio (OR)

A

the odds of an event occurring in one group relative to another group
= 1 → the event is equally likely in both groups
> 1 → the odds of the event are greater in one group than in the other
< 1 → the odds in one group are less than in the other (e.g., 0.5 means half the odds)

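An illustrative 2x2 calculation (counts invented) of an odds ratio, comparing the odds of an outcome (e.g., improvement) in a treatment group with the odds in a control group:

    # counts of participants who improved vs. did not improve in each group
    improved_treatment, not_improved_treatment = 30, 10
    improved_control, not_improved_control = 15, 25

    odds_treatment = improved_treatment / not_improved_treatment  # 3.0
    odds_control = improved_control / not_improved_control        # 0.6

    odds_ratio = odds_treatment / odds_control
    print(odds_ratio)  # 5.0 -> the odds of improving are 5 times greater with treatment
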
35
Q

effect size and significance

A

Magnitude vs. significance?
- A p-value indicates whether an observed result is statistically significant
- An effect size indicates the strength of the observed phenomenon

36
Q

ex study 1: p < 0.01, Cohen's d = 0.2

A

large sample, small effect → significant effect

Although the effect is statistically significant due to the large sample size, the small effect size suggests the drug's practical impact is minor

37
Q

ex study 2: p = 0.07, Cohen's d = 0.9

A

small sample, large effect

Although the p-value isn't below the significance threshold, the large effect size indicates the drug could have a real impact

The non-significance may be due to the N, and increasing the sample size could make this effect significant

38
Q

effect size and power

A

Power depends on the effect size: larger effect sizes increase power, making it easier to detect true differences or associations

When planning a study, researchers use the expected effect size to calculate the sample size necessary to achieve sufficient power (often 80% or higher)
- If the effect size is small, you will need a larger sample size to boost the power
- If the effect size is large, you may need a smaller sample to have enough power to detect the effect you are looking for

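To put numbers on this (the effect sizes and criteria are assumed for illustration), the same kind of power calculation shown earlier yields very different required sample sizes for small versus large effects at 80% power and alpha = .05:

    from statsmodels.stats.power import TTestIndPower

    for d in (0.2, 0.8):
        n = TTestIndPower().solve_power(effect_size=d, power=0.80, alpha=0.05)
        print(d, round(n))  # the small effect needs several hundred per group; the large effect far fewer
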
39
Q

confidence intervals

A

range of values around the point estimate in which the population value is most likely to fall

Gives us a sense of how much faith we can have in the statistics we calculate in a study

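A hedged sketch (sample values invented) of computing a 95% confidence interval around a sample mean with the t distribution:

    import numpy as np
    from scipy import stats

    sample = np.array([22.0, 25.0, 19.0, 24.0, 23.0, 26.0, 21.0, 20.0])
    mean = sample.mean()        # point estimate
    sem = stats.sem(sample)     # standard error of the mean

    # Interval estimate: range in which the population mean most likely falls.
    ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
    print(mean, (ci_low, ci_high))
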
40
Q

How do we interpret the confidence interval for a sample mean?

A

A range of values in which the population mean likely falls

41
Q

Confidence interval gives:

A

point estimate
interval estimate

42
Q

point estimate

A

the single most likely value of the population parameter, based on what we know from our sample data

43
Q

interval estimate

A

range of values around the point estimate in which the population value is most likely to fall

44
Q

relationships (power; effect size; sample size; CI)

A

Larger effect size → power increases (easier to detect a real effect)

Larger sample size → power increases; the CI narrows (more precise); less Type II error (easier to detect an effect)

More power → increased likelihood of detecting a real effect (1 - Beta); reduces Type II error

CI width decreases → estimate precision increases; significance testing becomes more reliable