Chapter 8: Confounding and obscuring variables Flashcards

(22 cards)

1
Q

Nine threats to internal validity

A
  1. Maturation
  2. History threat
  3. Regression to the mean
  4. Attrition
  5. Testing
  6. Instrumentation
  7. Observer bias
  8. Demand characteristics
  9. Placebo effect
    + The three threats from the previous chapter (design confound, order effect, selection effect) → twelve in total
2
Q

Maturation threat

A

When we measure the dependent variable multiple times, we may see change that occurs naturally over time and is not caused by the intervention/manipulation → spontaneous change in the outcome
To rule out that the change is natural, we need a comparison group: does the same change appear there as well, or is it different?
A double-pretest design gives a better idea of whether maturation is present

3
Q

History threat

A

The change might be due to a history threat rather than the treatment: an external event outside the experiment that happens between pretest and posttest and co-occurs with the manipulation
Solution: comparison (control) group

4
Q

Regression to the mean threat

A

Extreme scores (positive or negative) at the pretest partly reflect a lucky (or unlucky) coincidence
It is unlikely that the same participant gets such an extreme score again; the next score will more likely be closer to the population average
A control group is needed to check for regression to the mean (it doesn't rule it out, it only lets you detect it!)
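A minimal simulation sketch of the idea (assuming NumPy is available; the mean of 100, the SDs, and the cut-off of 120 are invented for illustration): participants selected for extreme pretest scores land closer to the population average at posttest, even though nothing about them changed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True ability is the same at both time points; each test adds independent luck/noise.
true_score = rng.normal(100, 10, n)
pretest = true_score + rng.normal(0, 10, n)
posttest = true_score + rng.normal(0, 10, n)

# Select only people with extreme (partly lucky) pretest scores.
extreme = pretest > 120
print(f"mean pretest of extreme group:  {pretest[extreme].mean():.1f}")
print(f"mean posttest of extreme group: {posttest[extreme].mean():.1f}")  # closer to 100
```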

5
Q

Attrition threat

A

In a study where people are measured multiple times, there is always a chance that some participants drop out after the first part of the experiment
Usually not a problem, unless the people who leave differ systematically from those who stay (= systematic dropout)
Solution: remove the data of the participants who dropped out from the dataset

6
Q

Testing threat

A

A change in scores that arises because people fill out the same test twice (pretest and posttest)
Practice, fatigue, or boredom can affect the score the second time
Solutions: drop the pretest (it is not crucial), use different instruments for the pre- and posttest, add a control group
How to check for a testing effect: Solomon four-group design

7
Q

Solomon 4 group design

A

Experiment with four conditions: group 1 (pretest + treatment + posttest), group 2 (pretest + control + posttest), group 3 (treatment + posttest), group 4 (control + posttest)
Advantage: by looking at the pattern of posttest scores you can see to what extent there are treatment and/or testing effects (see the sketch below)
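A hypothetical sketch of how that pattern can be read off (the group labels and posttest means are invented for illustration; this only compares cell means and is not a significance test):

```python
# Hypothetical posttest means for the four Solomon groups.
posttest_means = {
    "pretest+treatment": 7.8,
    "pretest+control":   6.1,
    "treatment_only":    7.0,
    "control_only":      5.2,
}

# Treatment effect: treatment vs. control, averaged over pretested and non-pretested groups.
treatment_effect = (
    (posttest_means["pretest+treatment"] - posttest_means["pretest+control"])
    + (posttest_means["treatment_only"] - posttest_means["control_only"])
) / 2

# Testing effect: pretested vs. non-pretested, averaged over treatment and control groups.
testing_effect = (
    (posttest_means["pretest+treatment"] - posttest_means["treatment_only"])
    + (posttest_means["pretest+control"] - posttest_means["control_only"])
) / 2

print(f"estimated treatment effect: {treatment_effect:.2f}")
print(f"estimated testing effect:   {testing_effect:.2f}")
```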

8
Q

Instrumentation threat

A

Using different measurement instruments at the pre- and posttest → is the change from pre- to posttest caused by the different measurement instead of the treatment?
Solutions: drop the pretest, run a pilot study to make sure the instruments are equivalent, counterbalance the test versions/order

9
Q

Selection threats

A

Even with a control group we can still run into problems: another threat may act on only one of the groups (selection combining with that threat)
Selection-history, selection-regression to the mean, selection-attrition, selection-testing, and selection-instrumentation threats → rare, but possible

10
Q

Observer bias threat

A

The researcher's expectations influence the results: they see what they expect/want to see
Solution: double-blind design, or a masked design (single-blind design)

11
Q

Demand characteristics threat

A

Participants' beliefs about the goals of the study influence their own behavior because of their expectations
Solution: double-blind design, or a masked design (single-blind design)

12
Q

Placebo effect

A

If you believe that something will have an effect, that belief alone can already evoke/induce an effect
Solution: use a placebo group
Double-blind placebo control study: neither the researchers nor the participants know who is assigned to the placebo group and who to the treatment group

13
Q

Null-effects

A

You don't find a (statistically significant) effect in your study
Rare in published articles: it is already difficult to publish an article, and even more so when you didn't find an effect
Two ways to avoid it: pay attention to reliability and make sure the sample is large enough

14
Q

Three causes of null-effects

A
  1. Small differences between groups: weak or unsuccessful manipulation of the independent variable, insensitive measure of the dependent variable, ceiling or floor effects
    → Solutions: manipulation check, rerun the study with a stronger manipulation, use a more sensitive scale, make sure there is sufficient variance
  2. Large differences within groups: measurement error, individual differences, situational variability/noise
  3. There is no effect in the population
15
Q

Ceiling effect vs floor effect

A

Ceiling: scores in both conditions pile up at the maximum of the scale, so we don't see differences between groups
Floor: scores in both conditions pile up at the minimum of the scale, so we don't see differences between groups
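A minimal simulation sketch of a ceiling effect (assuming NumPy is available; all numbers are invented for illustration): a real five-point difference between conditions largely disappears once scores are capped at the top of the scale.

```python
import numpy as np

rng = np.random.default_rng(1)

# True scores differ between conditions...
control   = rng.normal(80, 5, 1_000)
treatment = rng.normal(85, 5, 1_000)

# ...but the test is too easy: on this shifted version most people hit the
# maximum of 100, so the scores are clipped at the ceiling.
easy_control   = np.clip(control + 20, 0, 100)
easy_treatment = np.clip(treatment + 20, 0, 100)

print(f"uncapped difference: {treatment.mean() - control.mean():.1f}")            # ~5 points
print(f"capped difference:   {easy_treatment.mean() - easy_control.mean():.1f}")  # shrinks toward 0
```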

16
Q

Three sources of non-systematic variance and solutions to them

A
  1. Measurement error: use reliable measures, use more measurement moments
  2. Individual differences: include more participants, use a within-subjects design or a matched design
  3. Situational variability/noise: use a controlled environment
17
Q

Confidence interval

A

Tells us something about the accuracy of an estimate
Gives you a range of plausible values: there is a …% chance that the true average lies within this interval
Determined by the variability, the sample size, and a constant (the critical value for the chosen confidence level)
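A minimal sketch of the computation (assuming NumPy and SciPy are available; the scores are invented for illustration): mean ± constant × (variability / √sample size), with the constant being the critical t-value for a 95% interval.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of posttest scores (made up for illustration).
scores = np.array([5.1, 6.3, 4.8, 5.9, 6.1, 5.4, 4.9, 6.0, 5.6, 5.2])

n = len(scores)
mean = scores.mean()
sem = scores.std(ddof=1) / np.sqrt(n)     # variability divided by sqrt(sample size)
t_crit = stats.t.ppf(0.975, n - 1)        # the "constant": critical t-value, df = n - 1

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI for the mean: [{lower:.2f}, {upper:.2f}]")
```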

18
Q

Obscuring factors

A

Factors that can lead to null-effects

19
Q

Power

A

The probability that a study detects an effect when there really is one in the population (power = 1 − beta)
Type 1 error (alpha): the study concludes that there is an effect, when in reality there isn't
Type 2 error (beta): the study concludes that there is no effect, when in reality there is
Depends on: sample size, effect size, alpha, and the amount of unsystematic variance
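A minimal simulation sketch of power (assuming NumPy and SciPy are available; the effect size of 0.5 and the group sizes are invented for illustration): simulate many studies in which a true effect exists and count how often the test comes out significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def estimated_power(n_per_group, effect_size=0.5, alpha=0.05, n_sims=5_000):
    """Estimate power by simulating studies with a true effect and
    counting how often the t-test is significant."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0, 1, n_per_group)
        treatment = rng.normal(effect_size, 1, n_per_group)  # a true effect exists
        p = stats.ttest_ind(treatment, control).pvalue
        hits += p < alpha
    return hits / n_sims

# Power grows with sample size (for a fixed effect size and alpha).
for n in (20, 50, 100):
    print(f"n = {n:3d} per group -> power ≈ {estimated_power(n):.2f}")
```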

20
Q

P-value

A

Tells us how likely the observed effect (or a more extreme one) would be if there were no effect in the population, i.e. helps us decide whether we should be impressed by the effect
Conventional cut-off of .05
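A minimal sketch (assuming SciPy is available; the scores are invented for illustration) of computing a p-value for a two-group comparison and checking it against the .05 cut-off:

```python
from scipy import stats

# Hypothetical posttest scores for two conditions (made up for illustration).
treatment = [6.1, 5.8, 7.2, 6.5, 6.9, 7.0, 6.3, 6.8]
control   = [5.2, 5.9, 5.5, 6.0, 5.4, 5.8, 5.1, 5.6]

result = stats.ttest_ind(treatment, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
print("significant at alpha = .05" if result.pvalue < 0.05 else "not significant")
```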

21
Q

Alpha

A

The risk you take of making a type 1 error (the study concludes that there is an effect, when in reality there isn't)
Alpha = 5%, so an effect is called significant when p < .05
Alpha can be lowered or raised (the decision should be made in advance!)
Repeatedly running the same analysis on different variables in a dataset increases the chance of a type 1 error (= fishing problem); see the sketch below
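A small worked example of the fishing problem (the numbers of tests are arbitrary): with independent tests on data that contain no real effects, the chance of at least one false positive is 1 − (1 − alpha)^k.

```python
# Probability of at least one type 1 error when running k independent tests
# on data with no real effects (the "fishing problem").
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> chance of >=1 false positive: {p_any_false_positive:.2f}")
```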

22
Q

How to prevent type 1 and type 2 errors

A

Type 1: correct the significance threshold for multiple tests (Bonferroni correction: divide alpha by the number of tests you're running)
Type 2: bigger sample, bigger effect size (both increase power)
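A minimal sketch of the Bonferroni correction (the p-values are invented for illustration): each p-value is compared against alpha divided by the number of tests.

```python
# Bonferroni correction: test each p-value against alpha divided by the number of tests.
alpha = 0.05
p_values = [0.004, 0.030, 0.012, 0.250]   # hypothetical p-values from 4 tests

corrected_alpha = alpha / len(p_values)   # 0.05 / 4 = 0.0125
for p in p_values:
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"p = {p:.3f} -> {verdict} at corrected alpha = {corrected_alpha:.4f}")
```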