Chapter 7: Simple experimental research Flashcards
(18 cards)
Experimental variables
Independent variable is manipulated by researchers
Dependent variable is measured
Control variables are kept consistent
→ we create groups/conditions that only differ from each other on one point: the manipulated (independent) variable(s) → we come close to causality
Three criteria for causality and how they are tested in experiments
- Covariance: test if there is a difference between conditions
- Temporal precedence: independent variables are manipulated before dependent variables are measured
- Rule out alternative explanations: ensured when the only difference between the groups is the independent variable
Systematic vs unsystematic variance
Systematic: a third variable that systematically varies with your manipulated independent variable → problematic because this could be an alternative explanation
Unsystematic: the third variable does not vary systematically with the independent variable → not problematic, because it cannot explain a difference between conditions (it only adds noise)
Three types of alternative explanations that must be ruled out
- Design confounds
- Selection effects
- Order effects
Design confound
A third variable that systematically covaries with the independent variable in the experiment
Solution: controlling variables by keeping everything constant, so there can be no other difference than the independent variable in both groups
Selection effect
Systematic differences between participants in different groups/conditions
Only relevant in between-subject designs
Solutions: random assignment and matched groups (matching participants on a relevant characteristic and spreading them across the groups)
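Random assignment can be sketched in a few lines of Python; the function name and the round-robin dealing are illustrative choices, not a prescribed procedure:

```python
import random

def randomly_assign(participants, n_groups=2):
    """Shuffle participants, then deal them into groups round-robin,
    so group membership is unrelated to any participant characteristic."""
    pool = list(participants)
    random.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

groups = randomly_assign(range(20))
# Two groups of 10, with membership determined by chance alone
```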
Order effect
Participants show a response pattern because of the order of conditions → answers can differ from what they would be if the conditions were offered in another order
Practice effect: learning by having already gone through previous conditions which leads to a different score
Carry-over effect: what you had to do in the first condition has an impact on how you will react in the second condition
Solution: counterbalancing
Two types of designs
- Between-subjects design: posttest-only or pretest/posttest
- Within-subjects design: repeated-measures or concurrent-measures
Between-subjects design: posttest-only design
Different groups or conditions in which we manipulate our independent variable
We randomly assign our participants to one of the conditions
After the assignment and manipulation of the independent variable, we measure the dependent variable
Between-subjects design: pretest/posttest design
Measuring the dependent variable twice: once before the independent variable is manipulated and once after
Advantages: check whether scores differ between conditions, track change within every condition, rule out selection effects
Within-subjects design: repeated measures design
Every participant experiences both conditions
After each condition, the dependent variable is measured
Within-subjects design: concurrent measures design
We work with two conditions, but we offer them at the same time → all participants still experience all the conditions
Three advantages of within-subjects design
- Participants in all conditions are equivalent, and thus form their own control group (no risk of selection effects)
- Researchers have more power to detect significant differences between conditions during statistical analyses
- Fewer participants needed than in a between-subjects design
Counterbalancing
Offering the conditions in different orders
Full counterbalancing: creating all possible orders of all the conditions and randomly assign participants to one of them
Partial counterbalancing: not creating all possible orders → randomization and Latin square
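The two variants can be sketched in Python; the condition names are placeholders:

```python
from itertools import permutations
import random

conditions = ["A", "B", "C"]

# Full counterbalancing: every possible order of the conditions (3! = 6)
all_orders = list(permutations(conditions))
print(len(all_orders))  # 6

# Partial counterbalancing by randomization: each participant simply
# receives one randomly chosen order instead of covering all of them
order_for_participant = random.choice(all_orders)
```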
Latin square
We create a certain order, but these orders meet some criteria
Example: with six conditions (A–F), the square contains six orders → the first order follows the pattern (1, 2, n, 3, n-1, 4, …) → for each next order, shift every condition by one (A becomes B, B becomes C, …) until you have six orders
Every condition appears once in every position, and each condition precedes and follows every other condition equally often
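The construction can be sketched in Python; the starting pattern (1, 2, n, 3, n-1, 4, …) and the shift step are as described on the card, while the function name is my own:

```python
def balanced_latin_square(conditions):
    """Build a balanced Latin square for an even number of conditions."""
    n = len(conditions)
    # First order follows the pattern 1, 2, n, 3, n-1, 4, ...
    base = [0, 1]
    low, high = 2, n - 1
    take_high = True
    while len(base) < n:
        base.append(high if take_high else low)
        if take_high:
            high -= 1
        else:
            low += 1
        take_high = not take_high
    # Each next order shifts every condition one step (A->B, B->C, ...)
    return [[conditions[(i + shift) % n] for i in base] for shift in range(n)]

for order in balanced_latin_square("ABCDEF"):
    print("".join(order))  # first order: ABFCED
```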
Three downsides of within-subjects design
- Order effects (if we don’t use counterbalancing)
- It is not always possible to use counterbalancing or a within-subjects design
- Risk of demand characteristics (getting an idea of the purpose of the experiment and change behavior) because participants experience all the conditions
Two ways to check how well the independent variables are manipulated
- Manipulation check: measure how people score on the independent variable to see if the groups differ from each other (only meant to check whether the manipulation was successful)
- Pilot study: a small preliminary study that only tests whether the groups differ on the independent variable, without continuing with the rest of the experiment
Cohen’s d
Used for effect size
The bigger the difference between means, the bigger the Cohen’s d
The smaller the variance, the bigger the Cohen’s d
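Both properties follow from the formula d = (M1 - M2) / SD_pooled; a minimal sketch using the pooled-SD variant (one common definition of Cohen's d), with illustrative data:

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: the difference between means divided by the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    v1, v2 = stdev(group1) ** 2, stdev(group2) ** 2
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# A larger mean difference or a smaller variance both increase d
print(cohens_d([5, 6, 7, 8], [1, 2, 3, 4]))  # ≈ 3.10
```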