Module 9 Flashcards
(28 cards)
Experimental research
In an experiment, researchers manipulate at least one variable and measure at least one other variable.
Two major types of experiments are lab and field experiments.
Lab experiment
A lab experiment is an experiment in an artificial environment.
Field experiments
As the name already indicates, field experiments do not take place in a lab but in the field. Nope, not in “a” field, but in “the” field, which means: in a natural setting.
Field experiments are carried out in the everyday, real-life environment of the participants.
As in lab experiments, the researcher manipulates the independent variable(s), but in a field experiment (in contrast to a lab experiment)
- the setting,
- the participants,
- the manipulation(s)/treatment(s), and
- the outcome measures
are all authentic. Field experiments are typically carried out unobtrusively, i.e., without participants realising they are participating in an experiment.
When is a lab experiment preferable to a field experiment?
When researchers want maximum control over the research environment to rule out alternative explanations. Because of this control, lab experiments on average show higher internal validity than field experiments.
When is a field experiment preferable to a lab experiment?
When it is essential to measure real-world behaviour in real-world situations, i.e., when high external validity is crucial. After running a lab experiment, researchers can only speculate to what extent their findings would apply to a real-world setting. After running a field experiment, they know their findings apply to a real-world setting.
When researchers want to study the long-term effects of manipulation(s). The immediate effect of a manipulated variable may differ from the long(er)-term effect. A field experiment can run for a longer period of time to study these long-term effects.
Basic terminology and idea of an experimental research study:
In an experiment, researchers manipulate at least one variable and measure at least one other variable.
For explanatory purposes, we consider the case of a simple experiment with one independent variable and one dependent variable.
The measured variable is the dependent variable. The manipulated variable is the independent variable. The levels of the manipulated variable are also referred to as conditions.
An experiment needs at least two conditions, so the researchers can compare one condition to another. One of these conditions can but does not have to be a control group. Not every experiment needs a control group, and often a clear control group (which represents a neutral condition) may not even exist.
Within-subjects vs. between-subjects designs
Although all experiments are similar in that researchers manipulate one variable and measure another, experiments can take many forms. One of the most basic distinctions is between
Within-subjects designs and between-subjects designs.
Within-subjects designs
In a within-subjects design, each subject (participant) is presented with all levels of the independent variable.
An example:
You wonder whether and to what extent a monetary bonus (the IV) increases employees’ work motivation (the DV). You give all employees in a company a bonus, and you compare their work motivation at time t-1 (before the bonus) with their work motivation at time t (after the bonus).
A within-subjects design requires fewer participants. If a 2x2 within-subjects design uses 50 people in each cell, it needs only 50 people in total, because every person participates in each of the 4 cells. If a 2x2 between-subjects design uses 50 people in each cell, it needs 200 people in total, because every person participates in only 1 cell. Within-subjects designs therefore make efficient use of participants.
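Because the same participants provide scores in every condition, within-subjects data are analysed with paired tests. Below is a minimal, illustrative Python sketch (not part of the original cards) of how the bonus example could be analysed with a paired-samples t-test; the motivation scores are made up.

```python
import numpy as np
from scipy import stats

# Hypothetical work-motivation scores for six employees,
# measured before (time t-1) and after (time t) the bonus.
motivation_before = np.array([4.1, 3.8, 5.0, 4.4, 3.9, 4.7])
motivation_after = np.array([4.6, 4.0, 5.2, 4.9, 4.1, 5.1])

# Paired-samples t-test: the same employees appear in both conditions.
t_stat, p_value = stats.ttest_rel(motivation_after, motivation_before)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```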
Between-subjects designs
In a between-subjects design, different groups of subjects are assigned to different levels of the independent variable.
An example is the bonus experiment we discussed before:
After graduating from Tilburg University, you land a job as an HR manager at a local company. You wonder whether and to what extent a monetary bonus vs. a non-monetary bonus (the IV) increases employees’ work motivation (the DV). You randomly allocate participants to conditions, ask them to work on a task, and make sure that the only thing that differs between the groups is that you give a monetary bonus to group 1 and a non-monetary bonus to group 2.
Business research typically uses between-subjects designs.
Two basic forms of between-subjects designs
Two basic types of between-subjects designs are:
The posttest-only design and the pretest/posttest design.
Posttest-only design
The posttest-only design is the simplest between-subjects design. In this design:
Subjects are randomly assigned to the levels of the independent variable.
The dependent variable is then measured once.
An example:
You wonder whether and to what extent a monetary vs. non-monetary bonus (the IV) increases employees’ work motivation (the DV). You randomly allocate participants to conditions, ask them to work on a task, and make sure that the only thing that differs between the groups is that you give a monetary bonus to group 1 and a non-monetary bonus to group 2. You measure their job motivation after the task. This allows you to test whether job motivation is higher in one group than the other.
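A hedged sketch of how a posttest-only design like this could be analysed: motivation is measured once, the groups are independent, so an independent-samples t-test compares the two groups. The scores below are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical posttest motivation scores.
monetary_bonus = np.array([4.8, 5.1, 4.5, 5.3, 4.9])       # group 1
non_monetary_bonus = np.array([4.2, 4.6, 4.4, 4.7, 4.1])   # group 2

# Independent-samples t-test: different participants in each group.
t_stat, p_value = stats.ttest_ind(monetary_bonus, non_monetary_bonus)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```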
Pretest/posttest design
In a pretest/posttest design:
Subjects are randomly assigned to the levels of an independent variable.
The dependent variable is measured twice: once before and once after exposure to the independent variable.
You wonder whether and to what extent a monetary vs. non-monetary bonus (the IV) increases employees’ work motivation (the DV). You randomly allocate participants to conditions, ask them to work on a task, and make sure that the only thing that differs between the groups is that you give a monetary bonus to group 1 and a non-monetary bonus to group 2. You measure their job motivation before and after the task. This allows you to test whether job motivation changed more for employees in group 1 than in group 2.
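One common way to analyse a pretest/posttest design like this, sketched below with invented scores, is to compute each participant's change score (posttest minus pretest) and compare the change scores between groups; an ANCOVA with the pretest as covariate is a frequently used alternative, not shown here.

```python
import numpy as np
from scipy import stats

# Hypothetical motivation scores before and after the task.
pre_group1 = np.array([4.0, 4.3, 3.9, 4.5, 4.1])    # monetary bonus
post_group1 = np.array([4.7, 4.9, 4.4, 5.1, 4.6])
pre_group2 = np.array([4.1, 4.2, 4.0, 4.4, 4.3])    # non-monetary bonus
post_group2 = np.array([4.3, 4.5, 4.1, 4.6, 4.4])

# Change scores: how much did motivation move within each participant?
change_group1 = post_group1 - pre_group1
change_group2 = post_group2 - pre_group2

# Did motivation change more in group 1 than in group 2?
t_stat, p_value = stats.ttest_ind(change_group1, change_group2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```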
Why use a posttest-only design?
Why would researchers ever use a posttest-only design? Why not always use a pretest/posttest design, so they can be sure the groups are equal before they experience a manipulation?
In rare circumstances, it may be problematic to measure the dependent variable beforehand, as it may influence the second measurement. If the pretest makes participants change their subsequent behavior/reaction, a pretest should be avoided.
An example:
You are interested in the effect of a television commercial on consumers’ brand attitude, and use a pretest and a posttest. It is possible that, because of the pretest, participants in the experiment watch the television commercial more attentively than consumers who do not participate in the experiment. Therefore, a posttest-only design may be preferable.
In business research, pretest/posttest designs are typically preferable to posttest-only designs.
What is a factorial design?
The factorial design
What happens when we add more than one independent variable (each of which can be either manipulated or measured)?
Adding an additional independent variable allows researchers to look for an interaction or moderator effect – whether the effect of one independent variable depends on the level of the other independent variable.
When researchers want to test for interactions, they do so using factorial designs. In a factorial design, researchers combine the two independent variables: they study each possible combination of the independent variables.
Researchers can manipulate each independent variable in a factorial design as within-subjects or between-subjects.
How is this done for within-subjects and between-subjects factors, and what happens in a mixed design?
In a between-subjects factorial design:
Both independent variables are studied as between-subjects. Therefore, if the design is a 2x2 design, there are four different groups in the experiment. In other words, there are different subjects in each cell. Each of these subjects is only subjected to one treatment.
In a within-subjects factorial design:
Both independent variables are manipulated as within-subjects. Therefore, if the design is a 2x2 design, there is only one group in the experiment but they participate in all four cells (or combinations) of the design.
In mixed factorial designs:
one independent variable is manipulated as between-subjects and the other is manipulated as within-subjects. Imagine a 2x2 mixed factorial design with 50 people per cell. What is the number of subjects in this study? The correct answer is 100: the between-subjects factor requires two separate groups of 50 participants, and each group completes both levels of the within-subjects factor (as in the sketch below).
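The participant arithmetic for the three variants can be written out as a small helper; `subjects_needed` below is a hypothetical function, not something from the course, and it simply reproduces the numbers used in these cards (200, 50, and 100).

```python
def subjects_needed(levels_a, levels_b, n_per_cell, design):
    """Total participants needed; in the 'mixed' case, factor A is the between-subjects factor."""
    cells = levels_a * levels_b
    if design == "between":   # every person sits in exactly one cell
        return cells * n_per_cell
    if design == "within":    # every person sits in all cells
        return n_per_cell
    if design == "mixed":     # separate groups only for the levels of factor A
        return levels_a * n_per_cell
    raise ValueError("design must be 'between', 'within', or 'mixed'")

print(subjects_needed(2, 2, 50, "between"))  # 200
print(subjects_needed(2, 2, 50, "within"))   # 50
print(subjects_needed(2, 2, 50, "mixed"))    # 100
```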
How to write the notation when an independent variable has more than two levels
So far, we focused on independent variables with two levels. For example, a 2x2 factorial design has two independent variables, each with two levels, creating four conditions (2x2 = 4). However, an independent variable can have more than two levels. The variable “education level,” for example, can have three levels: primary, secondary, or higher education.
The notation for a factorial design with two independent variables is “a x b”, where:
- a indicates the number of levels of the first independent variable
- b indicates the number of levels of the second independent variable
An example:
A 3x4 factorial design has two independent variables, one with three levels and one with four levels. It results in 12 cells (3x4 = 12).
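Since the "a x b" notation simply multiplies the numbers of levels, the cell count of any factorial design is the product of its factors' levels. A tiny illustrative sketch:

```python
from math import prod

# Number of cells = product of the number of levels of each factor.
print(prod([3, 4]))     # 3x4 design   -> 12 cells
print(prod([2, 2]))     # 2x2 design   ->  4 cells
print(prod([2, 2, 2]))  # 2x2x2 design ->  8 cells
```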
Analysing experimental designs
- One IV, two experimental conditions: t-test or univariate analysis of (co-)variance (one-way ANOVA)
- One IV, two or more experimental conditions: univariate analysis of (co-)variance (one-way ANOVA)
- More than one IV: factorial analysis of (co-)variance (e.g., two-way ANOVA)

In general: with one IV that has two conditions, a t-test is most common; with one IV that has more than two levels, a one-way ANOVA; and with more than one IV, a factorial ANOVA, i.e., a two-way ANOVA for two IVs or a three-way ANOVA for three IVs (see the sketch below).
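A minimal sketch of the analyses listed above, using pandas and statsmodels. The data are invented, and "deadline" is a hypothetical second IV that does not appear in these cards.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Invented 2x2 data set: bonus type (IV 1), deadline pressure (hypothetical IV 2),
# and work motivation (DV), with two participants per cell.
df = pd.DataFrame({
    "bonus":      ["monetary"] * 4 + ["non-monetary"] * 4,
    "deadline":   ["tight", "tight", "loose", "loose"] * 2,
    "motivation": [4.8, 5.0, 4.5, 4.6, 4.2, 4.4, 4.6, 4.7],
})

# One IV: one-way ANOVA (with two conditions this is equivalent to a t-test).
one_way = ols("motivation ~ C(bonus)", data=df).fit()
print(anova_lm(one_way, typ=2))

# Two IVs: two-way ANOVA, including the interaction term.
two_way = ols("motivation ~ C(bonus) * C(deadline)", data=df).fit()
print(anova_lm(two_way, typ=2))
```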
Measurement reliability in experimental research
In experimental studies, the dependent variable is measured and the independent variable is manipulated.
The measured dependent variable can either be very concrete (e.g., the number of M&Ms eaten) or more abstract (e.g., the perceived tastiness of the M&Ms). When the dependent variable is concrete, a single-item measure is typically sufficient. For more abstract variables, multi-item measures must be used.
To demonstrate the internal consistency (reliability) of multi-item measures,
Cronbach’s alpha is calculated. If Cronbach’s alpha is acceptable (>.70), the items in a measurement instrument are internally consistent and therefore the measurement instrument is reliable. One can then average the scores on the items to create a construct score for the dependent variable.
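A minimal sketch of how Cronbach's alpha can be computed by hand from item scores (made-up data; rows are participants, columns are the items of a multi-item measure):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the sum score)."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up responses of five participants to a three-item measure.
scores = np.array([
    [5, 4, 5],
    [3, 3, 4],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

# If alpha > .70, average the items per participant to get the construct score.
construct_scores = scores.mean(axis=1)
```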
How to write the notation with more than two independent variables
Sometimes, research studies have three independent variables. Such a design is called a three-way design. For example, in a 2x2x2 factorial design, there are two levels of the first independent variable, two levels of the second independent variable, and two levels of the third independent variable.
This leads to eight cells or conditions in the experiment (2x2x2 = 8).
Three-way factorial designs are complex to interpret. A three-way interaction means that the two-way interaction between two of the independent variables depends on the level of the third independent variable.
Measurement validity in experimental research
The validity of measured variables (such as the dependent variable in an experiment) can be demonstrated by
- Providing precedence (has this measure been used before?)
- Using sound logic (why does this measure capture the variable?)
The validity of manipulated variables can be demonstrated by
- Providing precedence (has this manipulation been used before?)
- Using sound logic (why should this manipulation alter the intended variable?)
- Manipulation checks
What is a manipulation check?
A manipulation check is a test used to determine the effectiveness of a manipulation in an experimental design. It is used to ensure that participants understood the manipulation the way the researcher intended.
A manipulation check is only necessary when researchers want to manipulate feelings or beliefs of the participants, i.e., when their intention is to make participants think or feel certain ways (e.g., time-constrained, uncertain, optimistic, etc.).
A manipulation check is not necessary when participants are manipulated to behave in a certain way, because the researcher can simply observe participants to make sure they were actually behaving as intended.
Suppose you would like to study whether students learn more when they take notes using laptops versus pens. You randomly assign students to two groups and have one group use pens and the other group use laptops. You do not need an extra question to check whether students used laptops versus pens, as you can simply observe this.
What does internal validity refer to?
Internal validity refers to the ability to draw valid conclusions about the causal effects of the independent variables on the dependent variable. In experiments, internal validity can be threatened in various ways.
In what ways can internal validity be threatened?
1) Experimenter bias
Experimenter bias occurs when the experimenter (intentionally or unintentionally) affects the data, participants, or results of an experiment because he or she is unable to remain objective. Most experiments are designed in a way that reduces the possibility of bias-distorted results. In general, biases can be kept to a minimum if experimenters are properly trained and clear rules and procedures are put in place for the experiment.
Steps that reduce the likelihood of experimenter bias include conducting blind studies and minimising exposure to experimenters:
- In a blind study, information that may influence the outcome of the experiment is withheld from the experimenters and the participants (e.g., when participants are unaware of the hypothesis, they cannot influence the outcome of the experiment).
- The less exposure respondents have to experimenters, the less likely they are to pick up cues that would influence their answers. One common way to minimise the interaction between participants and experimenters is to pre-record the instructions.
2) Design confounds
With design confounds, there is an alternative explanation for the causal effect because the experiment was poorly designed: another variable happened to vary systematically along with the independent variable.
Threats to internal validity:
- History effect: Events/factors outside the experiment have an impact on the DV during the experiment
- Maturation effect: Biological/psychological changes over time
- Testing effect: Prior testing affects the DV
- Instrumentation effect: The observed effect is due to a change in measurement
- Selection bias effect: Incorrect selection of respondents (experimental and/or control group)
- Mortality effect: Dropout of respondents during the experiment