Ch. 9 Flashcards

Flashcards in Ch. 9 Deck (44):

Experiment: Three critical steps

1. start with a causal hypothesis; 2. modify one specific aspect of a situation that is closely connected to the cause; 3. compare outcomes.


Experimental Research:

Offers the strongest tests of causal relationships.


Experiment: Three conditions for causality

1. temporal order, in which the independent variable precedes the dependent variable; 2. evidence of an association; 3. ruling out alternative causes.


Experimental technique:

Usually best for issues that have a narrow scope or scale.


Confounding variables:

In experimental research, factors that are not part of the intended hypothesis being tested, but that have effects on variables of interest and threaten internal validity.


Social Science experiments:

1. empirically based; 2. theory-directed.


Empirically based experiment:

To determine whether an independent variable has a significant effect on a specific dependent variable.


Random assignment

Participants divided into groups at the beginning of experimental research using a random process so the experimenter can treat the groups as equivalent.


Randomly assign:

We sort a collection of cases into two or more groups using a random process.


Random sample:

We select a smaller subset of cases from a far larger collection of cases.
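The difference between the two cards above can be sketched in Python (a minimal illustration; the participant IDs, pool size, and group sizes are made up):

```python
import random

# Hypothetical pool of 20 cases (IDs invented for illustration).
participants = [f"P{i}" for i in range(1, 21)]
random.seed(0)  # fixed seed so the sketch is reproducible

# Random sample: select a smaller subset from the larger collection.
sample = random.sample(participants, 5)

# Random assignment: sort ALL cases into groups by a random process,
# so the experimenter can treat the groups as equivalent.
shuffled = participants[:]
random.shuffle(shuffled)
experimental_group = shuffled[:10]
control_group = shuffled[10:]
```

Note that sampling leaves most cases unused, while assignment places every case into exactly one group.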



Subjects:

A traditional name for participants in experimental research.



Treatment:

The independent variable in experimental research.


Experiment: Parts

1. Treatment or independent variable; 2. Dependent variable; 3. Pretest; 4. Posttest; 5. Experimental group; 6. Control group; 7. Random assignment.


Dependent variables:

The outcomes in experimental research: the physical conditions, social behaviors, attitudes, feelings, or beliefs of participants that change in response to a treatment.



Pretest:

An examination that measures the dependent variable of an experiment prior to the treatment.



Posttest:

An examination that measures the dependent variable of an experiment after the treatment.


Experimental group:

The participants who receive the treatment in experimental research.


Control group:

The participants who do not receive the treatment in experimental research.



Deception:

A lie by an experimenter to participants about the true nature of an experiment, or the creation of a false impression through his or her actions or the setting.



Confederate:

A person working for the experimenter who acts as another participant or plays a role in front of participants to deceive them with an experiment's cover story.


Cover story:

A type of deception in which the experimenter tells a false story to participants so that they act as wanted and do not know the true hypothesis.


Experimental design:

The planning and arranging of the parts of an experiment.


Classical experimental design:

An experimental design that has random assignment, a control group, an experimental group, and a pretest and posttest for each group.


Preexperimental designs:

Experimental plans that lack random assignment or use shortcuts and are much weaker than the classical experimental design. They are substituted in situations in which an experimenter cannot use all of the features of a classical experimental design, at the cost of weaker internal validity.


One-shot case-study design:

An experimental plan with only an experimental group and a posttest but no pretest.


Static group comparison design

An experimental plan with two groups, no random assignment, and only a posttest.


Quasi-experimental designs:

Plans that are stronger than preexperimental ones; variations on the classical experimental design used in special situations or when an experimenter has limited control over the independent variable.


One-group Pretest-Posttest Design:

Has one group, a pretest, a treatment, and a posttest. It lacks a control group and random assignment.


Posttest-only nonequivalent group design:

A static group comparison. It has two groups, a treatment, and a posttest; it lacks random assignment and a pretest.


Interrupted time-series design:

An experimental plan in which the dependent variable is measured periodically across many time points and the treatment occurs in the midst of such measures, often only once.


Equivalent time-series design:

An experimental plan with several repeated pretests, posttests, and treatments for one group, often over a period of time.


Latin square design:

An experimental plan to examine whether the order or sequence in which participants receive versions of the treatment has an effect.
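One common way to build such orderings is a cyclic Latin square, in which each treatment version appears exactly once in every sequence position. A minimal sketch with three hypothetical treatment versions:

```python
# Hypothetical treatment versions A, B, C; each row is the order one
# group receives them, shifted cyclically so every version appears
# once in each position across the groups.
treatments = ["A", "B", "C"]
n = len(treatments)

square = [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]
# square == [['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']]
```

Comparing outcomes across rows shows whether the order of treatment versions, rather than the versions themselves, drives the results.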


Solomon four-group design:

An experimental plan in which participants are randomly assigned to two control groups and two experimental groups; only one experimental group and one control group receive a pretest; all four groups receive a posttest.


Factorial design:

An experimental plan that considers the impact of several independent variables simultaneously.


Interaction effect

A result of two independent variables operating simultaneously and in combination on a dependent variable; it is larger than the result that occurs from the sum of each independent variable working separately.
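The "larger than the sum" idea can be checked numerically. The group means below are hypothetical values chosen purely for illustration:

```python
# Hypothetical mean outcomes for a 2x2 factorial design (made-up values).
baseline = 10.0   # neither treatment
only_a = 14.0     # treatment A alone
only_b = 13.0     # treatment B alone
both = 22.0       # A and B together

effect_a = only_a - baseline    # +4
effect_b = only_b - baseline    # +3
combined = both - baseline      # +12

# Interaction: how much the combined effect exceeds the sum of the
# separate effects (here 12 - 7 = 5, so an interaction is present).
interaction = combined - (effect_a + effect_b)
```

If the treatments did not interact, `interaction` would be zero and the combined effect would simply equal the two separate effects added together.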


Design notation

A symbol system used to show parts of an experiment and to make diagrams of them.
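As an assumed illustration of the usual symbols (R = random assignment, O = observation or measurement, X = treatment), the diagrams for a few designs named in this deck can be written as simple strings:

```python
# One string per group, read left to right in time order.
# R = random assignment, O = observation, X = treatment.
classical = [
    "R  O1  X  O2",   # experimental group: pretest, treatment, posttest
    "R  O1     O2",   # control group: pretest, no treatment, posttest
]
one_shot_case_study = [
    "       X  O1",   # one group, treatment, posttest only
]
static_group_comparison = [
    "   X  O1",       # treated group, posttest only
    "      O1",       # comparison group, posttest only
]
```

Laying designs out this way makes it easy to see at a glance which parts (random assignment, pretest, control group) a weaker design is missing.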


Internal validity

The ability of experimenters to strengthen the logical rigor of a causal explanation by eliminating potential alternative explanations for an association between the treatment and the dependent variable through an experimental design. Occurs when the independent variable, and nothing else, influences the dependent variable.



Artifact:

An object in experimental research studies; refers to the type of confounding variable that is not part of the hypothesis but affects the experiment's operation or outcome. In field research studies, it refers to physical objects that humans created and that have cultural significance; specifically, objects that members use or to which they attach meaning, which we study to learn more about a cultural setting or its members.


Selection bias

A threat to internal validity that arises when groups in an experiment are not equivalent at the beginning of the experiment with regard to the dependent variable.


History effect

A result that presents a threat to internal validity because of something that occurs and affects the dependent variable during an experiment; it is unplanned and outside the control of the experimenter.


Maturation effect

A result that is a threat to internal validity in experiments because of natural processes of growth, boredom, and so on that occur during the experiment and affect the dependent variable.


Testing effect

A result that threatens internal validity because the very process of measuring in the pretest can have an impact on the dependent variable.


Experimental mortality

A threat to internal validity that arises because participants fail to continue through the entire experiment.


Statistical regression effect:

A threat to internal validity that arises from measurement instruments producing extreme values and the tendency of random errors to move extreme results back toward the average.
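A small simulation (hypothetical numbers) can show this effect: cases selected for extreme pretest scores land closer to the average on a retest, even with no treatment between the two measurements.

```python
import random

random.seed(42)

# True scores plus independent measurement error at two time points.
true_scores = [random.gauss(100, 10) for _ in range(10000)]
time1 = [t + random.gauss(0, 10) for t in true_scores]
time2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the cases with the most extreme time-1 scores (top 5%).
cutoff = sorted(time1)[int(0.95 * len(time1))]
extreme = [i for i, score in enumerate(time1) if score >= cutoff]

mean_t1 = sum(time1[i] for i in extreme) / len(extreme)
mean_t2 = sum(time2[i] for i in extreme) / len(extreme)
# mean_t2 falls back toward 100 even though nothing happened between tests.
```

Because the extreme time-1 scores partly reflect large random errors, those errors do not repeat at time 2, and the group's average drifts toward the mean.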