Validity of Experimental Design Flashcards
What is validity?
The extent to which a concept, conclusion, or measurement is well-founded and corresponds accurately to the real world
The degree to which the tool measures what it claims to measure (validity of a measurement tool)
The degree to which the study establishes the relationship that it claims to establish (validity of a study)
What is experimental research?
The investigator manipulates conditions for the purpose of determining their effect on behavior
Example: randomized controlled trial
What is non-experimental research?
Non-manipulative and observational
Examples: case studies, cohort studies, surveys, correlational studies
What are the three things that a true experiment requires?
An independent variable that is manipulated by the researcher and a dependent variable that is measured
Control group or comparison group
Subjects are assigned to groups using a randomization or pseudo-randomization process
What is the primary goal of research design?
To decrease or control the influence of attribute and extraneous variables as much as possible
What is an attribute variable?
Preexisting; level is inherent, not manipulated: physical, psychological or personal
Examples: Sex, race, age, educational attainment, hypertension
What are extraneous variables?
Factors not directly related to the purpose of the study, but that may affect the relationship between the variables that an experimenter is examining
Example: time of day they are seen
How do you control attribute/extraneous variables?
Homogeneity (use subjects that are homogeneous for the potential confounding variable; only recruit men or only in a certain age range)
Include in design (include the potential confounding variable in the study and analyze the results separately)
Matching (match subjects across the levels of the independent variable on relevant characteristics)
Statistical control (perform analysis of covariance to determine if the presumed confounding variable had an effect on the outcome; if so, include the variables as a factor or covariate)
Repeated measures (if the variable changes over time or is context dependent)
Randomization (the best way to control; all possible confounding variables are distributed similarly across groups)
What is the analysis of covariance (ANCOVA)?
A statistical method that allows researchers to examine the effect of an independent variable on a dependent variable while controlling for the influence of a third variable (covariate)
It adjusts the dependent variable scores based on the covariate, providing a more accurate analysis by removing potential confounding variability in the outcome
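The adjustment ANCOVA performs can be sketched in plain Python. This is an illustrative simplification (one covariate, pooled within-group slope, hypothetical data); real analyses use statistical software:

```python
from statistics import mean

def adjusted_means(groups):
    """One-covariate ANCOVA-style adjustment (illustrative sketch).

    Fits a single pooled slope of the outcome on the covariate, then
    reports each group's mean outcome adjusted to the grand covariate
    mean -- removing outcome variability explained by the covariate.
    `groups` maps group name -> list of (covariate, outcome) pairs.
    """
    # Pooled within-group slope: summed within-group cross-products
    # divided by summed within-group covariate sums of squares.
    sxy = sxx = 0.0
    for pairs in groups.values():
        cx = mean(x for x, _ in pairs)
        cy = mean(y for _, y in pairs)
        sxy += sum((x - cx) * (y - cy) for x, y in pairs)
        sxx += sum((x - cx) ** 2 for x, _ in pairs)
    slope = sxy / sxx

    grand_x = mean(x for pairs in groups.values() for x, _ in pairs)
    # Shift each group's mean outcome to what it would be if the
    # group had sat at the grand covariate mean.
    return {
        name: mean(y for _, y in pairs)
        - slope * (mean(x for x, _ in pairs) - grand_x)
        for name, pairs in groups.items()
    }

# Hypothetical data: (baseline score, post-treatment score) per subject.
data = {
    "treatment": [(10, 15), (12, 18), (14, 19), (16, 22)],
    "control":   [(11, 12), (13, 14), (15, 16), (17, 18)],
}
print(adjusted_means(data))
```

Because the treatment group happened to start with lower baseline scores, its adjusted mean is pulled up and the control group's is pulled down, isolating the part of the outcome difference not explained by baseline.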
What is random assignment?
Each subject has a known (typically equal) chance of being assigned to each group
Random assignment is used to equilibrate all potentially prognostic indicators across groups
Anything that might influence the outcome (confounder) is balanced across groups (ceteris paribus)
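A minimal sketch of random assignment (the subject list and `randomly_assign` helper are hypothetical, for illustration only):

```python
import random

def randomly_assign(subjects, n_groups=2, seed=42):
    """Shuffle subjects and deal them into groups round-robin.

    Each subject has an equal chance of landing in each group, so
    confounders (age, sex, severity, ...) tend to balance out.
    """
    rng = random.Random(seed)   # fixed seed only for reproducibility
    pool = list(subjects)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

# Hypothetical subjects carrying a potential confounder (age).
subjects = [("S%03d" % i, 20 + i % 40) for i in range(200)]
treatment, control = randomly_assign(subjects)

mean_age = lambda group: sum(age for _, age in group) / len(group)
print(len(treatment), len(control))                        # 100 100
print(round(mean_age(treatment) - mean_age(control), 1))   # ages balance closely
```

With 100 subjects per group, the mean ages of the two groups end up close without anyone matching on age by hand; that is the balancing property randomization buys.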
What is random selection?
Randomly choosing subjects from the accessible population
Rarely, if ever, accomplished
When should control groups be used?
To rule out the influence of extraneous effects
Compared to the experimental group
If the only difference between the experimental and control groups is the level of the IV, and if the treated group improves, but the control group does not, then we assume our manipulation caused the improvement (cause-and-effect)
In the best studies, the only difference between the experimental and control groups is the level of the IV
What is a single blind study?
Investigator or subject is blinded to group assignment
What is a double blind study?
Both subjects and researcher are blinded
What does blinding do for the study?
Blinding protects from subject and researcher bias by disguising group assignment and hypothesis
Blinding is more important when the outcome measured is subjective (you don’t have to blind if the outcome is death)
What is statistical validity?
There is a relationship between the independent variable and the dependent variable
Concerns the potentially inappropriate use of statistical procedures for analyzing data
Leads to invalid conclusions about the relationship between the IV and DV
What is power?
The probability of finding a statistically significant difference if a real difference exists
Conventionally it is set at 80%
What does power depend on?
Sample size (all else being equal, a larger sample means more power)
Variability (the greater the heterogeneity, the more difficult it is to find a difference)
Effect size (how large is the difference)
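These three dependencies can be shown with a normal-approximation power formula (a sketch, not exact t-test power; function name and numbers are illustrative):

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-group comparison.

    d is the standardized effect size (mean difference / SD).
    Normal approximation: power = Phi(d * sqrt(n/2) - z_crit),
    ignoring the negligible opposite-tail contribution.
    """
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    return norm.cdf(d * (n_per_group / 2) ** 0.5 - z_crit)

# Power rises with sample size and with effect size:
print(power_two_sample(0.5, 64))    # ~0.80, the conventional target
print(power_two_sample(0.5, 128))   # larger sample  -> more power
print(power_two_sample(0.8, 64))    # larger effect  -> more power
```

Variability enters through d itself: a more heterogeneous sample means a larger SD, a smaller d, and therefore lower power.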
What are the types of error?
Type I - reject the null hypothesis when a true difference does not exist (false positive)
Type II - failing to reject the null hypothesis when a real difference does exist (false negative)
Do small samples yield low power and increase the probability of a Type II error?
Yes
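This can be checked by simulation. The sketch below (hypothetical names, a two-sample z-test with SD fixed at 1 for simplicity) estimates how often a test comes out "significant" when there is no effect versus a modest real effect:

```python
import random
from statistics import mean

def sim_error_rates(n_per_group, true_diff, trials=2000, seed=1):
    """Fraction of trials declared significant by a two-sample z-test.

    With true_diff = 0 this estimates the Type I error rate (~alpha);
    with true_diff > 0 it estimates power, and 1 - power is the
    Type II error rate.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(true_diff, 1) for _ in range(n_per_group)]
        # Standard error of a difference of means with SD = 1: sqrt(2/n)
        z = (mean(b) - mean(a)) / (2 / n_per_group) ** 0.5
        if abs(z) > 1.96:          # two-sided test at alpha = 0.05
            hits += 1
    return hits / trials

print(sim_error_rates(20, 0.0))   # Type I rate stays near 0.05
print(sim_error_rates(20, 0.5))   # power is low with only 20 per group
```

With only 20 subjects per group and a medium effect, power lands around one third, so the Type II error rate is high; the false-positive rate, by contrast, is fixed by alpha regardless of sample size.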
What is internal validity?
Evidence that the independent variable causes the dependent variable
Requires ruling out alternative explanations for observed changes before attributing them to the intervention
You must be able to establish that the IV (e.g., therapy) and only the IV caused a differential change in the DV across groups
True experiments have high internal validity due to the controlling properties of randomization and control groups
What is external validity?
If the findings can reasonably be generalized beyond a specific experimental situation
The extent to which the results of a study can be generalized beyond the internal specifications of the study sample
Concerned with the usefulness of the results in the real (clinical) world
Does clinical research lack complete control?
Yes
But it can closely replicate the clinical world
Absolute control is only possible in a test tube
If factors can’t be controlled and they may influence findings, they represent limitations to the study
Need to weigh control's benefit to internal validity against its cost to external validity (as one goes up, the other tends to go down)
What are the hallmarks of an experimental design?
Group assignment (preferably, random)
Concurrent control group