Research Design and Statistics Flashcards
What is the sequence of the scientific method?
- Form a hypothesis
- Operationally define the hypothesis: what will be measured to show results?
- Collect & analyze data
- Disseminate results
Independent Variable: Define
The variable that is manipulated by researchers
The variable that is thought to impact the dependent variable
Dependent Variable: define
The outcome variable
What is hypothesized to change as a result of the IV
Predictor & Criterion Variables: Define
Predictor: essentially the same as the IV, but it can’t be manipulated
E.g. gender, age
Criterion: essentially the same as the dependent variable
These terms are used in correlational research
Can a variable have levels in a study?
Yes, especially the independent variable
E.g. Male & Female could be levels of the predictor variable
No treatment / medication only / combined treatment could be levels of a treatment IV
Factorial Designs
These have multiple IVs
E.g. IV 1 is treatment; IV 2 is type of schizophrenia
Crossing every level of each IV with every level of the others makes the design factorial, which lets you examine both main effects and interactions
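Crossing the levels of two IVs can be sketched in a few lines; the IVs and level names below are hypothetical, just to show how the cells of a factorial design are formed:

```python
from itertools import product

# Hypothetical IVs: a 3-level treatment factor crossed with a
# 2-level schizophrenia-subtype factor gives a 3x2 factorial design.
treatment_levels = ["no treatment", "medication only", "combined"]
subtype_levels = ["paranoid", "catatonic"]  # hypothetical levels

# Every combination of levels is one cell of the design.
cells = list(product(treatment_levels, subtype_levels))
print(len(cells))  # 3 x 2 = 6 cells
```

Each cell is one unique combination of levels; comparing cell means is what lets you look for interactions.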
What gives a study Internal Validity?
If you can determine a causal relationship between the IV and DV
No/limited effects of extraneous variables
Internal Validity in Multiple Group Studies: what impacts it?
The groups must be comparable to control for extraneous/confounding factors
Internal Validity: History
What is it? Any external event that affects scores on the dependent variable
Example: learning environment between groups is different, w/ one being superior
Internal Validity: Maturation
What is it? an internal change that occurs in subjects while the experiment is in progress
Example: time may lead to intellectual development, or fatigue, boredom, hunger may impact it
Internal Validity: Testing
What is it? practice effects
Example: take an EPPP practice test, attend a prep course, then retake the test to see whether the course helped. Improvement may instead reflect simply knowing what to expect on the test
Internal Validity: Instrumentation
What is it? changes in DV scores that are due to the measuring instrument changing
Example: raters may gain more experience over time. This is why we need highly reliable measuring instruments
Internal Validity: What is Statistical Regression?
What is it? extreme scores tend to fall closer to the mean upon re-testing
Example: if you test people rated as severely depressed, they are likely by statistical regression alone to report less depression at re-test, regardless of any IV
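Regression to the mean can be shown with a quick simulation (all numbers hypothetical): subjects selected for extreme scores at time 1 score closer to the population mean at time 2, with no intervention at all.

```python
import random

random.seed(0)

# Each subject has a stable "true" score plus independent measurement
# noise at each testing occasion (all parameters hypothetical).
true_scores = [random.gauss(50, 10) for _ in range(10_000)]
time1 = [t + random.gauss(0, 10) for t in true_scores]
time2 = [t + random.gauss(0, 10) for t in true_scores]

# Select only subjects who scored "severe" (> 70) at time 1.
severe = [i for i, s in enumerate(time1) if s > 70]
mean1 = sum(time1[i] for i in severe) / len(severe)
mean2 = sum(time2[i] for i in severe) / len(severe)

# With no intervention, the selected group's re-test mean falls back
# toward the population mean of 50.
print(round(mean1, 1), round(mean2, 1))
```

The drop from `mean1` to `mean2` happens purely because the extreme time-1 scores were partly measurement noise.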
Internal Validity: Selection
What is it? Pre-existing subject factors that account for scores on DV
Example: Classroom A students may simply just be smarter than Classroom B students, so regardless of different interventions they will score better
Internal Validity: Attrition (Differential Mortality)
What is it? dropout is inevitable, so if the types of people who drop out differ between two groups, the groups are no longer comparable, which threatens internal validity
Example: studying a new SSRI, some people may experience a worsening of depression/SI while on it and drop out. Because they dropped out, the med may appear more helpful than it truly is
Internal Validity: Experimenter Bias
What is it? the researcher’s preconceived bias impacts how they interact with subjects, which in turn affects the subjects’ scores
AKA: experimenter expectancy effect, Rosenthal effect, Pygmalion effect
Example: experimenter unintentionally communicates expectations to subject
Prevention: double-blind technique
Protecting Internal Validity: Random Assignment
Each person has equal chance of ending up in a particular group
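A minimal sketch of random assignment, using a hypothetical subject pool: shuffling the pool and splitting it in half gives every person an equal chance of landing in either group.

```python
import random

random.seed(42)

# Hypothetical subject pool of 20 people.
subjects = [f"S{i}" for i in range(1, 21)]

# Shuffle, then split in half: each subject has an equal chance
# of ending up in either group.
random.shuffle(subjects)
treatment = subjects[:10]
control = subjects[10:]
print(treatment, control)
```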
Protecting Internal Validity: Matching
What is it? ID subjects who are matched on an expected confounding variable, and then randomly assign them to treatment/control group
Ensures that both groups have equal proportion of the confounding variable
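The matching procedure above can be sketched as follows, assuming a hypothetical confound such as baseline IQ: pair subjects on the confound, then randomly assign one member of each pair to each group.

```python
import random

random.seed(1)

# Hypothetical subjects with a suspected confound (baseline IQ).
subjects = [(f"S{i}", random.randint(85, 130)) for i in range(1, 21)]

# Sort on the confound, pair off adjacent subjects, then randomly
# assign one member of each matched pair to each group.
subjects.sort(key=lambda s: s[1])
treatment, control = [], []
for pair in zip(subjects[::2], subjects[1::2]):
    first, second = random.sample(pair, 2)
    treatment.append(first)
    control.append(second)

# Both groups contain one member of every matched pair, so their
# mean IQs end up nearly identical.
print(sum(iq for _, iq in treatment) / 10,
      sum(iq for _, iq in control) / 10)
```

Because every pair contributes one subject to each group, the groups stay balanced on the confound while assignment within each pair remains random.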
Protecting Internal Validity: Blocking
What is it? make the confounding variable another IV to determine to what extent it may be impacting the DV
Allows you to separate the effects of a variable and see interactions
Protecting Internal Validity: Holding the Extraneous Variable Constant
What is it? only use subjects who have the same status on the extraneous variable
Problem: results not generalizable to other groups
Protecting Internal Validity: Analysis of Covariance
What is it? a statistical strategy that adjusts DV scores so that subjects are equalized in terms of their status on extraneous variables
Pitfall: only effective for extraneous variables that have been identified by the researchers
External Validity: Define
The degree to which results of a study can be generalized to other settings, times, people
Threats to External Validity: Interaction between Selection & Treatment
What is it? effects of a treatment don’t generalize to other target populations
Example: may work with college students, but not non-college students
Threats to External Validity: Interaction between History & Treatment
What is it? effects of treatment don’t generalize beyond setting and/or time period the experiment was done in