Midterm Flashcards
Belmont Report
1) Beneficence: risk-benefit analysis of findings vs. harm
2) Autonomy: respect for participants and their decisions
3) Justice: fairness in accepting risk and receiving benefits
APA Code of Ethics
1) Beneficence: risk-benefit analysis of findings vs. harm
2) Fidelity and responsibility: maintaining trust and following through
3) Integrity: honesty and accuracy; no lying, cheating, fabrication, or plagiarism
4) Justice: fairness in accepting risk and receiving benefits
5) Respect: respecting individual differences, respecting consent, being aware of own biases
Six steps of a research project
1) Ask a question stemming from a theory
2) Develop a specific and testable hypothesis
3) Select a method and design the study
4) Collect the data
5) Analyze data and draw conclusions
6) Report findings
How do we minimize harm?
1) Informed consent
2) Debriefing
3) IRB (Institutional Review Board) review
What defines experimental design?
Must have manipulation of the independent variable and random assignment of participants to conditions
What is a quasi-experimental or subject variable?
A pre-existing trait of the participant that the researcher cannot manipulate; participants can instead be grouped on these traits (height, shoe size, age, eye color, etc.)
Internal validity
The extent to which causal conclusions can be substantiated
External validity
The extent to which results can be generalized
Construct validity
The degree to which variable operations accurately reflect the construct they’re designed to measure (free from systematic error)
Criteria for causality
1) Relationship between variables
2) Causal variable precedes affected variable
3) No third variable that could affect both (no confound); alternative explanations are ruled out
What makes a true experiment?
A true experiment manipulates the independent variable and uses random assignment, which is what gives it internal validity
Reliability
The extent to which a measure is consistent (free from random error)
Ways to measure reliability
1) Test-retest reliability
2) Internal consistency
3) Inter-rater reliability
Test-retest reliability
If you measure the same individuals at two different points in time, the results should be highly correlated
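In practice this is just a correlation between the two sets of scores. A minimal Python sketch, using made-up scores for five participants (all numbers are hypothetical):

```python
# Hypothetical scores: the same five participants measured at two time points.
time1 = [10, 14, 9, 16, 12]
time2 = [11, 15, 9, 17, 13]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

r = pearson_r(time1, time2)
print(round(r, 3))  # a high r suggests good test-retest reliability
```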
Internal consistency
Whether the individual items in a scale correlate well with one another; Cronbach’s alpha summarizes the average correlation among all the items
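Cronbach’s alpha can be computed from the item variances and the variance of participants’ total scores. A small Python sketch with hypothetical data (five participants, three items):

```python
# Hypothetical responses (rows = participants, columns = items on a scale).
responses = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [4, 4, 5],
]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(data):
    k = len(data[0])                                   # number of items
    item_vars = sum(sample_variance(list(col)) for col in zip(*data))
    total_var = sample_variance([sum(row) for row in data])
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(responses)
print(round(alpha, 3))  # values above ~0.7 are conventionally "acceptable"
```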
Inter-rater reliability
The agreement of observations made by two or more judges
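Beyond raw percent agreement, Cohen’s kappa is a common way to quantify two judges’ agreement while correcting for agreement expected by chance. A sketch with hypothetical codings:

```python
# Hypothetical codings: two judges classify eight behaviors as
# aggressive ("agg") or neutral ("neu").
judge_a = ["agg", "agg", "neu", "neu", "agg", "neu", "neu", "agg"]
judge_b = ["agg", "neu", "neu", "neu", "agg", "neu", "agg", "agg"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n   # raw agreement rate
    # Chance agreement: product of the judges' marginal proportions per label.
    labels = set(a) | set(b)
    expected = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(judge_a, judge_b)
print(round(kappa, 3))
```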
Ways to measure construct validity
1) Face validity
2) Content validity
3) Convergent validity
4) Discriminant validity
5) Predictive validity
6) Concurrent validity
Face validity
Whether the measure appears, on its face, to assess what it claims (how obvious it is to the participant what the test is measuring)
Content validity
Whether the measure covers the full content of the concept being assessed (typically judged by experts)
Convergent validity
The measure overlaps with a different measure that is intended to tap the same theoretical construct (a participant filling out two such surveys should get correlated results)
Discriminant validity
The measure does not overlap with other measures that are intended to tap different or opposite theoretical constructs
Predictive validity
The measure’s ability to predict a future behavior or outcome
Concurrent validity
The extent to which the measure corresponds with another current behavior or outcome
Nominal scale
Numbers stand for categories but have no numeric meaning themselves (e.g., male = 1, female = 2)