Exam 1 Flashcards

(102 cards)

1
Q

Intuition

A
  • Intuition: reliance on “gut feelings,” emotions,
    and/or instincts – “I feel that this is true,” “I have a hunch”
  • Example: Gambler’s fallacy
  • Strengths?
    – May allow us to “know” things we otherwise could not know
    – May help avoid “analysis paralysis”
  • Weaknesses?
    – Driven by cognitive and motivational biases rather than logical reasoning or
      scientific evidence
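The gambler’s fallacy (believing a streak makes the opposite outcome “due”) can be checked with a quick simulation. This is an illustrative sketch, not part of the original card, using hypothetical fair-coin flips:

```python
import random

random.seed(42)  # reproducible illustration

# Flip a fair coin many times; True = heads
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# After every run of 5 consecutive heads, record the NEXT flip
next_flips = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]

p_heads = sum(next_flips) / len(next_flips)
print(f"P(heads after 5 heads in a row) = {p_heads:.3f}")  # stays near 0.5
```

The streak does not make tails “due”: each flip is independent, so the proportion of heads after a run of heads still hovers around 0.5.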
2
Q
Authority
A
  • Authority: accepting a new idea
    because an authority figure states
    that it is true
  • Strengths? – Efficient, conserves effort
  • Weaknesses?
    – Can be wrong
    – May not be trustworthy
    – Can hinder our own judgment
3
Q

Rationalism

A
  • Rationalism: using logic and reasoning to
    acquire new knowledge
  • Example: H2O must be a solid, liquid, OR a gas
    – Reasoning: “Water’s state depends on temperature. It can’t be freezing cold,
      room temperature, and boiling hot all at once!”
  • Strengths?
    – Isn’t limited to sensory observation
    – Checked by rules of logic and internal consistency
  • Weaknesses?
    – Error in premise or logic ⟶ wrong conclusion!
4
Q

Errors in Rationalism: Water’s Triple Point

A

Get the YouTube video.

5
Q

Empiricism

A

  • Empiricism: acquisition of knowledge through observation and one’s experiences
    – Core of the scientific method
  • Strengths?
    – Relatively easy to test
    – Allows for self-correction
  • Weaknesses?
    – Limits to what we can observe
    – Can be organized/interpreted in biased ways

6
Q

What does it mean to know?

A
  • To truly “know” is to reject uncertainty
    – But this isn’t possible (at least entirely)
  • Instead, consider: How much are you willing to not know?
    – In other words, what is your tolerance for uncertainty?
  • Answer to the question depends on the issue, problem, or context
    – We will eventually talk about Type I and Type II error
7
Q
  • We cannot rely on any single method of knowledge acquisition
A
  • Certain means of acquiring knowledge may be better suited for different purposes
    – Idea generation: Intuition
    – Analysis: Rationalism
    – Experimentation: Empiricism (Science!)
  • Important to maintain a healthy attitude of skepticism
    – = pausing to consider alternatives and search for evidence
    – This does not mean being cynical or distrustful!
8
Q

What is Science?

A
  • Better conceptualized as a process rather than a product
  • The scientific process has the following features:
    1. Uses systematic empiricism ⟶ observations are carefully planned, conducted,
       recorded, and analyzed
       – Allows observations to be replicable
    2. Asks empirical questions ⟶ questions can be answered through observation
    3. Creates public knowledge ⟶ knowledge is made widely available
  • Scientific claims must be falsifiable
    – = it must be possible to disprove claims if they are wrong
    – Allows for improvement/self-correction
  • These features help differentiate between science and pseudoscience
9
Q

Pseudoscience

A
  • Pseudoscience: activities and beliefs that are claimed to be scientific by
    their proponents (and may appear to be scientific at first glance) but are not
    – Lacks the support of scientific research (or disconfirming research is ignored)
    – Often does not address empirical questions
10
Q

Red Flags of Pseudoscience

A
  1. Does not build on existing scientific
    knowledge
  2. Uses language that sounds impressive
    but is actually vague or meaningless
  3. Excessive reliance on anecdotes or
    testimony as evidence
  4. Focuses heavily on “proof” (rather than
    acknowledging areas for more research)
  5. Fails to acknowledge the conditions under which
    their claims do NOT hold true
  6. Tends to use loopholes to explain away
    counterevidence and prevent claim from being
    disproven
  7. Suppresses or distorts unfavorable data
  8. Refuses to self-correct with new evidence
  9. Overreliance on opinions of authority figures, especially “false
    authorities” (people with no real expertise)
  10. Evades peer review, slanders critics, and/or places burden of proof on
    skeptics
11
Q

Why discuss pseudoscience in a class about science?

A
  • Pseudoscience can be harmful – E.g., HIV/AIDS denialism
  • Dissecting pseudoscience helps highlight the
    importance of the fundamental features of
    science
12
Q

Goals of Psychological Science

A

  • Describe a phenomenon (e.g., behavior, cognition, emotion)
  • Explain, organize, and understand that phenomenon
  • Predict outcome(s) of the phenomenon and/or link 2+ phenomena across time
  • Control: apply what we learned about the phenomenon to the real world

13
Q

What is Psychology?

A

  • Psychology is the scientific study of cognition, emotion, and behavior
    – True psychology uses the scientific method
  • By its nature, a lot of psychological phenomena are difficult to observe
    – We are inherently limited by the complexity of what we study
    – Even what can be observed, like behavior, is generated from a place that’s
      difficult to access (the brain)
  • Recall: Allegory of the Cave
    – There are many things in psychology that we don’t yet understand…
    – …but we’ll get there someday!
14
Q

Experimental vs. Clinical Psychologists

A
  • A psychologist is someone who holds a doctorate in psychology
  • Two main kinds of psychologists
    – Experimental: research psychological processes (e.g., attention, memory,
      reaction times) in basic and applied forms
      - Most are not trained in clinical work (e.g., therapy, psychological treatment)
    – Clinical: trained in diagnosing and treating psychological disorders and
      related problems
      - Many also conduct research on these clinical topics
  • Boundaries between “experimental” and “clinical” research are often blurred

15
Q

The Scientific Method

A
  • How many steps are in the scientific
    method?
    – It actually doesn’t matter!
    – The “steps” are arbitrary and can be
    broken down simply or in detail
  • Although often represented as linear,
    the scientific method is cyclical

– Research questions turn into research
literature

– Research literature inspires new
questions in readers!

  • In reality, the scientific method has nested cycles
16
Q

Forming a Research Question

A
  • Informal observations – “I saw on the news that someone with an IQ of 70 was executed in Georgia.
    Isn’t that not allowed?”
  • Practical problems – “I’m a defense attorney, and I’m worried about that expert the prosecution
    hired to give testimony. What if he’s biased?”
  • Previous research
    – “There’s a lot of research on expert witness bias in psychopathy assessment.
    Could this be the case with IQ assessment too?”
17
Q

Great question! Should we conduct a study?

A

Not yet! There’s a lot of research out there…someone may have already answered your
question!

  • Research literature = peer-reviewed articles in academic journals (or
    scholarly books)
    – Peer review: when other researchers evaluate your submission, give feedback,
      and make a recommendation to the journal (accept, reject, or revise)
  • Reviewing existing literature can…
    – Tell you if your research question has already been answered
    – Help you evaluate the interestingness of a research question
    – Give you ideas for how to conduct your own study
    – Tell you how your study fits into the existing research literature
18
Q

Okay…NOW can we do the study?

A

Not quite! Let’s check out that question again
  • “Are expert witnesses in death penalty cases biased in favor of their retaining
    party?”
    – Is it interesting? Research should be important and relevant
      - Life or death situations
      - Assessments that should be reliable…but may not be
    – Is it feasible? There needs to be a way to answer your question
      - Many court cases are available through databases
      - They often list the scores an expert gives, and who retained that expert
  • Seems like a good question!

19
Q

Developing a Hypothesis & Designing Your Study

A

So you have your question…

  • What do you think the answer will
    be, and why?
    – That’s your hypothesis!
  • How do you plan to answer your
    question?
    – That’s your study design!
    – Designing your study requires
    many considerations
20
Q

Analysis & Conclusions

A
  • Research draws conclusions based on statistical analysis
    – Many kinds of analyses ⟶ vary depending on hypothesis and study design
  • Statistics can provide a lot of nuanced information…
    – …but researchers tend to care most about probability
  • “What are the odds that these findings are only due to random chance?”
    – Ideally, these odds will be low
    – Low probability = very unlikely that the results had nothing to do with the
      independent variable
      - More on variables in a later lecture
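Those “odds of random chance alone” can be estimated directly with a permutation test. A minimal illustrative sketch (the group scores below are made up, not from the lecture):

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical scores for two groups (e.g., treatment vs. control)
treatment = [14, 15, 16, 17, 18, 19, 20, 21]
control   = [10, 11, 12, 13, 14, 15, 16, 17]
observed_diff = sum(treatment) / len(treatment) - sum(control) / len(control)

# Shuffle group labels many times and count how often chance alone
# produces a difference at least as large as the observed one.
pooled = treatment + control
n = len(treatment)
count = 0
n_perms = 10_000
for _ in range(n_perms):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if abs(diff) >= abs(observed_diff):
        count += 1

p_value = count / n_perms
print(f"Observed difference: {observed_diff}, p = {p_value:.4f}")
```

A low p here means random relabeling of the groups rarely reproduces the observed difference, i.e., the result is unlikely to be chance alone.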
21
Q

Communicating Your Results

A
  • Usually done through publication
    – But can also be done through other means (e.g., posters, presentations)
  • And then the cycle begins again!
    – “What an interesting study on expert bias. I wonder if the results would be
      the same if experts were randomly assigned to defense or prosecution?”
22
Q

The Beauty of Science

A
  • Obviously, the scientific method is a lot (and this was only an overview)
  • Conducting research isn’t for everyone…
    – …but understanding and respecting the scientific process is for everyone
  • Even for those who do love research, the process can be frustrating
    – But we accept the good with the bad
    – And there’s so much good!
23
Q

What makes a topic interesting?

A
  1. Doubt – A research topic that is interesting
    reflects an uncertainty about a
    state of the world
    – By extension, the ‘answer’ for your
    research question should be one
    that has not yet been answered (in
    your exact way)
24
Q
  2. Contribution to previous literature
A

  – The results of your research topic should address a gap in the research
    literature
  – The idea should be original in some form
    - Overall concept
    - Population
    - Design/method
    - Type of analysis
    - Outcome
  – This is often where replications get left behind (even though need for
    replication IS a gap to address!)

25
3. Practical Implications
  – Research should be relevant to the real world (whether directly or indirectly)
    - “Why do we care?”
  – Implications could vary depending on factors like setting, population, etc.
    - What does society gain from your research?
26
Feasibility
  • Many interesting ideas are not feasible
  • Various factors can affect feasibility
    – Time
    – Money
    – Equipment and materials
    – Technical knowledge and skill
    – Access to research participants
  • For our class, we are limited by…
    – Questionnaires in survey (also a factor of time)
    – Level of statistical knowledge
    – Participants
27
  • Research questions should be fairly specific
    – To guide you through this aspect, we need to learn some terminology
  • Variable: a factor that can vary/change in ways that can be observed, measured,
    and verified
  • Some variables can be directly manipulated and thus randomly assigned
    – You can randomly assign someone to an “exercise” condition
    – You cannot randomly assign age
28
  • Some variables are used to predict other variables or outcomes – predictor
    variables (e.g., age, gender, SES)
    – Independent variable: a specific type of predictor variable that is
      experimentally manipulated (e.g., type of shoe, dose of medicine)
  • Some variables are what we ultimately care about measuring – outcome variables
    (e.g., crime rate, adult IQ, reaction time)
    – Dependent variable: the outcome variable in an experiment (i.e., the variable
      that is theoretically influenced by the independent variable)
  • Extraneous variable: any variable besides the independent/dependent variable
    – Confounding variable: an extraneous variable that varies systematically with
      the independent variable ⟶ confuses results!
29
Types of Variables: Categorical
  • A categorical variable represents a characteristic or quality
    – Also called qualitative or discrete variables (these are synonyms!)
  • There are three types of categorical variables:
    – Nominal variables: two or more categories with no intrinsic order
      - Examples: occupation, nationality
    – Dichotomous variables: a type of nominal variable with exactly two categories
      - Examples: yes/no, sex assigned at birth
    – Ordinal variables: two or more categories that can be ordered or ranked
      - Examples: socioeconomic status, level of agreement
30
Types of Variables: Continuous
  • A continuous variable represents a numerical quantity and can be measured along
    a continuum
    – Also called quantitative variables (this is a synonym!)
  • There are two types of continuous variables:
    – Interval variables: zero is not meaningful (or not defined)
      - A value of 0 does not mean there is none of that thing
      - Example: temperature (in °F or °C) ⟶ 0°F does not mean there is
        “no temperature”
    – Ratio variables: zero is meaningful and defined
      - A value of 0 indicates that there is none of that thing
      - Example: weight ⟶ 0 lbs means something has no weight
31
Operational Definitions
  • To create variables, we must utilize operational definitions
    – = precise definition of a variable, including how it is measured
  • Example: “depression” is a different type of variable depending on how you
    operationalize it

    Type        | Operational Definition                              | Measured Using…
    ------------|-----------------------------------------------------|---------------------------------
    Dichotomous | Meets criteria for any depressive disorder (yes/no) | DSM-5 criteria
    Nominal     | Type of depression (MDD, PDD, PMDD, etc.)           | DSM-5 criteria
    Ordinal     | Severity of MDD (none, mild, moderate, severe)      | DSM-5 criteria and # of symptoms
    Ratio       | Level of depression                                 | Beck Depression Inventory (0–63)
32
Beyond Your Research Question
  • When you ask a research question, you should have an educated guess about what
    the answer will be
    – This guess will be used to form your hypothesis
  • But first: where did that guess of yours come from?
33
Theory
  • Theory: a coherent explanation or interpretation of one or more phenomena
  • Good theories:
    – Are based on existing knowledge
    – Generate new knowledge
    – Allow for hypothesis testing
34
Hypothesis
  • Hypothesis: a specific prediction about a new phenomenon that should be observed
    if a particular theory is accurate
    – If–then relationship between theories and hypotheses
    – “Educated guess” ⟶ theory-based prediction
35
Theory Testing
  • Researchers utilize the cyclical hypothetico-deductive method to test theories
  • Observation of phenomena
    – Triplett (1898): children completed a task faster in presence of others than
      when alone
  • Construction of a theory or utilization of an existing theory
    – Initial theory: people perform better in the presence of others
  • A hypothesis is created to predict future phenomena and tested while
    manipulating conditions
    – Results: sometimes people perform better…but sometimes they perform worse!
  • The theory is revised or modified
    – Zajonc (1965): presence of others increases arousal, which increases
      likelihood of dominant response
36
In research, there is always a trade-off…
…between internal validity and external validity
37
Internal validity:
the degree to which we can confidently infer a causal relationship between variables
  – Causal relationships are the extent to which changing one variable creates or
    leads to change in the other variable
38
External validity:
the degree to which we can generalize findings to circumstances or settings outside
the lab (real-world environment)
  • What is more important?
    – Confirmed cause-and-effect when there is no interference (e.g., in a lab)?
    – Confirmed relationship despite interferences (i.e., in the real world), but no
      causal conclusion?
    – Can we ever have both?
39
Research Design vs. Data Collection Method
These concepts overlap a lot, but they are distinct
  • Research design: the overall structure of a study, such as…
    – Are you randomly assigning participants to groups?
    – Are you manipulating an independent variable?
    – What procedure will you use to gather information on your variables?
      (= data collection method)
  • In this lecture, we will discuss:
    – Data collection methods
    – Research designs
    – Most common data collection methods for each research design
40
Naturalistic Observation
  • Naturalistic observation: noninvasive observation of individuals in their
    natural environments
  • Goal: detect behavior patterns that exist naturally—patterns that might not be
    apparent in a laboratory
    – High external validity (relatively generalizable ✅)
    – Allows researchers to study human behaviors that cannot ethically be
      manipulated in an experiment
  • Problems?
    – Low internal validity (cannot infer causation ❌)
    – Time- and resource-intensive
    – Potential observer effect
    – Multiple observers may record information differently
      - = issues with interrater reliability
41
Systematic Observation
  • Systematic observation: control is exerted over the conditions under which the
    behavior is observed
    – Typically involves a controlled task to indicate the behavior of interest
      (e.g., speed, accuracy)
    – Often cognitive or biological in nature (e.g., memory accuracy, brain activity)
  • Typically yields high internal validity (may allow for causal inference ✅)
    – Situation is controlled
    – Often conducted in laboratory setting
  • Problems?
    – Low external validity (too controlled; not generalizable ❌)
    – Some behaviors can’t be easily measured using a controlled task
42
Field Experiment
  • Field experiment: a field study (“real world” environment outside the
    laboratory) conducted under controlled conditions
    – Independent variable is manipulated
    – Extraneous variables are controlled as much as possible
  • Can have high external AND high internal validity
  • Problems?
    – Time- and resource-intensive
    – Difficult to replicate
    – Still can’t control everything
    – Hard to record results
    – Potential ethical concerns
43
Case Study
  • Case study: highly detailed examination and description of a single individual
    (or very small group)
  • Goal: generally used to investigate rare, unusual, or extreme conditions
    – Quite useful in clinical, neurological, and neuroscientific areas
  • Problems?
    – Low internal validity (cannot infer causation ❌)
    – AND low external validity (not generalizable ❌)
44
Survey Research
  • Survey research: participants fill out surveys or questionnaires to gather
    information
    – Usually in self-report form, but can be submitted by an informant
    – May also be administered as an interview
  • Goal: investigate opinions, behaviors, or characteristics of a particular group
    – Often concepts that can’t be easily observed on a large scale
  • Problems?
    – Do people answer honestly?
      - One strategy is to ask the same question in different manners
      - Computer surveys may elicit more honesty
    – Validity varies
      - We’ll discuss more when we learn about Survey Research
45
Content Analysis of Archival Records
  • Content analysis: type of archival data collection that involves analyzing
    something that has already been said (e.g., speech, interview) or written
    (e.g., book, article, court case)
    – No interaction between research subject and researcher (no risk to
      participants)
    – High external validity (relatively generalizable ✅)
    – A coding protocol (a.k.a. coding scheme) must be developed by the researcher
      to determine how qualitative info will be interpreted and quantified for
      analysis
  • Problems?
    – Low internal validity (cannot infer causation ❌)
    – Resource-intensive (e.g., need lots of research assistants)
    – Time-consuming (even with many RAs)
46
Non-experimental Research
  • Non-experimental research: researchers measure variables as they naturally
    occur (in the lab or real world)
    – Lacks random assignment or manipulation of an independent variable…
    – …so it has low internal validity…
    – …and therefore cannot lead to causal inferences
  • BUT it often has higher external validity (more generalizable) than experiments
  • Two main kinds of non-experimental research that we will discuss now
    – Nuances of non-experimental research will be covered in Unit 2
47
Observational Research
  • Observational research refers to several different types of non-experimental
    methods in which behavior is carefully observed and recorded
  • Goal is usually to describe a variable or set of variables
  • Why is it non-experimental?
    – Nothing is manipulated or controlled…
    – …so it cannot lead to causal conclusions
48
Correlational Research
  • Correlational research: examines the relationships between multiple variables
    without manipulating them
    – Goal is usually to explain (or maybe predict) the relationship between
      variables
  • Positive correlation: as predictor variable increases, outcome variable also
    increases
    – Example: as activity level increases, muscle mass increases
  • Negative correlation: as predictor variable increases, outcome variable
    decreases
    – Example: as age increases, walking speed decreases
49
Correlation Coefficient
  • Correlation coefficient: numerical indication of the magnitude and direction of
    the relationship between two variables
    – Can range from -1 to +1
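As an illustration with made-up numbers (not from the card), a Pearson correlation coefficient for the earlier “activity level vs. muscle mass” example can be computed by hand:

```python
# Hypothetical data: activity level (hours/week) and muscle mass (kg)
activity = [2, 4, 6, 8, 10]
muscle = [20, 24, 25, 29, 32]

n = len(activity)
mean_x = sum(activity) / n
mean_y = sum(muscle) / n

# Pearson r = covariance / product of standard deviations
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(activity, muscle))
var_x = sum((x - mean_x) ** 2 for x in activity)
var_y = sum((y - mean_y) ** 2 for y in muscle)
r = cov / (var_x * var_y) ** 0.5

print(f"r = {r:.2f}")  # r = 0.99
```

A value near +1 indicates a strong positive correlation (both variables rise together); a value near -1 would indicate a strong negative correlation.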
50
Illusory Correlations
  • Illusory correlations: false associations between any two variables
    – Let’s look at some examples of these!
51
Experimental Research
  • Experimental research: a highly controlled procedure that involves manipulation
    of an independent variable and measurement of a dependent variable
    – In a true experiment, all potential confounding variables are controlled for
      or eliminated…
    – …so true experimental research CAN infer causal relationships!
  • This is ideal, but difficult (sometimes impossible) to achieve for certain
    research questions
52
Quasi-Experimental Research
  • Sometimes random assignment is not possible for an independent variable
    (e.g., age)
  • Quasi-experimental research: a comparison is made, as in an experiment, but no
    random assignment of participants to groups occurs
    – Resembles experimental research, but is NOT truly experimental
  • Unlike non-experimental research, the independent variable is measured before
    the dependent variable ⟶ no directionality problem
  • But unlike true experiments, there is no random assignment ⟶ possibility of
    confounds
    – Conclusions from quasi-experiments must be more tentative than conclusions
      from experiments
    – Cannot truly infer causation (can only estimate causation)
53
Ethics
  • Ethics: branch of philosophy that is concerned with morality
    – Can also refer to a set of principles and practices that provide moral
      guidance in a particular field
54
Nuremberg Code (1947)
  • In the Nuremberg trials, Nazi physicians were tried for shockingly cruel
    research on concentration camp prisoners during WWII
  • Led to the Nuremberg Code (1947), one of the earliest ethics codes
    – Provided a standard against which to compare the behavior of the men on trial
  • Particularly clear about:
    – Carefully weighing risks against benefits
    – Need for informed consent
55
The Declaration of Helsinki (1964)
  • The Declaration of Helsinki was created by the World Medical Association in 1964
  • Similar to the Nuremberg Code, but added some standards
    – Research with human subjects should be based on a written protocol
      - = detailed description of the research
    – Protocol should be reviewed by an independent committee
  • Revised several times (most recently in 2004)
56
Tuskegee Syphilis Study
  • In 1932, the Public Health Service began working with the Tuskegee Institute to
    study the course of syphilis
  • 600 Black men (399 with syphilis, 201 without)
  • Study projected to last 6 months; actually lasted 40 years
  • Penicillin available as frontline treatment in 1947—but participants were
    denied access
57
Belmont Report (1978)
  • The controversial Tuskegee Study led to the Belmont Report (1978), a
    publication of federal guidelines for ethical research behavior
    – Justice: distribute risks and benefits fairly across different groups at the
      societal level
    – Respect for persons: acknowledges individuals’ autonomy
      - Protection for those with diminished autonomy (e.g., prisoners, children)
      - Informed consent
    – Beneficence: maximize the benefits of research while minimizing harms to
      participants and society
  • Basis of the Federal Policy for the Protection of Human Subjects
    – = set of laws that apply to research conducted, supported, or regulated by
      the federal government
58
Institutional Review Boards
  • An important aspect of that federal policy was the requirement of Institutional
    Review Boards (IRBs)
    – Committee that reviews research protocols for potential ethical violations
    – 5 people from different backgrounds, differing sexes/genders, scientists and
      nonscientists, as well as someone not affiliated with the institution
  • An IRB helps ensure that…
    – Risks of the proposed research are minimized
    – Benefits outweigh the risks
    – Research is carried out in a fair manner
    – Informed consent procedure is adequate
59
Major Issues in Conducting Human Research
  • Identify and minimize risks
  • Protect participants’ privacy
  • Obtain informed consent
  • Minimize deception
  • Weigh risks against benefits
60
Identifying Risks
  • Every study has possible risks!
    – Risk can take multiple forms (e.g., physical, psychological, social)
  • “Does the study pose something more—in terms of magnitude or probability—than
    everyday risk?”
    – No = “minimal risk”
    – Yes = “at risk”
  • The greater the risk, the greater the obligation of the researcher to protect
    participants
61
Managing Risks to Privacy
  • Three dimensions to participants’ right to privacy
    – Sensitivity of the information
    – Setting (public vs. private)
    – Degree & manner of dissemination of results
  • Confidentiality is the minimum standard in terms of working with participants
    – = agreement to avoid disclosure of personal information without consent or
      legal authorization
    – E.g., therapist and client, priest and confession-seeker
  • Anonymity is ideal, where participants’ names and other personally identifiable
    information are not collected at all
62
Informed Consent
  • Before conducting a study, researchers must decide whether informed consent is
    necessary according to APA Standard 8.05
    – Usually, it is
  • Researcher obligations include:
    – Informing participants of all aspects of a study that may affect their
      willingness to participate
    – Responding to participants’ questions about the study
    – Indicating that participation is voluntary and allowing subjects to terminate
      their involvement at any time without penalty
  • Researchers should provide this information verbally and in a written consent
    form
  • Some individuals cannot provide consent
    – Individuals who have a developmental disability that would otherwise render
      them unable to competently consent to study procedures/responsibilities
    – Children (<18 years)
      - Assent may be given in cases >~13 years of age
    – Individuals with certain forms of psychological issues
  • Consent must be given without undue inducement
    – Is the potential reward so amazing that a participant will consent despite
      their better judgment?
63
But wait! What about deception?
  • Deception may involve:
    – Omission (leaving information out)
    – Commission (adding false information)
  • Deception is ethically acceptable only if there is no way to answer your
    research question without it
    – Deceiving an individual inherently contradicts the practice of informed
      consent
64
If deception is necessary…
  • It must be minimal
  • Subjects should not be deceived about any aspect of the study that would affect
    their willingness to participate
  • You must debrief participants afterward
    – = process of informing research participants as soon as possible about the
      purpose of the study
    – During debriefing, deception is revealed and harm is minimized
65
Constructs
  • Constructs are variables that are not directly measurable or cannot be directly
    observed
    – Often representative of behavioral tendencies (not always what is currently
      occurring)
    – May include internal concepts like thoughts and physiological responses
    – Often complex and multi-faceted
66
Concepts
  • Concepts describe what behaviors and processes compose the construct
    – Useful for depicting how the construct relates to other variables
    – May break the construct down into more measurable components (indicators)
    – Can vary by researcher, lab, or research question
67
Measurement Hierarchy
  • Constructs are defined at the highest level of organization
    – Emotional states (e.g., fear, love)
    – Abilities (e.g., IQ, memory)
    – Personality traits (e.g., neuroticism)
  • Concepts describe what behaviors and processes compose the construct
    – Smaller, more specific facets of an overarching idea
68
Indicators
  • Indicators denote an exclusive means of measuring a concept, relating to a
    construct
    – E.g., crying, depressed mood, negative self-view as indicators for depression
    – Variables are operationally defined at the indicator level
      - In other words, indicators are used to form operational definitions
    – Measurements can generally be categorized into:
      - Self-report
      - Behavioral
      - Physiological
    – Typically, multiple indicators are used to measure a single concept or
      construct
69
Converging Operations
  • Few unitary (single) indicators can completely capture a construct or concept
    – Usually present only a single facet
    – Use of a single measure may result in consistently biased responding if the
      properties of that measure are not sound
  • Converging operations: using multiple operational definitions of the same
    construct
    – Allows scientists to draw conclusions within and across studies
      - Within: using multiple measures of the same construct to examine the
        pattern of results
      - Across: examining results from correlational and experimental studies for
        similarities
70
Levels of Measurement
* Levels of measurement describe types of information that can be communicated by a set of scores
  – Nominal level
  – Ordinal level
  – Interval level
  – Ratio level
* Not directly related to types of variables
71
* The nominal level
* The nominal level of measurement is used for categorical variables and involves assigning scores that are category labels.
  – Are any of the individuals being measured the same or different in some defining way?
  – E.g., marital status; generation; ethnicity; favorite color
  – No inherent order or rank among responses
  – Differences between one response and another are generally “equal”
  – Acceptable measure of central tendency: mode (the only acceptable one at the nominal level)
  – Low level of usefulness for many things = lowest level of measurement
72
The ordinal level
* The ordinal level of measurement is used for categorical variables and includes additional information regarding the scores.
  – Implies order (e.g., finishing place in a race) or rank (e.g., finishing place in a multi-modal competition; level of satisfaction) among responses or measures
  – Differences along the scale may not be equivalent to one another
    • E.g., finishing time in a race; level of satisfaction
  – Can be used to compare variables having more or less of something
  – Acceptable measures of central tendency: median or mode
73
* The interval level
* The interval level involves assigning scores according to a numerical scale wherein the intervals are equivalent.
  – E.g., temperature scales; IQ
  – Does not have a true “zero” point (e.g., 0 degrees F)
74
* The ratio level
* The ratio level is similar, but the scales used have a “true zero” indicating an absence of the measured quality.
  – E.g., height; weight; count; exam score
  – Acceptable measures of central tendency for either interval or ratio scales: mean, median, or mode
    • Generally makes these two measurement levels the most desirable
75
* The ratio level involves
* The ratio level involves assigning scores according to a numerical scale wherein the intervals are equivalent AND there is a “true zero”.
  – E.g., height; weight; count; exam score
  – Provides the most information of the four levels, partially because it includes much of the same information as the previous levels:
    • Category label (in numerical form)
    • Scores are inherently ordered
    • Any two intervals on the scale are equivalent
  – Because there is a “zero”, ratios along the scale can be compared
    • E.g., $2 : $10 :: $20 : $100
76
Reliability:
Reliability: the consistency of a measure
77
Validity:
Validity: the extent to which a given instrument or tool accurately measures what it’s supposed to measure
78
Types of Reliability
* Psychologists consider three types of reliability:
  – Consistency over time ⟶ test-retest reliability
  – Consistency across items ⟶ internal consistency
  – Consistency across different researchers ⟶ interrater reliability
79
Types of Reliability: Test-Retest
* Some constructs are expected to be consistent (or “stable”) over time
  – Intelligence
  – Personality traits
  – Anxiety disorders
  – Personality disorders
  – Mood disorders
  – Emotions
* Measures of stable constructs should have high test-retest reliability
  – Someone’s score at time 1 should be highly correlated with their score at time 2
  – A test-retest correlation of ≥ .80 is generally considered good reliability
  – Personality measures like the HEXACO, Big Five, and MMPI all have high test-retest reliability
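A test-retest correlation is just a Pearson correlation between the time-1 and time-2 scores. A minimal sketch in Python, using invented self-esteem scores for five students (not real data):

```python
# Hypothetical scores for five students, measured a week apart
time1 = [22, 25, 18, 30, 27]
time2 = [21, 26, 19, 29, 28]

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(time1, time2)
print(round(r, 2))  # → 0.97
print(r >= 0.80)    # True: ≥ .80 is generally considered good reliability
```

The same calculation underlies the Rosenberg Self-Esteem example on the next card.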
80
Test-retest correlation
Test-retest correlation between two sets of scores of several college students on the Rosenberg Self-Esteem Scale, given two times a week apart
81
Types of Reliability: Internal Consistency
* Internal consistency: consistency of people’s responses across the items on a multiple-item measure
  – In theory, all the items on a unitary measure reflect the same underlying construct…
  – …so people’s scores on those items should be correlated with each other!
* Examples
  – HEXACO (Agreeableness)
    • I rarely hold a grudge, even against people who have badly wronged me.
    • My attitude toward people who have treated me badly is “forgive and forget”.
  – Rosenberg Self-Esteem Scale
    • I feel that I have a number of good qualities.
    • I feel that I’m a person of worth, at least on an equal plane with others.
* Note: consistent responding (on a measure with known internal consistency)
* Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data
* Different approaches to calculating internal consistency
  – Split-half correlation: split items into two sets (e.g., first and second half, even- and odd-numbered), compute a score for each set of items, and calculate the correlation coefficient between the two sets
  – Cronbach’s α: mean of all possible split-half correlations for a set of items
    • Most common measure of internal consistency used by researchers in psychology
    • Example: α for the Emotionality scale is good (.83), but α for the Greed subscale is not (.33)
* Again, a correlation of +.80 or greater is generally considered good internal consistency
* (Figure: split-half correlation between several college students’ scores on even-numbered and odd-numbered items of the Rosenberg Self-Esteem Scale)
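In practice, Cronbach’s α is usually computed from item and total-score variances rather than by literally averaging every split-half correlation. A sketch with invented ratings (four items, six hypothetical respondents):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each a list of respondents' scores.

    Standard variance formula:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(scores):
        m = sum(scores) / len(scores)
        return sum((s - m) ** 2 for s in scores) / (len(scores) - 1)

    # Each respondent's total score across all items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 1-5 ratings: respondents answer the four items similarly
items = [
    [4, 5, 3, 5, 2, 4],
    [4, 4, 3, 5, 2, 5],
    [5, 5, 3, 4, 2, 4],
    [4, 5, 2, 5, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.93, above the .80 rule of thumb
```

Because the six respondents answer all four items in roughly the same way, α comes out high; scrambling one item’s scores would drive it down.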
82
Internal Consistency & Factor Analysis
* If your measure has low internal consistency, first consider whether your items are too ambiguous
* But wait—what if a survey measures multiple concepts, like a personality inventory?
  – Internal consistency across ALL items may not be good…
  – …but maybe that makes sense!
    • E.g., items about extraversion are not measuring the same thing as items about neuroticism
* If your items are clear but your internal consistency is low, consider running a factor analysis
  – Factor analysis = a special kind of statistics used to group similar items together
  – That’s how measures end up with scales (e.g., HEXACO, CAMI)
83
Types of Reliability: Interrater Reliability
* RECALL: data collection techniques like behavioral coding may have problems with interrater reliability
  – Interrater reliability = the extent to which different observers are consistent in their judgments
  – Often assessed using:
    • Cronbach’s α when judgments are continuous (quantitative)
    • Cohen’s κ when judgments are categorical (qualitative)
* Observations that use qualitative coding (putting observations into discrete categories) are at particular risk of low interrater reliability
* Researchers conduct reliability training to maximize interrater reliability
  – Coders practice on training sets consistent with the principal investigator’s (PI’s) conceptualization of a concept
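Cohen’s κ corrects raw percent agreement for the agreement two raters would reach by chance alone. A minimal sketch, with invented behavior codes from two hypothetical observers:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments.

    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected is chance agreement based on each rater's category rates.
    """
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Proportion of trials where the raters gave the same code
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, given each rater's base rates
    p_exp = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes for eight observed behaviors
obs_a = ["agg", "pro", "neu", "agg", "pro", "neu", "agg", "pro"]
obs_b = ["agg", "pro", "neu", "agg", "neu", "neu", "agg", "pro"]
print(round(cohens_kappa(obs_a, obs_b), 2))  # → 0.81
```

Here the raters agree on 7 of 8 codes (87.5%), but κ is lower (.81) because some of that agreement would occur by chance.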
84
Validity
* Different forms of validity depending on what aspect of a construct you want to assess:
  – Subjective relevance ⟶ face validity
  – Comprehensiveness ⟶ content validity
  – Correlation to other variables ⟶ criterion validity (more on this later)
  – Lack of similarity to irrelevant variables ⟶ discriminant validity
85
Types of Validity: Face Validity
* Face validity: the extent to which a measurement method appears “on its face” to measure the construct of interest
  – E.g., a questionnaire relating to depression should have questions that ask about mood and sadness
  – This is a “subjective” validity, usually accomplished by expert ratings
* Face validity is very weak evidence that a measurement method is measuring what it is supposed to
  – Based on people’s intuitions about human behavior (frequently wrong)
  – Sometimes we don’t WANT face validity
    • E.g., Psychopathic Personality Inventory (PPI; Lilienfeld & Andrews, 1996)
86
Content Validity
* Content validity: the extent to which a measure “covers” the construct of interest
* Example:
  – If test anxiety = sympathetic nervous system activation (leading to nervous feelings) + negative thoughts, then the measure should include items about both
87
Criterion Validity
* Criterion validity: the extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with
  – E.g., test anxiety scores with exam performance (negative) and blood pressure during an exam (positive)
* When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity
* When the criterion is measured at some point in the future (after the construct has been measured), criterion validity is referred to as predictive validity
88
Convergent & Discriminant Validity
* When we deal with predictive/concurrent validity, we’re usually talking about constructs that are different from (but related to) the one we’re measuring
* Criteria can also include other measures of the same construct
  – Convergent validity: how well does a measure relate to other measures of the same construct?
    • E.g., correlation between the Beck Depression Inventory and the Patient Health Questionnaire-9
* Also important: discriminant validity
  – Discriminant validity = the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct
89
Why learn about sampling methods?
* Suppose you read in a magazine that 98% of women who responded to a survey reported being dissatisfied with their marriage and 75% reported having affairs.
  – Do you buy this? What questions do you have?
* One of the first questions you should ask when evaluating the quality of “science” is: “Where is the researcher getting their information?”
  – Luckily, this is usually a pretty easy question to answer, even if you get the information from a popular media article
90
Populations
* Research questions are usually asked about large groups
  – What motivates men to exhibit parental investment?
  – What experiences do bisexual individuals have within the LGBTQ+ community?
  – How can we support career development among young adult immigrants who are undocumented?
* Population: all members of an identified group that a researcher is interested in
  – Research questions are asked at the population level…
  – …but is it possible to collect data on every member of a population? Usually, no
91
Population ⟶ Sample
* To feasibly conduct a study, a researcher has to select a sample from their target population
  – Sample = a subset of a population chosen to be part of a study
* Size of sample depends on various factors (e.g., funding, research design)
  – Systematic observation often has smaller samples than survey studies
* Regardless of size, the goal of sampling is to choose individuals who will represent the entire population!
* Sampling error: any difference in the observations between the sample and the population
  – All research studies have some amount of sampling error
  – One way to reduce sampling error is to use a sampling method that yields the most representative sample
92
Sampling Methods: Convenience Sample
* So—how do we pick who is included in our sample? We could just grab whoever is closest.
* Convenience sample: participants are selected because of their convenient accessibility and/or proximity to the researcher
* Convenience samples are okay if demographics are minimally influential in the research design (e.g., perception, memory)
* Benefit: allows researchers to obtain a sample more easily when choosing at random from the population is not possible or necessary
* Drawbacks: increases the amount of sampling error, which…
  – Lowers internal validity
    • Certain participant characteristics may be confounding variables
  – Lowers external validity
    • Less likely to be a good representation of the population
    • More difficult to generalize the results of the study to the population
93
Sampling Methods: Probability Samples
* In contrast, probability samples can reduce the amount of sampling error that exists in a study
  – Probability sample = individuals are chosen for the sample based on a specific probability
  – Important to use a probability sample when sampling error is likely to be large
* Three main types of probability samples:
  – Simple random samples
  – Cluster samples
  – Stratified random samples
94
Sampling Methods: Simple Random Sample
* Simple random sample: uses some procedure to ensure all members of the population have an equal chance to be selected
* Benefit: truly random, so it theoretically should be representative
* Drawbacks:
  – Often difficult to obtain
  – Random isn’t always representative ⟶ samples can be uneven just by chance alone!
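As a quick sketch, Python’s `random.sample` implements exactly this idea: every member of the population has an equal chance of selection, and no one is chosen twice. The population of student IDs below is hypothetical:

```python
import random

# Hypothetical population: 1,000 student ID numbers
population = list(range(1, 1001))

random.seed(42)  # fixed seed only to make this sketch reproducible
sample = random.sample(population, 50)  # without replacement; equal chance for all

print(len(sample))       # 50 participants
print(len(set(sample)))  # 50 -- no member selected twice
```

Running this repeatedly without the fixed seed shows the drawback noted above: each draw is random, so any single sample can still be uneven by chance.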
95
Sampling Methods: Cluster Sample
* Cluster sample:
  – Clusters of individuals are identified
  – A subset of the clusters is chosen to sample from
  – Individuals are randomly selected within each cluster
* Benefits:
  – Probability sample (vs. convenience)
  – Easier than a simple random sample
* Drawback: may still over- or underrepresent a part of the population
96
Sampling Methods: Stratified Random Sample
* Stratified random sample (a.k.a. representative sample): participants are chosen more carefully to accurately reflect similar characteristics to the targeted population
  – The smaller sub-groups are called strata
* Benefit: definitely representative of controlled characteristics
* Drawbacks:
  – Difficult
  – May not represent the population in terms of characteristics other than those that were intentionally selected
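A common way to build a stratified random sample is to draw the same fraction from each stratum, so the sample mirrors the population’s composition on that characteristic. A sketch with a hypothetical student population stratified by class year (names and sizes are invented):

```python
import random

def stratified_sample(strata, fraction):
    """Draw the same fraction at random from each stratum so the sample
    mirrors the population's composition on the stratifying characteristic."""
    sample = []
    for name, members in strata.items():
        k = round(len(members) * fraction)
        sample.extend(random.sample(members, k))  # random within each stratum
    return sample

# Hypothetical population of 500 students, stratified by class year
strata = {
    "freshman":  [f"fr{i}" for i in range(200)],
    "sophomore": [f"so{i}" for i in range(150)],
    "junior":    [f"ju{i}" for i in range(100)],
    "senior":    [f"se{i}" for i in range(50)],
}

random.seed(1)
sample = stratified_sample(strata, 0.10)
print(len(sample))  # → 50 (20 + 15 + 10 + 5)
```

The sample is guaranteed to match the population on class year, but, as the drawback above notes, nothing forces it to be representative on any other characteristic.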
97
Data Analysis
* Once a study’s data has been collected, it must be analyzed before we can draw any conclusions
  – An entire unit of this class will be dedicated to conducting analysis, but today we’ll overview the broad concepts
* Two main kinds of statistics:
  – Descriptive statistics: describe different aspects of data, which may be used to organize the information
  – Inferential statistics: infer and draw conclusions about a population based on a sample
98
Descriptive Statistics: Key Terms
* Descriptive statistics: describe different aspects of data, which may be used to organize the information
* Percentages
* Measures of central tendency: describe the typical, average, and center of a distribution of scores
  – Mode: most frequently occurring score in a distribution
  – Median: midpoint of a distribution of scores
  – Mean (M or µ): average of a distribution of scores
* Measures of dispersion: describe the degree of spread in a set of scores
  – Range: distance between the highest and lowest scores in a distribution
  – Standard deviation (SD or σ): average distance of scores from the mean
  – Variance: SD squared
* Correlation coefficients: strength and direction of the relationship between two variables
  – Can also be used as an inferential statistic!
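All of these descriptives are one-liners with Python’s standard-library `statistics` module. A sketch with a made-up set of exam scores:

```python
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]  # hypothetical exam scores

print(statistics.mode(scores))            # → 75  (most frequent score)
print(statistics.median(scores))          # → 80  (midpoint of the distribution)
print(round(statistics.mean(scores), 2))  # → 81.43  (average)
print(max(scores) - min(scores))          # → 25  (range: highest minus lowest)

sd = statistics.stdev(scores)             # sample standard deviation
# Variance is SD squared
print(round(sd ** 2, 2) == round(statistics.variance(scores), 2))  # → True
```

Note that the mean, median, and mode differ here; which measures are acceptable depends on the level of measurement covered in cards 71-76.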
99
Inferential Statistics: Key Terms
* Inferential statistics: infer and draw conclusions about a population based on a sample
* Inferential statistics are used to determine whether effects are statistically significant
  – Statistically significant = the effect is unlikely due to random chance, and therefore likely represents a real effect in the population
  – The percent chance that results are due to random error is called a p-value
* Researchers set a threshold for how much chance they’re willing to take that their results are due to random error
  – This threshold is called α (alpha)
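One intuitive way to see what a p-value is: a permutation test asks how often random chance alone (shuffling the group labels) would produce a group difference at least as large as the one observed. A sketch with invented scores for two hypothetical groups:

```python
import random

def permutation_p_value(group_a, group_b, n_perms=2000):
    """Two-sided permutation test on the difference in group means.

    The p-value is the proportion of random relabelings that produce a
    mean difference at least as large as the observed one.
    """
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_perms):
        random.shuffle(pooled)                 # randomly relabel everyone
        a = pooled[:len(group_a)]
        b = pooled[len(group_a):]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            count += 1
    return count / n_perms

random.seed(0)
control   = [3, 4, 2, 5, 3, 4, 3, 2]  # hypothetical scores
treatment = [7, 8, 6, 9, 7, 8, 6, 7]

p = permutation_p_value(control, treatment)
alpha = 0.05                 # conventional significance threshold
print(p < alpha)  # → True: shuffling almost never reproduces this difference
```

Because p falls below the α threshold the researchers set in advance, they would reject the null hypothesis (Rule #1 on the next card).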
100
Drawing Conclusions: Rule #1
* Rule #1: Assume there is no effect or relationship until evidence suggests otherwise
* Research begins with two competing hypotheses…
  – Null hypothesis (H0) = there is NO effect or relationship of the thing you’re studying in the population
    • Basically, the assumed state of the world (“null is dull”)
    • E.g., there are NOT racial differences in scores on the Implicit Association Task (IAT); OR: there is NO relationship between race and IAT score
  – Research hypothesis (H1) (a.k.a. alternative hypothesis) = there IS an effect or relationship […]
    • Basically, whatever you think will happen
    • When we talk about “forming a hypothesis,” this is the one we mean
    • E.g., there are racial differences in scores on the Implicit Association Task (IAT); OR: there is a relationship between race and IAT score
101
Drawing Conclusions: Rule #2
* Rule #2: Scientists cannot prove hypotheses or theories
* Remember, science is all about probability, and there is always a chance of error
  – However, hypotheses/theories can gain various levels of support
* Example:
  – DON’T say: “Our results prove that there are racial differences in IAT scores”
  – DO say: “Our results support our hypothesis that there are racial differences in IAT scores”
* If multiple studies find the same thing, the level of support for a hypothesis/theory increases
102
Drawing Conclusions: Rule #3
* Rule #3: Because nothing can be proven, conclusions are never final
* RECALL: science is a process, not a product
  – Research is conducted across settings and varying levels of experimental control to update probabilities regarding a phenomenon
* Example:
  – “There are racial differences on IAT scores in an American sample…
  – …but would this be the case in the Netherlands?”
  – (Maybe culture matters as much as race in terms of implicit bias)