Chapter 2 Flashcards
(38 cards)
What is the difference between intuitive and analytical thinking?
Intuitive thinking: quick and reflexive, requires little mental effort; first impressions are at times surprisingly accurate
Analytical thinking: slow and reflective, takes mental effort
What is random selection and what are the factors that define it?
Random selection: The Key to Generalizability
Identify a representative sample of the population and administer a survey
- Every person in the population has an equal chance of being chosen to participate
- Crucial if we want to generalize our results to broader population
- A small random sample tends to be more accurate than a large non-random sample
What are the two main factors that determine our evaluating measures?
Evaluating measures: when we measure our dependent variable, we need to ask whether the measure is reliable and valid
Reliability: consistency of measurements
A reliable questionnaire should yield similar scores over time (test-retest reliability)
- Interrater reliability: extent to which different people who conduct an interview agree on the characteristics they’re measuring (e.g., if one interviewer diagnoses schizophrenia while another diagnoses depression, the measure isn’t reliable)
Validity: extent to which a measure assesses what it claims to measure
What is the relationship between reliability and validity?
→ Reliability is necessary for validity (we need to measure something consistently before we can measure it well), but reliability doesn’t guarantee validity
What is the difference between replicability and reproducibility?
Replicability: ability to duplicate the original findings consistently
Reproducibility: ability to review and reanalyze the data from a study and find exactly the same results (repeating the same statistical analysis on already collected data)
How do we face the replicability crisis? (hint:5)
- Share research materials and datasets in publicly accessible research archives, inviting others to reanalyze their data
- Conduct replications of their own and others’ work
- Preregistration: publicly posting hypotheses, research designs, and plans for analyzing and reporting results prior to data collection
- Encourage editors to publish all research that’s been carefully conducted, regardless of whether it supports a theory
- Place less emphasis on findings of single studies
What is the difference between external and internal validity?
EXTERNAL VALIDITY (extent to which we can generalize our findings to real-world settings)
INTERNAL VALIDITY (extent to which we can draw cause-and-effect inferences)
What is the difference between self-report measures and surveys? What are some advantages and disadvantages of these measuring tools?
Self-report measures: ask people about themselves directly (questionnaires to assess personality traits, mental illnesses, and interests)
Surveys: measure people’s opinions and attitudes
What are rating tools and some of its disadvantages?
→ An alternative to asking people about themselves is asking others who know them well to rate them
What is a variable and what does correlation mean?
Variable: anything that can be measured and varies across individuals
Correlation: when 2 things relate to each other statistically
What are the 3 types of correlations?
Positive: as the value of one variable changes, the other does as well (up:up or down:down)
Zero: variables don’t go together at all; one variable changes, the other won’t
Negative: as the value of one variable changes, the other goes in the opposite direction (up:down or down:up)
What are the correlation coefficients in a correlational design?
-1: perfect negative correlation (all dots, each representing an individual in the study, fall exactly on the line)
+1: perfect positive correlation (all dots fall exactly on the line)
Between -1 and +1: less-than-perfect correlation (some individuals don’t follow the trend; to find out how strong a correlation is, look at the absolute value of the coefficient)
The strength of the correlation (how close the absolute value is to 1) tells us how effectively one variable predicts the other
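A rough Python sketch (with made-up data; statistics.correlation requires Python 3.10+) of how the sign and the absolute value of the coefficient behave:

```python
from statistics import correlation  # Pearson correlation coefficient (r)

# Hypothetical data for 5 students
hours  = [1, 2, 3, 4, 5]        # hours studied
score  = [52, 60, 71, 80, 88]   # rises with hours    -> r near +1
stress = [9, 8, 6, 4, 2]        # falls as hours rise -> r near -1
shoe   = [7, 10, 8, 9, 7]       # unrelated           -> r near 0

for name, values in [("exam score", score), ("stress", stress), ("shoe size", shoe)]:
    r = correlation(hours, values)
    # The sign gives the direction; the absolute value gives the strength
    print(f"hours vs {name}: r = {r:+.2f}, strength = {abs(r):.2f}")
```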
What is an illusory correlation?
Illusory correlation: perception of a statistical association between 2 variables where none exists
→ We tend to pay too much attention to instances that fit what we expect to see (confirmation bias kicks in), and we tend to remember the instances that are most dramatic (they come easily to mind)
→ We aren’t good at remembering nonevents
What are the components of an experimental design?
Random assignment of participants to conditions and manipulation of an independent variable
What are the 2 designs that an experimental design can adopt?
Experimenter randomly sorts participants into one of two groups (canceling out pre-existing differences)
- Between-subjects design: one group will be randomly assigned to receive some level of the independent variable, while another will be assigned to the control condition
- Within-subjects design: researcher takes a measurement before the independent variable manipulation and measures the same participants again after the manipulation
What are the 2 groups into which the researcher sorts the participants into?
Experimental group: group that receives manipulation (take new drug)
Control group: group that doesn’t receive manipulation (no drug - sugar pill - placebo)
What is the difference between random selection and random assignment?
Random selection deals with how we initially choose our participants, whereas random assignment is how we assign our participants after we’ve already chosen them
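A minimal Python sketch (with a hypothetical population) of the distinction:

```python
import random

population = [f"person_{i}" for i in range(1000)]  # hypothetical population

# Random selection: every member of the population has an equal
# chance of being chosen to participate in the study.
sample = random.sample(population, k=50)

# Random assignment: the already-chosen participants are then sorted
# by chance into the experimental and control groups.
random.shuffle(sample)
experimental_group = sample[:25]
control_group = sample[25:]
```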
What is the difference between independent and dependent variable?
Independent variable: variable the experimenter manipulates
Dependent variable: variable the experimenter measures to see whether the manipulation has had an effect
What is a placebo effect?
Improvement resulting from the mere expectation of improvement (patients may improve because they believe they’re receiving treatment)
3 neurotransmitter systems involved in the placebo effect: cannabinoids, dopamine, and opioids
What is the nocebo effect?
Harm resulting from expectation of harm
What is the experimenter expectancy effect?
Researchers’ hypotheses lead them to unintentionally bias the outcome of a study (researchers end up falling prey to confirmation bias by seeming to find evidence for their hypotheses even when they’re wrong)
→ Corrected by adopting a double-blind design (neither the researchers nor the participants know who’s in the experimental or control group)
→ People can (without knowing it) give off cues that affect a subject’s behaviour (the math horse and bright rat examples)
What are demand characteristics?
Participants can pick up cues from an experiment that allow them to generate guesses regarding the experimenter’s hypotheses
→ When participants think they know how the experimenter wants them to act, they may alter their behaviour accordingly
→ Corrected by researchers disguising the purpose of the study with a cover story that differs from the investigator’s actual purpose, or by using distractor tasks or filler items
What is a confound?
Any variable other than the IV that differs between the experimental and control groups
→ In order for an experiment to possess internal validity, the independent variable must be the only difference between the experimental and control groups (otherwise we can’t know if the IV exerted an effect on the DV)
What are the 3 main ethical guidelines a researcher must follow when performing human research?
When is deception justified?
Deception is justified only when:
- Researchers couldn’t have performed the study without the deception
- The use of deception or withholding the hypothesis does not negatively affect the rights of the participant
- The research does not involve medical or therapeutic intervention