lecture 24 (reasoning) Flashcards
(16 cards)
interpretation bias?
- bias towards interpretations that favour a researcher’s theory, both when getting significant results and when not
- bias in identifying which part of a test is at fault (the theory or the auxiliary/methodological assumptions)
- increases error rates
- e.g.: “as measure A is apparently a better measure of attention in this context, and B is less useful, there is some support for enclothed cognition”
two different frameworks of beliefs behind research findings in psychology?
- method-relevant beliefs (whatever you consider to be true about proper measurement)
- theory-relevant beliefs (whatever you consider to be true about psychology/human behaviour)
centrality and being peripheral?
- centrality of a belief: a belief on which many other beliefs depend; a peripheral belief is the opposite of a central one
- a problem in psychology is that method-relevant beliefs are too peripheral and theory-relevant beliefs are too central (meaning that theories are often indistinguishable from very general assumptions about human behaviour)
- central theory-relevant beliefs + method-relevant beliefs that are highly peripheral/easy to discard = higher likelihood of interpretation bias
conservatism?
- the more central method-relevant beliefs are, the more we are forced to be conservative in the interpretation of a study
- Preference for the interpretation that keeps established knowledge structures intact as much as possible
- this constrains the field of alternative explanations and so makes empirical tests more diagnostic
fundamental problems in modal research practice (MRP)?
- overemphasis on conceptual replication
- NHST implementation problems
- lack of attention to verifying the integrity of measurement instruments and experiments
overemphasis on conceptual replications?
- weakens method-relevant beliefs and fuels the appearance of constant theoretical advancement in MRP, even though the underlying results may be faulty
- leaves it ambiguous whether the results are due to theory-related or methodology-related factors
- failures to produce significant results are treated as failed pilot studies and end up in the file drawer, which inflates the overall Type I error rate and furthers publication bias (see the simulation sketch below)
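A minimal simulation sketch (not from the lecture; all numbers made up): if each team quietly runs several pilot variants of the same study and only writes up a significant one, the false positive rate among published results far exceeds the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_teams = 10_000     # hypothetical research teams
max_pilots = 4       # each team quietly tries up to 4 "pilot" variants
n_per_group = 30
alpha = 0.05

published_significant = 0
for _ in range(n_teams):
    for _ in range(max_pilots):
        # The null hypothesis is true: both groups come from the same distribution
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(0, 1, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            published_significant += 1
            break  # the non-significant pilots stay in the file drawer

print(f"effective Type I error rate: {published_significant / n_teams:.3f}")
# roughly 1 - (1 - 0.05)**4 ≈ 0.19, far above the nominal 0.05
```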
Problematic implementation of NHST?
- all the problems also listed in lecture 23 (false dilemma/straw man fallacy…)
- encourages one-sided views on theories instead of more nuanced ones
Not verifying the integrity of measurement instruments?
- not including reliability measures, not doing manipulation checks, or not replicating known effects
- weakens method-relevant beliefs
- Psychological processes are context-sensitive, which makes validation of psychological measurement very difficult
Surveys?
- often treated as a relatively direct, observable measurement method (you simply ask people)
- inherently subjective: interpretation and phrasing might differ between respondents
- survey measures are context dependent: social desirability bias, order of the questions, vagueness and ambiguity of language, etc
- young children’s self-reports, especially with parental help, may be inaccurate due to limited understanding and parental influence on responses
- the reported reliability and validity of surveys often reflect not how well the questions measure the construct, but how well the questions within a survey correlate with each other (see the sketch below)
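A minimal sketch (not from the lecture; responses made up) of one common internal-consistency index, Cronbach's alpha: a high value only tells you the items correlate with each other, not that they measure the intended construct.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a set of survey items.

    items: 2-D array with shape (n_respondents, n_items).
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up Likert-type responses: 5 respondents x 4 items
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
# A high alpha only shows the items hang together,
# not that they capture the intended construct
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```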
conflation?
- assuming you are measuring two different things that merely correlate, when you are actually measuring the same construct or overlapping constructs
- e.g. children who are better at counting are better at reporting their physical activity levels, because reporting activity itself involves counting
Implicit Association Test (IAT)?
- designed to get around the subjectivity and social desirability bias of surveys
- IAT scores reflect differences in average reaction times between ‘congruent’ and ‘incongruent’ pairing blocks (see the sketch below)
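A simplified sketch of that scoring idea (not the official Greenwald et al. D-scoring algorithm; reaction times made up): the score is the difference in mean reaction time between incongruent and congruent blocks, optionally scaled by the pooled variability.

```python
import numpy as np

# Made-up reaction times (ms) for one hypothetical participant
congruent_rts = np.array([612, 580, 640, 595, 630, 601])     # compatible pairings
incongruent_rts = np.array([715, 690, 742, 705, 731, 699])   # incompatible pairings

# Core idea: slower responses on incongruent trials -> larger score
mean_diff = incongruent_rts.mean() - congruent_rts.mean()

# D-style variant: scale the difference by the pooled standard deviation
pooled_sd = np.concatenate([congruent_rts, incongruent_rts]).std(ddof=1)
d_score = mean_diff / pooled_sd

print(f"mean RT difference: {mean_diff:.0f} ms, D-style score: {d_score:.2f}")
```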
validity and reliability of IAT?
- predictive validity: IAT scores are a weak-to-moderate predictor of discriminatory behaviour
- construct validity: lack of consistency with explicit measures of prejudice -> if the IAT measures implicit attitudes, why do most men not exhibit “implicit sexism”?
- test-retest reliability: IAT scores turn out to be a poor predictor of future scores by the same individuals on the same test (see the sketch below)
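A minimal sketch (scores made up) of how test-retest reliability is usually quantified: the correlation between the same people's scores on two occasions; a low correlation means today's score says little about next month's.

```python
import numpy as np

# Made-up IAT-style scores for 8 hypothetical participants, measured twice
session_1 = np.array([0.45, 0.10, 0.80, 0.35, -0.05, 0.60, 0.25, 0.50])
session_2 = np.array([0.20, 0.55, 0.40, 0.05, 0.65, 0.30, 0.70, 0.15])

# Test-retest reliability is usually quantified as the correlation
# between the same individuals' scores on the two occasions
r = np.corrcoef(session_1, session_2)[0, 1]
print(f"test-retest correlation: r = {r:.2f}")  # low r -> unstable scores
```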
issue of arbitrary metrics?
- the IAT assumes that response times in milliseconds directly reflect implicit biases, without linking scores to meaningful behaviours
- meter reading: assuming that raw scores on a psychological metric represent a direct position on a psychological dimension (does 0 actually equal neutrality?)
- to give people's scores a diagnostic interpretation you need to look beyond a mere positive correlation with behaviour -> you need to locate the point of behavioural neutrality (which in the IAT context corresponds to 0.5)
- norming: transforming raw scores into standardized scores or percentiles, which still may not reflect real-world meaning (see the sketch below)
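A minimal sketch (scores made up) of norming: converting raw scores to z-scores and percentile ranks locates a person relative to the sample, which is still an arbitrary metric unless it is anchored to real-world behaviour.

```python
import numpy as np
from scipy import stats

# Made-up raw scores on some psychological measure
raw_scores = np.array([12, 15, 9, 20, 14, 17, 11, 16, 13, 18])

# Standardized (z) scores: position relative to the sample mean
z_scores = (raw_scores - raw_scores.mean()) / raw_scores.std(ddof=1)

# Percentile ranks: position relative to the other respondents
percentile_ranks = stats.rankdata(raw_scores) / len(raw_scores) * 100

# A z-score of 0 or the 50th percentile only means "average for this sample";
# it says nothing about where behavioural neutrality actually lies
print(np.round(z_scores, 2))
print(percentile_ranks)
```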
big takeaway about implicit measures?
- Implicit measures (like the IAT) do not necessarily reflect our ‘true’ preferences
- measuring blood oxygenation (as in fMRI) isn't, strictly speaking, a more ‘direct’ measurement of psychological states than reaction-time measures or even survey questions
strategies for improving MRP?
- Direct replications should be emphasized more than conceptual replications (this strengthens the underlying methodology)
- integrity of manipulations and measurements should be verified
- Null hypotheses should not be framed as no difference but should indicate a direction so that stronger claims can be made based on the results
what can we conclude if we compare fMRI measurements in research into certain psychological states with measurements based on self-report questions about the same psychological states?
- Both types of measurement of the same psychological state can only be valid if they are correlated