Thinking and reasoning

PSY1207 Cognition, emotion and development > Thinking and reasoning > Flashcards

Flashcards in Thinking and reasoning Deck (22)

what does thinking include?

deductive reasoning:
solving logical/mathematical problems that have right answers (from premises, generate a valid conclusion)
or evaluating the validity of a given conclusion

inductive reasoning (predicting the future from past data):
statistical generalisations, probabilistic judgements, predictions
hypothesis-testing, rule induction

problem solving: working out how to get from state A to state B (numerous solutions, varying degrees of constraint)

judgement and decision-making: choosing among options

creative thinking, daydreaming, imagining, etc.


what is the research on thinking?

focused primarily on cases where
there is a right answer, and/or
a way of evaluating the rationality of an answer, and/or
a way of assessing the efficiency with which one gets there

as well as asking
how do people think (what are the processes and representations)?

strong emphasis on human imperfection:
why are people apparently irrational in their thinking?
what limits the efficiency of thinking (relative to an ideal thinker)?

practical motivation for this focus:
the practical importance of the fallibility of medical, legal, military, etc. decision-making, and
possible remediation with training and/or IT support
allowing for it in system design, and in attempts to change behaviour


what is the general dual-process theory of reasoning, problem-solving and decision making?

we use 2 kinds of process:

‘system 2’ – slow, clunky, sequential, effortful – but rational, logical, general-purpose – the conscious reasoning system
constrained by limited WM capacity and other basic limitations of the cognitive machinery

‘system 1’ – intuitive, automatic, largely unconscious, fast-and-frugal, quick-and-dirty, approximate – but domain-specific – procedures, schemas, rules of thumb/heuristics, that
are adaptive and mostly effective when applied in the appropriate domain, but
are only approximate – with some built-in biases
may lead to error if applied to an inappropriate domain


judgements of probability/frequency

some facts about frequency are told to us or can be looked up:
e.g. lifetime morbid risk of schizophrenia = 0.7%

but many judgements of probability/frequency we make are based on experience, e.g.
will it rain today?
if I get a train to Paddington, how likely is it to be late?


availability in memory

availability heuristic: judge as more probable/frequent those events/objects of which more examples are readily ‘available’ – in memory or in the environment (Tversky and Kahneman, 1973)

this works because it is generally easier to retrieve from memory examples of events/objects that are more frequent

unfortunately, retrievability is also determined by other factors:
e.g. recency, salience, similarity to the current state

hence we tend to over-estimate the probability of events of which we know easily retrievable examples – e.g. because they are recent, personally salient or similar to the present instance – the availability bias


examples of availability bias

the screening of ‘Jaws’ caused a drop in the number of people swimming off the Californian coast

drivers tend to slow down – for a while – after seeing an accident or a police car

people tend to
overestimate the risks of dying of rarer causes but underestimate the risks of dying of common causes (Slovic et al., 1980)
be unreasonably fearful about children being murdered in modern Britain, compared with the risk of their being run over


neglect of base rate and representativeness bias

when we evaluate particular cases:
we tend to ignore an important source of information:
knowledge of ‘base rates’

the overall frequencies of particular classes of event
if something/someone has features representative of being an X, we tend to think they have the standard properties of an X
this may be the basis for a best guess about a category member in the absence of other information, but it biases us to attribute prototypical properties even when we do have other information
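Base-rate neglect can be made concrete with Bayes' theorem. The figures below (a 1% base rate, 90% sensitivity, 9% false-positive rate) are purely hypothetical, chosen to show how a seemingly accurate test for a rare condition still yields mostly false positives:

```python
# Hypothetical illustration of base-rate neglect: a "90% accurate" test
# for a rare condition still produces mostly false positives.
base_rate = 0.01        # P(condition): assumed 1% of the population
sensitivity = 0.90      # P(positive | condition)
false_pos = 0.09        # P(positive | no condition)

# Bayes' theorem: P(condition | positive) = P(pos|cond)P(cond) / P(pos)
p_positive = base_rate * sensitivity + (1 - base_rate) * false_pos
p_condition_given_pos = base_rate * sensitivity / p_positive

print(round(p_condition_given_pos, 3))  # 0.092 – under 10%, despite the "90% accurate" test
```

Ignoring the 1% base rate and answering "about 90%" is exactly the representativeness error described above.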


representativeness bias and sequential events

we have difficulty ignoring the representativeness (or unusualness) of a sequence and focusing on what we know about the probabilities of the individual events
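A minimal illustration of why this is an error: for a fair coin, every specific sequence of six flips is equally probable, however "patterned" or "random" it looks:

```python
# Any specific sequence of six fair coin flips has the same probability,
# however representative of randomness it appears.
def sequence_probability(seq, p_heads=0.5):
    prob = 1.0
    for flip in seq:
        prob *= p_heads if flip == "H" else (1 - p_heads)
    return prob

print(sequence_probability("HHHHHH"))  # 0.015625
print(sequence_probability("HTHHTT"))  # 0.015625 – identical, despite looking "more random"
```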


functional fixedness in problem-solving

classic experiments of the Gestalt psychologists
e.g. Duncker (1945) asked subjects to find a way of supporting a lighted candle on a vertical wooden wall, given props including a box of drawing pins
these subjects were less successful than subjects given the same problem but with the drawing pins tipped out of their box


Conservatism and confirmation bias in inductive reasoning

in ordinary life and in scientific research, we try to come up with a rule/principle to describe the instances we have experienced, and test the hypothesised rule against further observations

Wason’s (1960) 2 4 6 experiment:
‘this sequence – 2, 4, 6 – was generated by a rule – you have to try to guess the rule, by trying out other sequences’
the participant then has to generate further sequences of 3 numbers, receiving feedback: ‘yes: fits the rule’/‘no: doesn’t fit’
and declare his/her hypotheses about what the rule is

participants tended to offer over-specific hypotheses, e.g. ‘the numbers increase by steps of 2’, and
(a) were reluctant to abandon their hypotheses (conservatism)
(b) tended to seek confirmatory rather than disconfirmatory evidence

scientists are just as prone to these biases
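A sketch of why confirmatory testing fails in the 2 4 6 task. Wason's actual rule was "any increasing sequence"; the code below (with illustrative test triples) shows that every sequence chosen to fit the over-specific "steps of 2" hypothesis also fits the true rule, so confirmatory tests can never expose the error:

```python
def true_rule(s):
    # Wason's actual rule: any strictly increasing triple
    return s[0] < s[1] < s[2]

def hypothesis(s):
    # A participant's over-specific guess: increases in steps of 2
    return s[1] - s[0] == 2 and s[2] - s[1] == 2

# Confirmatory tests (chosen to fit the hypothesis) all get 'yes' feedback,
# so they can never reveal that the hypothesis is too narrow.
confirmatory = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
print(all(true_rule(s) for s in confirmatory))  # True – hypothesis looks confirmed

# Only a test that violates the hypothesis is informative:
print(true_rule((1, 2, 3)))  # True – so 'steps of 2' must be wrong
```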


problem solving

problem-solving research studies situations where there is a start state and a goal state, and one has to get to the goal as quickly as possible, using a set of available operators and subject to certain constraints
missionaries and cannibals
Luchins’ water-jug problems
e.g. starting with a full 8-pint jug, an empty 5-pint jug and an empty 3-pint jug, end up with exactly 4 pints of water in the largest jug
Tower of Hanoi
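The jug problem above can be solved by exhaustive search of the problem space. A minimal breadth-first-search sketch, where states are the pint amounts in the 8-, 5- and 3-pint jugs and the operators are pours:

```python
from collections import deque

def solve_jugs(capacities=(8, 5, 3), start=(8, 0, 0), goal_amount=4):
    """Breadth-first search over the problem space of pouring moves."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state[0] == goal_amount:          # goal: 4 pints in the largest jug
            return path
        for i in range(3):                   # operator: pour jug i into jug j
            for j in range(3):
                if i == j or state[i] == 0:
                    continue
                amount = min(state[i], capacities[j] - state[j])
                if amount == 0:
                    continue
                nxt = list(state)
                nxt[i] -= amount
                nxt[j] += amount
                nxt = tuple(nxt)
                if nxt not in seen:          # avoid going around in circles
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
    return None

path = solve_jugs()
print(path)  # shortest sequence of states from (8, 0, 0) to 4 pints in the 8-pint jug
```

Unlike a human solver, the program can afford to enumerate the whole state space; the point of the sections below is that people cannot.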


the problem space (Newell and Simon, 1972)

if the problem is soluble, there is at least one path through the state space between the start and goal states

the problem-solver must search for operators that will:
move him/herself through intermediate states on a path approaching the goal
avoid the need for backing up from dead-ends or going around in circles
minimise path length

without knowing in advance what the optimal path is or what intermediate states will be traversed


WM capacity limits and heuristics in problem-solving

given a huge ‘workspace’ and unlimited time, one could exhaustively enumerate all possible legal ‘moves’ and pick the shortest path

but we don’t have the WM capacity for this: hence we must
recognise familiar patterns and retrieve previously effective moves from LTM
hunt for a way between the initial and goal states in small steps, using heuristics such as means-end analysis and don’t-repeat-a-move-if-possible

means-end analysis: pick a general means for reaching the goal: if that means is not yet available, create the sub-goal of achieving the means, until one has generated a sub-goal that can be satisfied by an available operator
requires maintenance of a ‘goal stack’ in WM
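A sketch of means-end analysis with an explicit goal stack, using the Tower of Hanoi: each goal that cannot be satisfied directly by an operator is decomposed into sub-goals pushed onto the stack, mirroring the goal stack a human solver must hold in WM:

```python
# Means-end analysis sketch for the Tower of Hanoi: to move n discs to the
# target peg, decompose the goal into sub-goals held on an explicit stack.
def hanoi(n, source="A", target="C", spare="B"):
    moves = []
    stack = [(n, source, target, spare)]       # the 'goal stack'
    while stack:
        discs, src, tgt, spr = stack.pop()
        if discs == 1:
            moves.append((src, tgt))           # available operator: move one disc
        else:
            # push sub-goals in reverse order, so they are satisfied first-to-last
            stack.append((discs - 1, spr, tgt, src))  # 3. move n-1 discs onto target
            stack.append((1, src, tgt, spr))          # 2. move the largest disc
            stack.append((discs - 1, src, spr, tgt))  # 1. clear n-1 discs to spare
    return moves

print(len(hanoi(3)))  # 7 moves – the minimum, 2**n - 1
```

The depth of the stack here corresponds to the number of pending sub-goals a person must hold in mind, which is where limited WM capacity bites.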


reprise: Design limitations intrinsic to the cognitive machinery

‘design limitations’ in cognitive capacities (properties of memory and retrieval, limited WM, difficulty in attending to relevant information, difficulty in shifting cognitive ‘set’, and the general effortfulness of sequential reasoning):
lead to reliance on heuristics (approximate rules of thumb)
result in intrinsic biases when we apply those heuristics


mental models and syllogistic reasoning

do we reason with a mental version of formal logic?

no! – given a set of premises, we imagine one or more possible concrete worlds in which the premises are true – mental models

we then generate a conclusion, or determine whether a conclusion offered is valid, by examining the mental model(s)

errors arise through:
failure to generate all possible mental models for the premises
lack of WM capacity for maintaining multiple models

if we construct only the first mental model, and find it matches the conclusion, we think the inference is valid

but it isn’t: a second model also describes a state of affairs consistent with the premises, in which the conclusion is false


reprise: Abstract vs. concrete reasoning

we can and do imagine concrete scenarios to assist reasoning (mental models)

our ability to do this is limited by the representational capacity of WM

we may fail to consider all possible scenarios


the trouble with IF-THEN (conditional propositions)

Wason’s 4-card problem:

all cards have a letter on one side and a number on the other
which 2 cards have to be turned over to check whether the following rule is true:
‘if a card has a vowel on one side, then it has an odd number on the other side’ – cards: A, B, 1, 2
student subjects typically choose A, but also 1 rather than 2


why are the A and 2 cards the right answer?

if P (vowel) then Q (odd)

truth table:
P, Q: consistent with the rule
P, not-Q: inconsistent with the rule
not-P, Q: consistent with the rule
not-P, not-Q: consistent with the rule

for the logical ‘if P then Q’, the only combination inconsistent with the rule is P and not-Q

so we should check the P (A) and not-Q (2) cards

the logical ‘if P then Q’ says nothing about what is the case ‘if Q’

so there is no point checking the Q (1) card
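The truth-table argument can be checked mechanically. A minimal sketch enumerating the material conditional (if P then Q ≡ ¬P ∨ Q):

```python
from itertools import product

# Enumerate the truth table of 'if P then Q': the only combination
# inconsistent with the rule is P true and Q false.
inconsistent = [(p, q) for p, q in product([True, False], repeat=2)
                if not ((not p) or q)]        # material conditional: not-P or Q
print(inconsistent)  # [(True, False)] – so check the P card and the not-Q card
```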


so are people just illogical?

no – changing the content and context of such problems, without changing their formal structure, leads to dramatic improvements in performance

e.g. Griggs and Cox (1982) – imagine a police officer observing drinkers in a bar – for which 2 kinds of drinker do you need to check age/drink, to detect transgression of the rule:
‘if a person is drinking beer, then he/she must be over 18’ – drinking beer, drinking coke, 22 years old, 16 years old

75% correct with this version, versus typically 10-20% with the abstract version

Cheng and Holyoak (1985) used a formally identical problem involving a form with the words transit/entering on one side and a list of diseases on the other – subjects had to check for observance of the rule:
‘if the form has entering on one side, then the other side includes cholera among the list of diseases’

half the subjects were given a rationale for the rule (mere transit passengers don’t need a cholera inoculation, visitors to the country do)
without the rationale, performance was poor (60%)
with the rationale, subjects performed well (90%)

so if subjects are already familiar with, or are given, a ‘social rules and permissions’ context for the rule, performance is good


domain-specific IF-THEN - deontic reasoning

why does performance improve with concrete contexts?

Cheng and Holyoak: the successful conditions engage a familiar ‘permission’ schema, for social rules about what ought to happen in certain circumstances:
if (you want to) P, then (you must) Q

deontic IF-THEN happens to have the same truth conditions as logical IF-THEN


causal reasoning

why do people make the characteristic error, i.e. choose the Q alternative, especially in abstract versions of the problem?

causal, probabilistic IF-THEN:
Oaksford and Chater (1994) argue that subjects’ choices are rational under this interpretation, which doesn’t have the same truth conditions as logical IF-THEN

e.g. clouds cause rain, so logically
if (there is) rain, then (there must be) clouds
but if the proposition is interpreted as making a causal/correlational claim, clouds also imply some probability of rain

so if there are clouds (Q), it is reasonable to check whether there is rain (P), to collect information about the strength of the relationship


why is thinking error-prone?

‘design limitations’ in cognitive capacities (properties of memory and retrieval, limited WM, difficulty in attending to relevant information, difficulty in shifting cognitive ‘set’)
lead to reliance on heuristics (approximate rules of thumb)
result in intrinsic biases when we apply those heuristics

the habit of reasoning with concrete mental models, coupled with failure to generate, or inability to represent, all possible mental models

‘capture’ of reasoning by automatic domain-specific heuristics that are:
adaptive in the right context
but inappropriate for the case at hand