Kennipart 2 Flashcards
(27 cards)
Judgment and decision definition
Judgment: estimating the likelihood of events from incomplete information, i.e. calculating probabilities. Different from a decision, which is actively choosing from a number of possible actions. Judgment is calculating the probability of each option when the exact likelihood is unknown.
Disease diagnosis terms
Hit: a + result when you actually have the disease. Some tests are more accurate than others (low or high sensitivity). Miss: a - result when you do have the disease. Correct rejection: a - result when you don't have the disease. A test is specific if correct rejections outnumber false alarms. False alarm: a + result when you don't have the disease. There is a trade-off between sensitivity and specificity.
Base rate prevalence
E.g. if 50% of the population has the disease and the test is 95% accurate, there are 5% false alarms and 5% misses, so a + result is informative. If only 1% have the disease and the test is 95% accurate, out of 100 people roughly 1 has a hit, most have correct rejections, but several have false alarms. So a + result doesn't necessarily mean disease when prevalence is low.
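A minimal sketch of the arithmetic behind this card (the function name and parameter values are mine for illustration), assuming a test with 95% sensitivity and 95% specificity:

```python
# How prevalence changes what a positive result means,
# assuming 95% sensitivity and 95% specificity (5% misses, 5% false alarms).

def positive_predictive_value(prevalence, sensitivity=0.95, specificity=0.95):
    """P(disease | positive test) for a population with the given base rate."""
    hits = prevalence * sensitivity                       # true positives
    false_alarms = (1 - prevalence) * (1 - specificity)   # false positives
    return hits / (hits + false_alarms)

print(round(positive_predictive_value(0.50), 3))  # → 0.95 (high base rate)
print(round(positive_predictive_value(0.01), 3))  # → 0.161 (low base rate)
```

With the same test, a + result means a 95% chance of disease at 50% prevalence but only ~16% at 1% prevalence.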
Bayes theorem - definitions
What a rational person should do with information: the probability of an event based on current information and prior beliefs. Beliefs should be updated based on new evidence, and context/base rate should affect them. Prior, p(H): the probability the hypothesis is correct (e.g. having the disease, given the base rate), i.e. the belief before evidence. Likelihood, p(D|H): the probability of the data given the hypothesis, e.g. the probability of the test result being +/- given that you have the disease. Posterior, p(H|D): the probability of the hypothesis given the data, e.g. the belief that you have the disease given the test result.
Equations - look them up
The odds ratio is the probability of having the disease vs. not having it. E.g. if 1/100 have the disease (base rate), 1 has it and 99 don't, so the prior odds are 1/99. Likelihood ratio: if the test is 95% sensitive, p(+ | disease) = 0.95, and with a 5% false-alarm rate, 0.95 / 0.05 = 19. To get the posterior, multiply the prior odds by the likelihood ratio: 1/99 × 19 ≈ 0.19, i.e. about a 16% probability. If the result is -, correct rejection is 0.95 and miss is 0.05, so the likelihood ratio is again 19, but now in favour of not having the disease: posterior odds = 99/1 × 19 = 1881. A - result can be trusted more.
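The odds-form update on this card can be sketched like this (variable names are mine; numbers are the card's 1-in-100 base rate and 95% sensitive/specific test):

```python
# Odds-form Bayes update: posterior odds = prior odds * likelihood ratio.

prior_odds = 1 / 99          # 1 person has the disease for every 99 who don't

# Positive result: LR = P(+ | disease) / P(+ | no disease)
lr_positive = 0.95 / 0.05    # = 19
posterior_odds_pos = prior_odds * lr_positive        # ≈ 0.19
prob_disease = posterior_odds_pos / (1 + posterior_odds_pos)
print(round(prob_disease, 2))                        # → 0.16

# Negative result: update the odds of NOT having the disease
lr_negative = 0.95 / 0.05    # P(- | no disease) / P(- | disease) = 19
posterior_odds_neg = (99 / 1) * lr_negative          # = 1881
print(round(posterior_odds_neg))                     # → 1881
```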
Base rate neglect
Kahneman & Tversky (1973): out of 100 people, participants were told there were 70 lawyers and 30 engineers, or the other way around. Then they were told about a man who is married, conservative, not political, and likes numbers, and asked whether he was an engineer or a lawyer: ~90% said engineer regardless of the split. But surely if told 0 were engineers, no one would say engineer, so there's a limit. Casscells (1978): asked medical students and staff from Harvard: if a test for a disease with 1/1000 prevalence has a false-alarm rate of 5%, what is the chance a + result actually means disease? The answer is ~2%, but 45% said 95%: they ignored the prevalence.
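A quick check of the Casscells answer (a sketch assuming, as the problem usually does, that the test catches every true case, i.e. sensitivity 1.0):

```python
# Casscells et al.: 1/1000 prevalence, 5% false-alarm rate,
# assumed perfect sensitivity.

prevalence = 1 / 1000
hits = prevalence * 1.0                     # every true case tests positive
false_alarms = (1 - prevalence) * 0.05      # 5% of the healthy test positive
p_disease_given_positive = hits / (hits + false_alarms)
print(round(p_disease_given_positive, 3))   # → 0.02, not 0.95
```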
Conjunction fallacy
Tversky & Kahneman (1983): told about Linda, who is bright, outspoken, and goes to protests, and asked whether she is (1) a bank teller or (2) a bank teller active in the feminist movement. Many chose 2 even though the evidence says they should choose 1, as there is a higher chance of just being a bank teller. The conjunction fallacy is the assumption that the probability of two events together is higher than of one alone. People value specificity, e.g. liking a statement more because it is specific, and may ignore redundant information, e.g. 'bank teller' appears in both options.
Optimal judgment
Human judgment is not optimal due to base rate neglect and the conjunction fallacy, but including information about causes improves performance: Krynski & Tenenbaum, e.g. explaining the chance that A and B results were mixed up.
Presenting frequencies improves performance, e.g. '1 out of 20'. Personal relevance increases it too, e.g. 'do you have disease X?'. But these still don't make performance perfect.
Heuristics
Representativeness: assuming an object belongs to a category because it is representative of it, e.g. the bank teller. Availability: the frequency of an event is estimated by the ease of retrieval. Lichtenstein (1978): publicised causes of death were estimated to be more frequent than unpublicised ones, e.g. murder over suicide. Pachur (2012): three drivers: direct experience, emotional response, and media coverage. Can be reversed: Oppenheimer (2004): when asked which surname is more common, famous or not, people say not famous. Weaknesses: vaguely defined; doesn't specify when particular heuristics are used; judgments may be unbiased but based on poor information; a list of heuristics doesn't amount to a theory.
Theories
Dual process theory (Kahneman): System 1 is fast and automatic; System 2 is slow and effortful. System 1 is heuristic and prone to errors. We are cognitive misers: we could use System 2 to get the correct answer but often use System 1 as it's easier. System 2 also makes errors; the theory doesn't explain why, is ill-defined, and it's hard to predict which system will be used.
Fast and frugal- gigerenzer
How are we so smart in real life? We use heuristics because they are fairly accurate and we don't have time. Take the best, ignore the rest: asked which city is bigger, use a search rule (e.g. do I know this city?), then other cues like having an airport. Stopping rule: as soon as one cue discriminates, stop the search. Decision rule: choose the option favoured by that cue. Search is serial: once you find a discriminating cue, stop. Recognition heuristic: select the object you recognise.
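The search/stopping/decision rules above can be sketched in a few lines (cities, cue values, and function names are made up for illustration, not from the studies):

```python
# Take-the-best: check cues in order of validity; the first cue that
# discriminates between the options decides, and remaining cues are ignored.

def take_the_best(city_a, city_b, cues):
    """Return the city guessed to be larger, or None if no cue discriminates."""
    for cue in cues:                    # search rule: try cues in order
        a, b = cue(city_a), cue(city_b)
        if a != b:                      # stopping rule: first discriminating cue
            return city_a if a > b else city_b   # decision rule
    return None

# Hypothetical cue values: 1 = recognised / cue present, 0 = not
recognised = {"Munich": 1, "Herne": 0}
has_airport = {"Munich": 1, "Herne": 0}

cues = [recognised.get, has_airport.get]
print(take_the_best("Munich", "Herne", cues))  # → Munich (recognition decides)
```

Because recognition discriminates, the airport cue is never consulted: that is the "ignore the rest" part.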
Goldstein and gigerenzer 2002
US students chose between two German cities and picked the one they recognised 90% of the time. Pachur (2012): recognition was used for 'which city is bigger' but not for 'which is further north'; over all questions, the correlation between cue validity and usage was 0.64. Richter & Späth (2006): German students chose the recognised city 98% of the time when it had an international airport, and still 82% of the time even when the other (unrecognised) city had the international airport. But recognition is not always used, only when the information is valid, and there is no definition of validity. We also don't know whether processing is serial.
Tutorial
EVR had ventromedial PFC damage: normal IQ but a personality change, acting inappropriately and unable to make decisions. The somatic marker hypothesis is that we unconsciously simulate the outcomes of decisions; the body reacts, which causes a gut feeling (somatic marker) and allocates attention. A rough guide, but not always right. With medial PFC damage, patients show no SCR (skin conductance response) to emotional stimuli but do show an orienting response. Iowa gambling task: decks are good or bad (low or high risk); normal participants switch to the low-risk decks but damaged patients don't.
Utility theory
Assumes people act rationally to maximise expected utility = p(given outcome) × utility of outcome. Utility is the subjective value we attach, which can be irrational. We calculate the expected utility of each outcome and choose the greatest. The starting point is irrelevant; the emphasis is on the final amount. E.g. people choose a safe option like 100% chance to win 500 over a 50/50 win/lose gamble even if it is worth more.
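The calculation on this card is just a probability-weighted sum; a minimal sketch (the gamble amounts are illustrative, not from a specific study):

```python
# Expected utility: EU = sum of p(outcome) * utility(outcome);
# utility theory says to choose the option with the greatest EU.

def expected_utility(gamble):
    """gamble: list of (probability, utility) pairs."""
    return sum(p * u for p, u in gamble)

safe = [(1.0, 500)]               # 100% chance of winning 500
risky = [(0.5, 1200), (0.5, 0)]   # 50/50 gamble with a higher expected value

print(expected_utility(safe))     # → 500.0
print(expected_utility(risky))    # → 600.0: higher EU, yet many pick safe
```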
Prospect theory
A better descriptive account than utility theory. Kahneman & Tversky (1984): assume people identify a reference point, representing the current state as the starting point; people are more sensitive to potential losses than to gains (loss aversion); people overweight rare events. E.g. heads wins 200, tails loses 100: people refuse. You can plot a value function, the link between utilities and objective value, which is steeper for losses (loss averse).
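A sketch of a prospect-theory-style value function; the parameter values are illustrative (loosely following Tversky & Kahneman's published estimates of diminishing sensitivity α ≈ 0.88 and loss aversion λ ≈ 2.25), not part of this card:

```python
# Prospect-theory value function: concave for gains, steeper for losses.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain/loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha               # diminishing sensitivity to gains
    return -lam * ((-x) ** alpha)       # losses loom larger (lambda > 1)

# The 50/50 "win 200 / lose 100" bet from the card:
prospect = 0.5 * value(200) + 0.5 * value(-100)
print(prospect < 0)  # → True: the loss looms larger, so people refuse the bet
```

Even though the bet has positive expected value in money, the subjective value is negative, which is why the card says people refuse it.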
Framing effect
Tversky & Kahneman (1981): the US preparing for an Asian disease. Gain frame: 200 people will be saved, or a 1 in 3 chance that 600 are saved; most choose the first even though both have the same expected number saved. Loss frame: 400 will die, or a 1 in 3 chance that no one dies and a 2 in 3 chance that 600 die; people choose the second, as they are loss averse and overweight the possibility that no one dies.
Framing effect weaknesses
Mandel & Vartanian (2011): ambiguity, as people may read '200 saved' as 'at least 200 saved'; when the programs are fully described, the effect disappears. Almashat (2008): in medical scenarios involving cancer, when participants list strengths/weaknesses and justify their choice, the framing effect goes away (possibly two systems, where you deliberately weigh the options?). Still seen in the real world, e.g. 'reducing the voting age' got 37% support but 'giving young people the vote' got 52%.
Sunk cost effect
People pursue a goal after it is shown to be suboptimal because resources have already been invested. Dawes (1988): you pay a deposit for a hotel, you get ill, but you go anyway. Weakness: Balints and Ely (2011): when MBA students were given information about investments, they showed the opposite of the sunk cost effect, e.g. being more likely to switch.
Overweighting rare event
Explains the lottery. Hertwig (2004): people overweight rare events when judgments are based on descriptions, but when based on past experience they underweight them. Due to low sampling, e.g. never having experienced the rare event; links to the availability heuristic.
Loss neutrality
Either 50% win 1 / 50% lose 1, or 50% win 5 / 50% lose 5. Prospect theory predicts loss aversion, so people should pick the first, but actually there is a 50/50 split. It does break down with extreme amounts, though.
Utility vs prospect theory
Prospect theory is more detailed. Both assume a value function with a non-linear mapping, but prospect theory makes more assumptions. Prospect theory predicts a wider range of outcomes due to loss aversion, overweighting of rare events, and the reference point. Weaknesses: neither explains why the value function exists; effects can be reversed; no individual differences, e.g. high self-esteem prefers risk, high narcissism is sensitive to reward, low narcissism to loss.
Emotional and social factors
Anticipated loss leads to negative emotions. Kermer (2006): a -3 loss has a greater impact than a +5 gain; impact bias, as we overestimate the intensity of negative emotions due to loss. Giorgetta (2013): for decisions made by the participant or a computer, people felt regret if made by the participant but disappointment if by the computer.
Agency and omission bias
People prefer to do nothing instead of taking a risk. Brown (2010): parents were more willing to accept the disease than the risk of harming their child via a vaccine. Wroe (2005): high anticipated responsibility and regret if the child has a reaction to the vaccine. Aberegg (2005): doctors treating lung disease were less likely to choose the best plan when given the option to do nothing (40%) than when not given that option (50%).
Status quo bias and accountability
Status quo bias: preferring to keep rather than change a decision. Samuelson & Zeckhauser (1988): people preferred to keep the same allocation of their pension despite there being no cost to changing, although it takes effort. Accountability: Simonson & Staw (1992): increased accountability increases the sunk cost effect; a greater need to justify the original decision means sticking with it for longer.