Reasoning about probabilities Flashcards (C82COG: Thinking)

What is debated amongst mathematicians regarding the nature of probability?

Whether it is Bayesian or Frequentist.


How do Bayesians define probability?

Probability refers to a subjective degree of confidence, and as one can express confidence that a single event will occur, one can express the probability of a single event.


How do Frequentists define probability?

Probability is always defined over a reference class, such as an infinite number of coin tosses. Single events don't belong to a reference class, so they cannot have a relative frequency or a probability.


What do psychologists mean when they refer to normative probabilities?

If the probability is normative, the output of the system is the same as would be returned by a Bayesian machine. This says nothing about whether the internal process is itself Bayesian.


What has much psychological research been conducted with reference to?

Single event probabilities (posterior probability).


What is posterior probability?

The conditional probability assigned to a hypothesis after the relevant evidence is taken into account; in psychology, the probability of a hypothesis (H) given data (D), written p(H|D).
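As a minimal sketch (not part of the original cards), the posterior can be computed directly from Bayes' theorem; the function name and example numbers below are my own:

```python
def posterior(p_h, p_d_given_h, p_d_given_not_h):
    """p(H|D) by Bayes' theorem: p(D|H)p(H) / p(D)."""
    # Total probability of the data: p(D) = p(D|H)p(H) + p(D|~H)p(~H)
    p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)
    return p_d_given_h * p_h / p_d

# Illustrative numbers only: prior 0.5, likelihoods 0.9 vs 0.3
print(posterior(0.5, 0.9, 0.3))  # 0.75
```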


What did Kahneman and Tversky (1972) write about humans and Bayesian evaluation of evidence (probability calculating)?

Humans are "not Bayesian at all".


What did Gould (1992) state about the human mind and probability?

“…our minds are not built…to work by the rules of probability”


What can the heuristics and biases literature be used to demonstrate about probabilistic reasoning?

Essentially, humans are irrational when it comes to probabilistic reasoning.


Can we reason according to Bayes Theorem according to evolutionary psychologists?

Yes, as long as the information is presented in a format we have evolved to process, i.e. natural frequencies rather than probabilities, which are a modern form of mathematical notation.


What is Kahneman and Tversky's view on why humans are irrational when it comes to probabilistic reasoning?

Bayesian reasoning is too complex, so we use heuristics, which are made necessary by the poverty of the input. The mechanism and/or the task are too complex, and we are limited by our capacity to reason.


What is Cosmides and Tooby's view on why humans are irrational when it comes to probabilistic reasoning?

We fail at probabilistic reasoning because the information is presented in the wrong format (not frequencies). The mechanism needed is simple (a few lines of computer code), so the task itself is not overly complex.


What is the problem with Cosmides and Tooby's argument regarding probabilistic reasoning?

It is a tautology and gives no evolutionary reason why Bayesian reasoning shouldn't have developed - even simple sea slugs exhibit habituation and all vertebrates can be classically conditioned. These processes can be described as Bayesian inferences - animal learning approximates Bayes Theorem.


What processes can be described as Bayesian inferences?

Habituation and classical conditioning - learning approximates Bayes Theorem.


What are the advantages of the frequentist format?

- The number of events (sample size), which indicates the reliability of the decision, is retained
- Permits easy updating as new information is collected
- Reference classes can be constructed post-hoc according to new information as the reference class changes


What did Tversky and Kahneman (1982) study?

Base rate neglect in the city cabs experiment.


Describe Tversky and Kahneman (1982)'s experiment.

Participants are told that a cab was involved in a hit and run accident in a place where 85% of the cabs are green and 15% are blue. A witness identified the cab involved as blue, and the court found their reliability to be 80% correct. Participants were asked: "what is the probability that the cab involved was blue not green?"


What did Tversky and Kahneman (1982) find?

The actual probability that the taxi was blue = 0.12/(0.12+0.17) ≈ 0.41. However, most participants thought the taxi was more likely to be blue (i.e. gave probabilities > 0.50), and most stated p = 0.80.
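The stated figures can be checked with a short calculation (a sketch; the variable names are my own):

```python
# Taxi cab problem: posterior probability the cab was blue,
# given a witness who is 80% reliable says "blue".
p_blue, p_green = 0.15, 0.85
accuracy = 0.80  # P(says "blue" | blue) = P(says "green" | green)

hit = p_blue * accuracy                 # blue cab correctly called blue: 0.12
false_alarm = p_green * (1 - accuracy)  # green cab miscalled blue: 0.17
p_blue_given_report = hit / (hit + false_alarm)
print(round(p_blue_given_report, 2))  # 0.41
```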


What did Tversky and Kahneman (1982) attribute base rate neglect to?

The representativeness heuristic - participants focus on the witness' accuracy and neglect the base rate of the city's cabs.


What did Casscells et al. (1978) study?

Base rate neglect using the medical diagnosis problem.


What did Casscells et al. (1978) do?

Asked medical students: if a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person's symptoms or signs?


What did Casscells et al. (1978) find?

18% responded 2% (correct Bayesian inference)
45% responded 95% (neglected base rate)
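The correct 2% answer follows from Bayes' theorem (a sketch; the problem is usually read as assuming a perfectly sensitive test):

```python
prevalence = 1 / 1000
false_positive_rate = 0.05
sensitivity = 1.0  # assumption: the test never misses the disease

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * false_positive_rate
p_disease_given_pos = true_pos / (true_pos + false_pos)
print(round(p_disease_given_pos, 2))  # 0.02
```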


What did Casscells et al. (1978) conclude?

That even medical students ignore base rates for diagnostic problems.


What did Cosmides & Tooby (1996) do?

Presented a medical diagnosis problem similar to Casscells et al. (1978) in both frequency and probability formats. Participants had to estimate how many of 1000 randomly selected people who tested positive actually have the disease.


What did Cosmides & Tooby (1996) find?

When presented in a frequency format, around 95% of participants correctly answered 2%.


What did Cosmides & Tooby (1996) conclude?

That frequency format abolishes base rate neglect. The relevant cognitive module is domain specific: it accepts input only in a frequency format, so it cannot reason over single-event probabilities.


What is a problem with Cosmides & Tooby (1996)?

Griffin & Buehler (1999) and Evans et al. (2000) could not replicate their results.


How is the relationship between base rate neglect and frequency format shown in the conjunction fallacy?

A single-event version is compared with a frequency version of the usual Linda bank-teller problem: participants are asked either to "rank with respect to their probability" or "to how many out of 100 who are like Linda do the following statements apply?". The fact that the bank-teller category includes both feminists and non-feminists is clearer in the frequency format, and research has shown that participants are more likely to avoid the conjunction fallacy when the problem is presented in frequencies.


How are base rate neglect and frequency format related to the Monty Hall problem?

Both Aaron & Spivey (1998) and Krauss & Wang (2003) found that frequency formats elicited higher switch rates than probability formats.


What are preference reversals?

A key phenomenon that violates rational choice theory.


Describe a typical preference reversal experiment, e.g. Lichtenstein & Slovic (1971).

Participants are presented with pairs of monetary gambles or 'bets' which offer the possibility of winning/losing certain amounts.
- One bet in each pair is relatively safe, with a high probability of winning a small amount (P-bet)
- The other bet is riskier, with a small probability of winning a large amount ($-bet)
Later the bets are presented individually and participants are asked to assign a monetary value to each one.


Why do preference reversals violate RCT?

Because participants typically prefer the P-bet in the choice phase but assign the $-bet a higher monetary value, which according to Angner (2002) reflects its utility. Such inconsistent behaviour appears to be a robust phenomenon and suggests human irrationality is 'systematic and widespread'.


What did Tunney (2006) discover?

That preference reversals are diminished when presented as frequencies.


Describe the mammography problem (probability format).

The probability of breast cancer is 1% for a woman aged 40 who participates in routine screening. If a woman has breast cancer, there is an 80% probability she will have a positive mammogram; the false positive rate is 9.6%. If a woman of this age group has a positive mammogram in routine screening, what is the probability she has breast cancer? The answer is actually 7.8%, but 95% of doctors say it is 80%.
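The 7.8% figure can be reproduced as follows (a sketch with my own variable names):

```python
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096  # false positive rate

num = p_cancer * p_pos_given_cancer               # 0.008
den = num + (1 - p_cancer) * p_pos_given_healthy  # 0.008 + 0.09504
print(round(num / den, 3))  # 0.078
```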


Describe the mammography equivalent problem (frequency format).

Imagine a healer in a primitive society with no probability theory. In her lifetime she has seen over 1000 people; 10 had the disease, 8 of whom showed the symptom. Of the 990 not afflicted, 95 also showed it. A new patient has the symptom; do they have the disease? The answer is 8/(8+95) ≈ 7.8%.
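In the frequency format the same answer drops out of two raw counts, with no base rate or normalisation needed (a sketch):

```python
diseased_with_symptom = 8   # of the 10 diseased people the healer saw
healthy_with_symptom = 95   # of the 990 healthy people

p_disease = diseased_with_symptom / (diseased_with_symptom + healthy_with_symptom)
print(round(p_disease, 3))  # 0.078
```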


Why do frequencies elicit normative reasoning?

According to Gigerenzer & Hoffrage, Bayesian computations are simpler in a frequency format because only the absolute frequencies of (D|H) and (D|¬H) need to be stored. Base rates need not be attended to, which is why they are neglected in probability formats, where they must be explicitly combined. Frequencies may elicit normative reasoning because they are so similar to natural sampling.


What three methods did Sedlmeier & Gigerenzer (2001) use to teach Bayesian reasoning?

In all, participants were given all the relevant information and told which numbers corresponded to which part of the formula.
1. Rule training
- Given formula, all participants needed to do was the calculation to return p(H).
2. Frequency grid
- Participants indicated p(H) by choosing between two grids, one showing the correct answer and one an incorrect answer.
3. Frequency tree
- The tree doesn't represent individual cases but constructs a reference class (total number of observations) for each branch.


What did Sedlmeier & Gigerenzer (2001) find?

Initially all groups showed improvement, but after a retention interval the rule training group's performance declined, i.e. they began to neglect base rates again, whereas the frequency representation group retained their knowledge for a period of at least 5 weeks.


What do Sedlmeier & Gigerenzer (2001)'s findings suggest?

That Bayesian computations are simpler when information is represented in natural frequencies.


What did Goodie & Fantino (2001) show?

That base rate neglect disappears following exposure to outcomes but can return under time pressure.