Exam 1 Flashcards

(102 cards)

1
Q

Traditional economics

A
  1. Assumes people are rational, willful, and self-interested
  2. You can change behavior using 2 methods:
    A. Incentives (usually monetary)
    B. Education
2
Q

Problem with traditional economics

A

It doesn’t always work. More on this later, but other things besides money and education can change people’s behavior.
Ex: a hotel wants guests to reuse towels. Two messages are tested: 1. Help save the environment. 2. Join your fellow guests in helping to save the environment.
Message 2 works much better because it motivates people socially. It uses psychology to get people to do it.

3
Q

Behavioral economics

A

Alternative approach to traditional economics.

  1. Assumes people predictably deviate from optimality. (We are boundedly rational, boundedly willful, boundedly self-interested)
  2. You can change behavior using psychology (e.g. “choice architecture” or “nudging,” emotion, etc.)

Ex: if you’re trying to get people to take the stairs instead of the escalator, make the stairs wider and place them in the middle, with the escalator off to the side.

4
Q

When Evidence Says No, but Doctors Say Yes reading

Evidence Based Management reading

A
  1. There are lots of problems with drugs and treatments. Doctors go with their intuition over what the evidence says and rely on what they learned in school; it may be wrong, but they trust it because they learned it that way. They also give treatment to avoid liability issues.
  2. In general, people don’t use enough evidence. Evidence-based management helps produce better decisions, cuts through iron-clad convictions and emotional reasoning, and lets the best choices be made.
5
Q

Outcome bias

A

The tendency to evaluate the quality of a decision by the outcome of that decision.
We cannot evaluate a decision without considering the unobservable counterfactual.
Ex: say we need to decide whether to release a film in April or December. The film needs to gross $360M to be profitable and it grosses $2.7B. Was releasing in April a good decision?
Not necessarily; we need to know what would have happened had we released the film in December.
Don’t evaluate a decision by its result; instead, be evidence based.
Even in football, scoring a touchdown on a play doesn’t make the play call a good decision. We need to consider the unobservable counterfactual.
Because of luck, you can make a good decision and get a bad outcome, and vice versa.

6
Q

What do we do when we can’t observe the counterfactual

A

Decision making is easier when we can predict the future. We want to know: if we do X, Y will happen.
But we often can’t do this, and we can’t observe the counterfactual. So what do we do?
Approach 1: rely on salient examples/stories/experience. For example, maybe you know of a similar film that was released in April and did well. The problem is that this is casual benchmarking —> copying a successful company in your industry without considering why it worked. Like seeing Gates and Zuckerberg drop out and then dropping out yourself: not a good idea.

Approach 2: look at what is typically done. For example, maybe most films of this genre are released around April. The problem is that this film could be better than the rest of the genre, and just because something is done one way doesn’t mean it should be done that way. Consider why it was done that way and whether it worked.

Approach 3: look for data on consumer demand. For example, maybe the target audience is more likely to go to the theatre in April to watch this kind of film than in December. The problem here is sampling error and possibly bad data.

Approach 4: look for data on the effectiveness of different film release times. For example, maybe most films of this genre that get released in April outperform most films of this genre that get released in December.
The problem is that the data may not be good, and you still don’t know causality. You’d need to run an experiment.

7
Q

Experimentation

A

The best way to test causality is to run experiments.
Experiments have 3 defining features:
1. Independent variable: the thing you change across conditions/treatments
2. Dependent variable: the thing you are trying to change
3. Random assignment: randomly assign units to conditions/treatments. Random assignment helps ensure the treatment(s) and control differ ONLY in the treatment, so that, all else equal, changing the independent variable this way changes the dependent variable this much. It rules out a confounding variable causing the change; that’s why we need to randomly assign, so we can test causation rather than just correlation.
Without random assignment, you ultimately have a correlation, and correlation does not equal causation.
Some evidence is presented as causal even though it is only correlational. Be careful: for a claim to be causal, there must be a study with random assignment.
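A minimal Python sketch of these three features, not from the course: the condition names, effect size, and outcome scale are made up for illustration.

```python
import random
import statistics

random.seed(42)

# Hypothetical question: does an emotional ad (treatment) raise purchase
# intent (the dependent variable) relative to an informational ad (control)?
people = range(1_000)

# Random assignment: each person is equally likely to see either ad.
assignment = {p: random.choice(["emotional", "informational"]) for p in people}

def purchase_intent(condition):
    # Made-up data-generating process: emotional ads add +0.5 on average.
    return random.gauss(5.0, 1.0) + (0.5 if condition == "emotional" else 0.0)

scores = {p: purchase_intent(assignment[p]) for p in people}
treated = [scores[p] for p in people if assignment[p] == "emotional"]
control = [scores[p] for p in people if assignment[p] == "informational"]

# Because assignment was random, the groups differ (in expectation) only in
# which ad they saw, so this difference estimates the causal effect of the ad.
print("estimated effect:", round(statistics.mean(treated) - statistics.mean(control), 2))
```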

8
Q

Example with experiment:

  1. Say we need to choose between an emotional or an informational advertisement
  2. Evaluate whether Disney acquiring Fox for a large amount of money was a good decision
A
  1. With advertising, randomly assign some people the emotional ad and others the informational ad. If you can’t observe the counterfactual, what data can you use?
    Look at comparables, which ad types have had the best effect, industry studies, what has worked best for your company in the past, and consumer studies on ad types.
  2. Build a model, look at the data, scenario analysis, DCF, comparables, precedent transactions.
9
Q

Why does correlation not equal causation?

A

Some difficulties drawing causal conclusions from correlational evidence.

  1. Reverse causality. Ex: those who quit smoking are more likely to die from lung cancer (people often quit because they are already sick).
  2. Third-variable causality. Ex: the supposed health benefits of vitamins; global temperatures rising while the number of pirates declined (“stop global warming, become a pirate”).
  3. Selection biases. Ex: going to an elite private school appears to increase your earning power, but the people selected by the school may be the strongest candidates and would have earned just as much without the private school.
10
Q

Conclusion from evidence based I

A
  1. You should not evaluate a single decision based on the outcome of the decision (you need to know the counterfactual) —> don’t fall prey to outcome bias
  2. The best way to test causality is to run experiments: independent variable, dependent variable, random assignment
  3. Correlation is not causation, but it is often presented as such, so be careful.
11
Q

AB test reading

How little we know reading

A

A/B testing: test different web designs in real time by giving some visitors one version of the site and others a different version.
It allows data-driven decisions instead of intuition or HiPPOs (the highest-paid person’s opinion). You can test almost everything and let the data make the call; the risks are that you may only make tiny improvements and that the data can make the “best idea” obsolete.

How little we know: it is hard to explain good vs. bad decisions. We look back and label an innovation a success (expanding the brand) or a failure (drifting/straying from it), which is revisionist history.
You can run experiments on small, repeatable things, like where to place something in a store, but it is hard for larger, one-off decisions. It is dangerous when stories are posed as science.

12
Q

How do you evaluate a decision

A

The biggest impediment to understanding the past is that we already know how it turned out.
Don’t evaluate past decisions by what wound up happening. We see that in the “How little we know” reading, where the LEGO decisions are evaluated based on whether they worked. Explaining a decision after the fact, when you already know what happened, is not good!
Ex: coin 1 has a 55% chance of landing heads and 45% tails. You choose heads and it lands tails. It was still the right decision; the result doesn’t determine whether it was a good decision or not.
The same goes for choosing between project 1 and project 2 with similar success odds: you can’t evaluate the decision by what happened.

13
Q

Being evidence based

Example: NYC considers placing a 16-ounce limit on the serving size of sodas. How could we test this?

A

What do we want to know? What do we actually know? How can we make progress?
Evidence reduces some uncertainty, but even with a well-conducted experiment, some uncertainty will remain.

NYC example, possible designs:
1. Randomly assign some parts of the city to have the policy but not others, and ask people in different parts of the city to track everything they eat and drink. Compare the reported consumption of those in portion-limit areas with those in no-limit areas.
2. Randomly assign some restaurants to cap sugar-sweetened beverage sizes at 16 ounces. See how many calories customers consume in restaurants with vs. without the restriction.
3. Find restaurants willing to test this and randomly assign days for each restaurant to have the policy in place. See how many calories customers consume on days with vs. without the policy.

However, even with an ideal experiment, we wouldn’t know for certain that a limit on soda sizes reduces calorie consumption. No experiment is perfectly valid.

14
Q

Is the experiment valid?

A

Internal validity: validity within the experiment.

External validity: validity outside the experiment.

15
Q

Internal validity

A

Validity within the experiment. Have you really established a causal relationship or did the conditions vary in some way other than the treatment? Did the independent variable influence the dependent variable? Is there really a causal relationship?
Did random assignment fail?
For example, if just by chance, we randomly assigned more chain vs. non chain restaurants to the portion limit.

16
Q

Threats to internal validity

A
  1. Random assignment didn’t actually take place. The problem is that there could then be a confounding variable.
  2. Small number of randomly assigned units. You need a large sample size to prevent a few observations from carrying a lot of weight and failing to represent the population.
  3. Attrition: people drop out of the study before being measured, which skews the data.
17
Q

External validity

A

Validity outside the experiment.
Will this work in a different setting or situation?

Does that causal relationship generalize to other situations (and in particular, the situation I am interested in)?

For example, perhaps the policy doesn’t work at the restaurants we tested but would work in convenience stores or schools.
The best way to address this is to conduct multiple experiments under different circumstances, with different materials, participants, etc. If all experiments suggest the same result, you can be more confident that it generalizes.

18
Q

Random assignment vs random sampling

A

Random assignment: randomly assign units (people, restaurants, zip codes) to different conditions.
Random assignment ensures internal validity.
It concerns units already inside the experiment being assigned to different groups (i.e., treatment vs. control).

Random sampling: randomly sample units (people, restaurants, cities) to include in your study. This ensures external validity.

Start with the population, randomly sample from it to get your sample, then randomly assign within the sample to get your treatment and control groups.
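A small Python sketch of the distinction, assuming a made-up population of 10,000 restaurant IDs:

```python
import random

random.seed(0)

# Hypothetical population of 10,000 restaurants.
population = list(range(10_000))

# Random sampling: which units get into the study at all (external validity).
sample = random.sample(population, k=200)

# Random assignment: which condition each sampled unit gets (internal validity).
random.shuffle(sample)
treatment, control = sample[:100], sample[100:]
print(len(treatment), len(control))
```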

19
Q

RA or RS:
  1. Keith exposes half his participants to an episode of a sitcom and half to a violent show and then observes them for signs of aggressive behavior.
  2. Laurie picks 100 people to be in her study on the effects of listening to music while studying.
  3. Chris puts 20 children in a drumming class and contrasts their drumming abilities with 20 children who haven’t had any drum instruction.
  4. Melissa wants to study the effects of running on happiness. She selects 50 runners and 50 non-runners for her study.
A
  1. Random assignment
  2. Random sampling
  3. Random assignment
  4. Random sampling.
20
Q

Some caveats with experiments

A

Sometimes experimentation is not possible or not worth it. It may be too costly to randomly assign zip codes or restaurants to adopt the portion limit and then test the effects on long-term calorie consumption. But don’t give up: do the best you can to accumulate good evidence while recognizing the limitations of your approach.

When you can’t experiment, you could look at other cities (though there may be confounding variables) or look at NYC before and after the policy.

21
Q

When to not experiment

A
  1. If the experiment would be too costly in money or time
  2. If the benefit of experimentation is low (either because the variable will likely have no effect or because the potential payoff is small)
  3. If it is physically, legally, or ethically impossible for you to randomly assign units to treatments

However, still always be evidence based!

22
Q

Lessons about being evidence based

A
  1. Because we usually don’t observe counterfactuals, we cannot know whether a decision was good or bad
  2. Causal inferences depend on random assignment
  3. Understand the quality of the evidence you are using to make a decision. Ask “how do I know?”
  4. Always strive for higher quality evidence
  5. Be willing to act on that evidence
23
Q

Understanding evidence: randomness and chance

What chance looks like

A

We all share a stereotype about what randomness and chance look like. For example, we think the same number shouldn’t appear too many times in a row, but in real random data it often will. In fact, you can tell some data streams are fake because they don’t have enough repetition.

Chance is streakier and lumpier than we think,
and this leads to important errors of inference.

For example, Apple iPod Shuffle customers complained because they thought certain artists were playing more often than they should. But it really was random; that’s just what random looks like (and many coincidences can make a shuffle seem non-random, like consecutive songs by artists who are adjacent alphabetically or songs that share a word in the title). Apple had to make the shuffle less random to make it feel more random.
The non-randomness is not in the iPods but in ourselves.
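A short simulation (my illustration, not course material) of how streaky fair coin flips really are:

```python
import random

random.seed(1)

def longest_streak(flips):
    # Length of the longest run of identical outcomes in a sequence.
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Simulate many sequences of 100 fair coin flips.
streaks = [longest_streak([random.choice("HT") for _ in range(100)])
           for _ in range(10_000)]

# People often expect the longest streak in 100 flips to be around 4; in real
# random data it is usually 6-8, which is why true randomness looks "too streaky."
print("average longest streak:", round(sum(streaks) / len(streaks), 1))
```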

24
Q

Cancer cluster myth reading

The odds of that reading

A

Cancer cluster: a bunch of neighborhoods with more people than normal with cancer. But it is often just random chance, not meaningful. In the short run, expect anything: random data won’t necessarily look random, even though it is.
The odds of that: lots of things can happen that seem like there must be a reason, but are really just coincidence. Seen with the string of scientists’ deaths: it looks like there is a reason, but it is a coincidence.

We try to make sense of events that are mostly just coincidences; a personal connection pushes you to look for a reason.
Hinkley, a town in California, had carcinogens in the water, but there isn’t much more cancer there than anywhere else, so it is probably just a coincidence.

25
What chance looks like: published data
Example: a study with 12 different conditions where, after a manipulation, participants take a 20-question trivia test. 6 conditions should score lower and 6 conditions should score higher. The data turn out to be fake and manipulated: the results have far fewer streaks than they should (never more than 3 in a row), the condition means are too evenly spread apart, and the 6 conditions that were supposed to score lower all do. The probability of getting numbers so close to each other from the same distribution was calculated and found to be very small. The researcher admits the data are fake. Because it is generated by humans, fraudulent data is not streaky enough.
26
Roulette and meaning of it
Betting on where a ball will land on a roulette wheel. The casino shows what recently happened to draw people in and get them to bet against the streak (bet that the streak will end). With longer red streaks, we see more people betting on black. When we believe that chance is constant, we expect streaks to end: the gambler’s fallacy.
27
Gambler’s Fallacy
When we believe that success/failure rates are unchanging, the probability of failure seems greater after a string of prior successes, i.e., we believe that chance will correct itself. People are less likely to bet on lottery numbers that just won. In reality, those numbers are no more or less likely to win as time goes on; the probabilities are independent.
28
Example: Imagine we had 50 tails in a row. What is true
Two things are true: 1. the absolute number is 50 more tails than heads; 2. the proportion is 100% tails.
Say we flip another 2,950 coins. We expect those flips to come out about 50% tails and 50% heads; the coin does not “make up” for the streak, because the flips are independent.
So after all 3,000 flips we still expect about 50 more tails than heads in absolute terms (roughly 1,525 vs. 1,475), and the proportion of tails to be about 50.8%, not 50%. The streak won’t be corrected by extra heads; the proportion just gets diluted toward 50% in the long run, while tails still > heads in count.
Chance produces bigger differences than we think. When people imagine flipping 20 coins, they think the number of heads should be 8-12, but actual flips range anywhere from about 4 to 16. Chance produces bigger differences than we think.
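A quick simulation checking the arithmetic above (illustration only):

```python
import random

random.seed(7)

# Start from the premise: we have already seen 50 tails and 0 heads.
tails, heads = 50, 0

# Flip 2,950 more fair coins; they do not "make up" for the earlier streak.
for _ in range(2_950):
    if random.random() < 0.5:
        tails += 1
    else:
        heads += 1

# Expect roughly 1,525 tails vs 1,475 heads: the 50-flip gap persists in counts,
# while the overall proportion of tails is diluted toward 50% (about 50.8%).
print(tails, heads, round(tails / 3_000, 3))
```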
29
NFL draft example
Pro Bowlers by drafting team shows big differences across teams, but how much of that is due to skill vs. luck/chance? Cade Massey looks at Pro Bowlers grouped by player birthday and sees a similar distribution. So even dramatically disparate outcomes can arise by chance.
30
How to test if in a high chance environment
Correlate 2 measures of the same thing. If the correlation is greater than about 0.7, performance is fairly stable (and could be due to skill). Example: exam 1 vs. exam 2 scores. A high correlation suggests skill rather than luck, because luck regresses to average while skill persists. In the NFL draft, this year’s vs. next year’s draft performance has a negative correlation, suggesting chance: doing better one year tells you little about the next.
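A sketch of the test in Python (statistics.correlation requires Python 3.10+), using a made-up skill-plus-luck model of exam scores:

```python
import random
import statistics

random.seed(3)

# Hypothetical model: each exam score = stable skill + fresh random luck.
skills = [random.gauss(75, 10) for _ in range(500)]
exam1 = [s + random.gauss(0, 5) for s in skills]
exam2 = [s + random.gauss(0, 5) for s in skills]

# Correlate two measures of the same thing: a high r (> ~0.7) suggests a
# low-chance (skill-driven) environment; a low r suggests mostly luck.
print(round(statistics.correlation(exam1, exam2), 2))
```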
31
Mutual fund performance - how to tell if in high chance environment
Is annual mutual fund performance largely determined by chance? Correlate year to year performance. If it is low, then disparities in annual mutual fund performance are largely chance driven.
32
Baby example with large and small hospital
Smaller hospital: about 15 babies per day; larger hospital: about 45. Over 1 year, each hospital recorded how many days more than 60% of the babies born were boys. Which hospital recorded more such days? Likely the smaller hospital: a smaller sample size means greater variance (greater dispersion around the average), so extreme days are more common and each day’s proportion is a less reliable estimate.
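A simulation of the hospital example (my sketch; a 50% boy rate is assumed):

```python
import random

random.seed(11)

def extreme_days(babies_per_day, days=365, threshold=0.6):
    # Count days on which more than `threshold` of the births were boys.
    count = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(babies_per_day))
        if boys / babies_per_day > threshold:
            count += 1
    return count

# The small hospital (15/day) records far more >60%-boys days than the large one (45/day).
print("small:", extreme_days(15), "large:", extreme_days(45))
```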
33
Sample size: racquetball match
If you are the inferior player, you prefer a shorter match/series, because there is more chance (and so more chance to win) in a shorter match. If you are the better player, you prefer a longer one: less variance and luck, so more skill will come through. Skill shows with more reps. A larger sample size has less variance and is more normal; the sample mean is closer to the population mean. The larger the sample size, the more confident you can be that your sample mean is a good representation of the population mean.
34
Sample size neglect: interviews
Reliance on interviews in hiring and admissions neglects sample size: you are considering one small encounter with someone, with lots of confounding variables. You should have more interviews, more standardized questions, multiple days, case studies. You could have 2 employees separately interview a series of candidates and rate how much they’d like to hire each candidate, then correlate the ratings. Low correlations mean lots of noise in the interviews, the raters’ judgments, or both; you need longer/more interviews, more ratings, and/or more disparity in interviewees’ performance. Disparities are more likely to be caused by chance when the sample size is small and differences in ability are small. But even with large sample sizes, chance is surprisingly powerful and can be responsible for large disparities in performance. Comparing NFL results for 2009 vs. 2010 shows a much smaller correlation than 2009-10 combined vs. 2011-12 combined, because a larger sample size means less variance and less noise. Also, the Moneyball strategy works less well in the playoffs because of the small sample size: more chance.
35
Other lessons from chance
1. Even large disparities may not need explanation. 2. Test for chance by correlating measures with themselves. 3. Smaller sample sizes = more chance.
36
What if games were shorter reading | Science isn’t broken reading
1. Smaller sample size (shorter games/series) means more variation, so the favorite is more likely to lose. 2. Science is hard; studies are hard, often fail, and involve subjective choices, but science isn’t dead. Researchers just need to do a good job and not fake things. There are lots of decisions to make in a study, and different researchers can reach different conclusions even when presented with the same data, as seen with the racial bias and red card soccer data.
37
Odds of lottery on 9/11/02 being 911
The odds of that exact number are 1/1,000. The odds of “something like that,” though, are much higher: other numbers would also have carried significance, like the flight numbers or a sequence in a row. The probability of an occurrence depends on how you define it. “That” ≠ “something like that.”
38
Testing if something is meaningful
It is usually wrong to calculate the odds of a pattern after you have noticed it, but this is often what we do: we notice an unusual pattern and then compute the odds of that exact thing happening. Instead, if you notice an unusual pattern and want to test whether it is meaningful, you can: 1. test the same prediction in a new data set (replication) —> next year, will we get 911 again; 2. test a new prediction from the same hypothesis —> will some other meaningful number come up in the next lottery.
39
Coincidences
A surprising concurrence of events perceived as meaningfully related with no apparent causal connection: something that made you think, “what are the odds of that?” For example: a mutual fund outperforming the market 5 years in a row. How do you know whether it is a good idea to invest in the fund? Notice the big difference between noticing after the fact that a particular fund outperformed the market 5 years in a row vs. predicting it in advance and then observing it.
40
Predicting vs observing and statistical significance
Statistical significance means that the probability that chance is responsible for a statistical difference or pattern is less than 5%. This hinges on researchers specifying their predictions (and methods of analysis) in advance. If you stumble on a result, you should replicate it rather than reporting it right away.
41
When significance test is meaningless
When people (or algorithms) are asked to mine data for patterns without any pre-specified hypotheses. Patterns will arise whenever you mine a large data set; for example, you might “discover” that single wealthy men over the age of 40 are more likely to buy your product. But for practical purposes, a relationship is only trustworthy if you can predict it in advance. Data mining can be valuable for generating hypotheses that you subsequently test in other datasets (replication), but don’t data mine to prove a relationship. There is an easy fix for replication: data mine a random subset, and if you find something interesting, see whether it replicates in the other subset —> essentially break the data in two, mine one half, and test the resulting hypothesis on the other half.
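A sketch of the split-half idea with pure-noise data (all attribute names and rates are made up): any segment “discovered” by mining one half should fall back toward baseline in the held-out half.

```python
import random
from itertools import product

random.seed(5)

# Pure-noise customers: (over_40, wealthy, single, bought). Purchases are random,
# so any "pattern" found by mining is a coincidence.
rows = [tuple(random.random() < 0.5 for _ in range(3)) + (random.random() < 0.2,)
        for _ in range(10_000)]
random.shuffle(rows)
explore, confirm = rows[:5_000], rows[5_000:]

def buy_rate(data, segment):
    hits = [r for r in data if all(r[i] == v for i, v in segment)]
    return sum(r[3] for r in hits) / max(len(hits), 1)

# Mine the exploration half: pick the attribute combination with the highest buy rate.
segments = [tuple(zip(range(3), vals)) for vals in product([True, False], repeat=3)]
best = max(segments, key=lambda s: buy_rate(explore, s))

# The mined segment's rate is typically inflated on the explore half and falls
# back toward the ~20% baseline on the confirm half, so we don't believe it.
print("explore:", round(buy_rate(explore, best), 3),
      "confirm:", round(buy_rate(confirm, best), 3))
```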
42
Common ways of p-hacking
1. Stop data collection if and only if p < .05.
2. Analyze many measures; report only those with p < .05.
3. Analyze many conditions; report only those that differed at p < .05.
4. Use covariates to try to get p < .05.
5. Exclude participants or trials to try to get p < .05.
6. Transform the data to try to get p < .05.
43
False positive psychology and simulating p hacking
Using even a conservative combination of common p-hacking techniques increases the false positive rate from 5 to 61 percent. The false positive rate climbs as you do more things, such as running two studies and dropping the one that isn’t significant, or collecting 20 observations per condition and then running 20 more: the false positive rate ends up far above 5 percent. Another move is trying different dependent variables until you find one that the manipulation affects significantly.
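A simulation (my own sketch, not the paper’s code) of one p-hacking move, optional stopping, using a z-test on data where the null hypothesis is true by construction:

```python
import random
from statistics import NormalDist, mean

random.seed(9)

def z_test_p(a, b):
    # Two-sided z-test for equal means; data are N(0, 1), so sigma = 1 is known.
    se = (1 / len(a) + 1 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

false_positives = 0
simulations = 2_000
for _ in range(simulations):
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if z_test_p(a, b) < 0.05:
        false_positives += 1
        continue
    # p-hack: not significant yet, so collect 20 more per condition and re-test.
    a += [random.gauss(0, 1) for _ in range(20)]
    b += [random.gauss(0, 1) for _ in range(20)]
    if z_test_p(a, b) < 0.05:
        false_positives += 1

# The null is true in every simulation, yet the rate comes out above the nominal 5%.
print(round(false_positives / simulations, 3))
```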
44
Ways to prevent false positives
Open science: researchers required to disclose everything they did. Preregistration: post exactly what they plan to do, especially for confirmation of hypotheses
45
Alpha, p level, statistically significant, false positive
Alpha level (significance level): the probability of rejecting the null hypothesis when the null hypothesis is true (i.e., what the false positive rate should be assuming the researcher runs only the pre-specified analysis). Researchers set this (usually .05).
P-value: the probability of obtaining a result as extreme as, or more extreme than, the result actually obtained when the null hypothesis is true.
Something is statistically significant when the p-value of your test is less than the alpha level.
False positive (Type I error): incorrect rejection of a null hypothesis (saying that a relationship or difference between groups is statistically significant even when it is actually due to chance).
46
Why the biggest winners are almost always lucky reading | The triumph of mediocrity reading
1. Luck is very important: a tie breaker, and more important the more people are involved. Example of meeting his sister’s neighbor from the same tiny town: unlikely for him specifically, but the odds that someone somewhere has a coincidence like this aren’t that crazy. The point of the story is that it was rare and lucky for him, but not that crazy in general.
2. Luck plays an important role: regression to the mean. Secrist thought regression to the mean applies only to human-controlled things like businesses, not weather, but he measured weather wrong, comparing places that are simply warmer or colder than others; a better test is to pick places very close by and see how they regress to the mean. Galton saw this with height regression, where tall parents are taller than their kids. “On pace” stats in sports are misleading: luck in the second half will get better or worse, so the final result differs from the pace.
47
Regression to the mean
When two variables are imperfectly correlated, extreme values on one variable will usually on average be associated with less extreme values on the other.
48
Example: struggling freshmen in a summer program raise their GPA from 2.5 to 2.9, while others who don’t do the program go from 3.3 to 3.2. What is performance composed of?
While the program may seem effective, we don’t truly know: it could just be regression to the mean or other things. There was no experiment and no random assignment, so we can’t tell; we’d need an experiment with control groups and random assignment to truly see. Performance is ultimately composed of 2 things: 1. Skill (anything stable: knowledge, motivation, etc.; age counts as stable because everyone is getting older) 2. Luck (anything random), which returns to average over time: average luck is the most typical thing and follows both bad and good luck. To test for luck vs. skill, look at the correlation between performance on exam 1 and exam 2, two things that measure the same skill set. We see regression to the mean in the negative correlation between exam 1 performance and improvement: if you did better on exam 1, on average you decline on exam 2 as your luck gets worse. We see this with the Madden cover and SI cover, where after a great year athletes do worse. It isn’t really a jinx, just regression to the mean (luck, including injuries, returning to average).
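A sketch of the skill-plus-luck model in Python (statistics.correlation needs Python 3.10+); the parameters are made up:

```python
import random
import statistics

random.seed(13)

# Performance = stable skill + random luck, per the card's model.
skill = [random.gauss(0, 1) for _ in range(5_000)]
exam1 = [s + random.gauss(0, 1) for s in skill]
exam2 = [s + random.gauss(0, 1) for s in skill]

# The bottom 10% on exam 1 improve on average on exam 2 with no intervention at
# all: their bad luck simply isn't repeated. That is regression to the mean.
cutoff = sorted(exam1)[len(exam1) // 10]
changes = [e2 - e1 for e1, e2 in zip(exam1, exam2)]
bottom_changes = [c for e1, c in zip(exam1, changes) if e1 <= cutoff]
print("avg change, bottom group:", round(statistics.mean(bottom_changes), 2))

# Improvement correlates negatively with exam 1 performance.
print("corr(exam1, improvement):", round(statistics.correlation(exam1, changes), 2))
```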
49
Fallacy of intervention
We don’t know; it could just be regression to the mean! For example, an investor picks 10 bad stocks in a row, then gets a tip and does better; or the crime rate is bad, a new police chief arrives, and it improves; or a manager retires after a great year and the next year goes badly (not necessarily the new manager’s fault, it could be regression to the mean); or someone’s performance falls right after you reward them. This is called the fallacy of intervention: interventions at extreme high points look like failures (e.g., a new CEO after a company has an extremely good year); interventions at extreme low points look like successes (e.g., firing a manager/CEO or hiring someone new, a policy aimed at correcting a recently extreme problem, any action aimed at overcoming extreme pain or disappointment).
50
Why do extremely intelligent women tend to marry men less smart and vice versa?
Regression to the mean!
51
Why do we make bad decisions
What you see is all there is: we don’t go much beyond what we’re given.
For example, ask a question with 11 choices where the 12th is “other.” Then ask the same question with the same first 6 choices where the 7th is “other.” In the first version, options 7-12 capture 54% of responses; in the second version, “other” captures just 7%. You are more inclined to think of something when it is right in front of you —> what you see is all there is. You don’t think of the missing options.
Another example with purchasing flight insurance: presented with 3 options for what your coverage includes, (a) an act of terrorism, (b) a non-terrorism-related act, (c) any cause, you would expect willingness to pay for (c) to equal (a) + (b). In reality, they are all roughly the same, because when presented with (c) you don’t think of it as (a) + (b), you think of it as just (c). The same thing happens with cell phone plans: you evaluate them the way the data is presented to you.
What you see is all there is: preferences depend on how options are presented to you. We don’t go much beyond the information given, we don’t think through all the possibilities, we don’t reframe things, we are heavily dependent on context, and we are strongly influenced by the first thing that comes to mind.
52
Heuristics
Mental shortcuts/rules of thumb for making judgments. Heuristics save time and are good enough much of the time, but they are prone to systematic error. For example, “more is better” works often, but if overused it leads you astray: more isn’t always better.
53
Availability heuristic
We overestimate the likelihood, frequency, and causal impact of things that spring to mind, especially things that: 1. have come to mind frequently or recently, 2. are the focus of our attention.
Ex 1: the insanity plea. Americans think it happens far more often than it actually does because it is overrepresented in the news.
Ex 2: you think more 7-letter words end in -ing than match the pattern of five blanks, then n, then a blank, because the -ing words are more available (even though every -ing word also fits that pattern).
Ex 3: we overestimate causes of death like starvation and underestimate causes like respiratory infections (pneumonia).
Ex 4: Americans think there is more crime than there is because of the media.
It becomes a problem when you’re influenced by it. For example, hearing only about lots of people getting rich by investing in real estate makes it seem impossible for RE values to decline; hearing only about the economy getting worse may actually make the economy get worse (for example by encouraging businesses to cut costs and forgo risks).
54
Representativeness heuristic
We judge the probability that A belongs to B based on how similar A is to the prototype of B: the more A resembles B, the more likely A seems to go with B (like goes with like). The key problem with the representativeness heuristic is base rate neglect.
Ex: a description of Susan, who is very shy and withdrawn, helpful, but with little interest in people or the world of reality, with a need for order and structure and a passion for detail; a meek and tidy soul. Is she a lawyer, librarian, or teacher? The description suggests librarian, but think about how many fewer librarians there are than lawyers or teachers. Given those base rates, she may be less likely to be a librarian even though her description matches the librarian prototype.
We saw this base rate neglect with the cancer cluster myth. Representativeness also shows up in: 1. sample size neglect (we take small samples of information and draw strong inferences); 2. misconceptions of chance (if something is random, we expect it to look random: the gambler’s fallacy); 3. regression to the mean (if something performs well, we treat it as representative of a superior performer, seeing a rare event and assuming it is normal while ignoring chance).
55
Conditional probability
More base rate neglect shows up with conditional probability: you need to condition on all of the relevant facts. For example, with O.J. Simpson, the relevant quantity is the probability that the husband murdered her given that the husband abused her AND that she was murdered. Conditioning on both facts gives a very different probability than asking how often abuse leads to murder.
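A toy calculation with made-up numbers (not the actual statistics from the case or the course) just to show why conditioning on both facts changes the answer:

```python
# Hypothetical counts for illustration only; every rate below is assumed.
abused_women = 100_000
p_killed_by_abuser = 0.0004         # assumed annual rate, made up
p_killed_by_someone_else = 0.00005  # assumed annual rate, made up

killed_by_abuser = abused_women * p_killed_by_abuser
killed_by_other = abused_women * p_killed_by_someone_else

# P(abusive husband did it | she was abused AND she was murdered):
p = killed_by_abuser / (killed_by_abuser + killed_by_other)
print(round(p, 2))  # very different from the tiny rate of abuse ending in murder
```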
56
Anchoring
People’s estimates of unknown quantities are easily biased by the values they consider, even when those values are obviously arbitrary and irrelevant.
Ex: ask people how much a painting is worth; if you give them a suggested number, they will guess closer to that number.
Ex: real estate agents set the fair purchase price differently depending on the list price they are given. They naturally look for things that confirm the price they’re given and turn a blind eye to the other side.
The estimate of the unknown quantity moves toward the anchor. During a negotiation, first offers are anchors because they set the tone and adjustment away from them is insufficient. Even if you know the number is meaningless, it still works as an anchor: when the true quantity is unknown, the number is in your mind, biasing your estimate. We see this with wine prices and the last digits of your SSN, and with estimates of what a microwave costs.
57
Anchoring conflicts of interest
Estimators were paid based on how accurately they estimated the value of a jar of coins. Advisors, who knew the range of possible values but not the exact amounts, gave them advice before each estimate. There were 3 groups of advisors: 1. paid based on estimators’ accuracy (estimators knew this); 2. paid based on estimators’ upward bias (estimators didn’t know); 3. paid based on estimators’ upward bias (estimators did know). Advisors suggested higher values in group 3 than 2 than 1, and estimators guessed higher in 3 than 2 than 1. Even when estimators knew about the advisor’s incentive, they still gave higher estimates. So anchoring works because you trust the number given to you and adjust insufficiently, even when you know there is likely a bias (group 3)!
58
Why are anchors so effective
Anchors are sometimes believed to be informative, signaling that the true answer is close by. In cases of extreme uncertainty, any value that comes to mind will seem plausible. Anchors act as defaults: we need good reasons to give answers that are far away from them, but not to give answers that are close to them. Even when anchors are obviously uninformative and known to be biasing, we have no way of knowing how to undo the bias.
59
What’s the trouble reading | Dropping anchor reading
1. Heuristics affect medical care. Example of relying on what is right in front of you: a guy has chest pain and the doctor thinks he is fine, taking shortcuts and using rules of thumb and representativeness (what’s typically the case). A Navajo woman from a village where many people have pneumonia actually has something else, but the doctor thinks pneumonia given availability and confirmation bias, selectively seeing only the symptoms that match that condition. Affective error: deciding based on what you wish were true.
2. Anchors carry value and influence everyone. Example of real estate agents appraising the same house differently based on the list price.
60
Hindsight bias
Once an outcome has occurred, we overestimate the likelihood that we would have predicted that outcome in advance. Outcomes seem less surprising than they should, and more controllable than they actually were. We are more likely to assign blame to those who failed to predict an outcome that was actually hard to predict. Managers should judge the quality of employees’ decisions before outcomes are realized, and employees should make sure that managers do so. Contrast with outcome bias, where you judge the decision by what happened: hindsight bias is saying you should have been able to predict that it would happen, while outcome bias is evaluating the quality of the decision by its outcome.
61
Curse of knowledge
When we have private information, we expect the uninformed to behave as if they know what we know. Example with “name that tune”: tap the rhythm of a song and predict the percentage of listeners who will guess it. Tappers estimate 50 percent; only about 3 percent actually do, because you overestimate how much others know. Other examples: teachers thinking their tests are easy, you thinking your essay is clearer than it is, or the butterfly ballot in the 2000 election, where the designers didn’t even realize it could be confusing.
62
Attitude projection
We tend to project our own attitudes, beliefs, and experiences onto others. Survey where we guess the percentage of other people in the class who like various things. We are good at projecting when guessing the attitudes, beliefs, and behaviors of others who are similar to us; projection is bad when guessing the attitudes, beliefs, and behaviors of others who are dissimilar.
63
Solo comparison effect/competitor neglect
You fail to think about what your competitors are doing. For example, in an eBay auction you want to place your bid at the most popular ending time; the problem is you are neglecting your competitors, who will also be bidding at that same time. You need to consider what life is like for other people, not just your own performance: if the test is easy for you, it is probably easy for others too. Yet people are more willing to bet on an easy trivia question than a hard one, even though everyone gets the same question.
64
Lesson number 1
We fail to fully appreciate what life is like for those who do not share our knowledge or perspective
65
Lesson 2
We often fail to attempt to appreciate what life is like for other people
66
Lesson 3 and corollaries
Lesson 3: we often assume that we see the world as it is. And that those who do not share our knowledge or perspective are biased, ignorant, or uninformed. 3a. We assume those who are uninformed can be persuaded by informing them 3b. We assume those who are not persuaded by information must be biased or stupid. 3c. Because those who hold different beliefs must be biased or stupid, we tend to reject any proposals made by such persons.
67
Why is the media biased against us not them reading | Connecting the dots reading
1. On hot-button issues, people identify with their in-group; both groups see bias in the media and think it is working against them.
2. Hindsight bias. After events like the Yom Kippur War and 9/11, we look back at the facts leading up to them and conclude we should have predicted them. But at the time there were lots of false messages, and you couldn’t know which were real. You need to see through the noise and not overreact, which is hard. After the event you can connect the dots and think you should have seen it coming. Creeping determinism: once the outcome is known, what happened seems as if it had been inevitable. Trying to fix the last failure can create the opposite problem of overreacting to noise.
68
The hostile mediator effect | Hostile media effect
Third-party mediators are often perceived as biased by both parties, each seeing the mediator as having been more receptive to the concerns of the other side. The hostile media effect is the same thing but with the media: both sides think the media is biased against them. We see this where pro-Israel Jews think the media is biased against Israel and pro-Palestine Arabs think it is biased against the Palestinians.
69
Reactive devaluation
Ex: pro-Israel Jews and pro-Palestine Arabs read the full text of an actual Palestinian peace proposal and are told the author was either Israeli or Palestinian. Reactive devaluation shows how Jews rate the proposal worse when told it came from a Palestinian source than from an Israeli source, even though it is the same text, and vice versa for Palestinians (they rate it as more pro-Palestine when attributed to a Palestinian author). People look for confirming evidence, not disconfirming evidence. Confirmation bias is favoring information that confirms our existing beliefs; it influences our information search, our interpretation of information, and our recall of information.
70
Confirmation bias
We tend to favor information that confirms our existing beliefs. This influences our information search, our interpretation of information, and our recall of information.
We see this in the 2-4-8 task: given the sequence 2, 4, 8, people guess the rule is “numbers multiplied by 2,” but the actual rule is broader (any increasing sequence). People think the narrower rule because it is right in front of them and confirms their existing belief, so they only test sequences that fit it.
Something similar shows up with the study of sane people admitted to mental asylums, who are diagnosed even though they are normal.
Another example: forgetting that (-2) squared is 4, not just that 2 squared is 4.
71
Visual perception personality inventory and the Barnum effect
The visual perception personality inventory is an illusion exercise: based on something arbitrary, such as whether you see a duck or a rabbit in an image that could be either, you are given a personality description. The descriptions for both buckets are general and vague, yet people see themselves in whichever bucket they are given. The Barnum effect is the tendency for people to accept information that is supposedly tailored to them, like personality tests, as accurate even when the descriptions are vague enough to apply to most people.
72
Name the bias: 1. Dylan wants to surprise his girlfriend. He is nervous because he thinks she has it figured out even though she has no idea. 2. There is a new craze over the Popeyes chicken sandwich, and Tom decides to open a Popeyes himself. 3. A married couple goes to couples counseling because they are fighting, but both think the therapist is biased.
1. Curse of knowledge 2. Competitor neglect/solo comparison effect 3. Hostile mediator effect.
73
4. Marshall is organizing the office holiday party; he wants a DJ over a band because he thinks nobody likes bands at this kind of event. 5. Mia wants to purchase a watch; the eBay listing says the retail price is $350, so she thinks that’s slightly too high but reasonable and bids $300. 6a. Barry is an amateur actor deciding between roles on 2 different TV shows; after the pilot, the show he chose gets cancelled. “Ugh, I made a bad decision,” he thinks. 6b. After more thought he realizes the director hasn’t had much success and the other costars were too new. “I totally knew this was going to fail.”
4. Attitude projection 5. Anchoring 6a. Outcome bias 6b. Hindsight bias.
74
Ways to avoid: 1. Hindsight bias 2. Curse of knowledge 3. Competitor neglect and solo comparison effect 4. Anchoring 5. Attitude projection 6. Hostile mediator/media effect 7. Confirmation bias
1. Make predictions ahead of time; recognize that things are more obvious after the fact. 2. Pilot test; use more concrete language. 3. Think a step ahead; realize that the way things affect us can affect others similarly. 4. Decide in advance what you think is right. 5. Get diverse opinions; ask the people you want to know about instead of assuming. 6. Recognize that your bias makes neutral parties seem biased; establish common ground first. 7. Try thinking the opposite / look for disconfirming evidence.
75
Dr. Drug Rep | This article won’t change your mind readings
1. A doctor working for a drug company is paid to give talks to other doctors promoting a drug as an antidepressant. He focuses only on the good findings and presents the data in faulty ways. He has good intentions, but because he is being paid to endorse the drug, he turns a blind eye to the bad data and doubles down on evidence that supports his point. Eventually he recognizes it and quits. He went in genuinely believing the drug works, but then looked only for the good parts and ignored evidence against the drug. The problem with incentivizing people to be less biased is that they already think they’re unbiased.
2. Cognitive dissonance is the extreme discomfort of holding 2 thoughts in direct conflict in your mind. People believe they are right even when the evidence is against them: informational silos, pushing back on or refusing to believe contradictory facts. Groups can help slightly, but this is hard to overcome. People accept their beliefs as fact and can’t take in facts against them.
76
Princeton Harvard injury example
A player on the Harvard football team gets injured; a Yale player insists he is the one who injured him, even though there is clear video evidence it was someone else. But he is genuinely convinced he did it: he has a reason to believe he did it, so he does believe it.
77
Princeton and Dartmouth students with penalties
Princeton and Dartmouth students watched the two teams play each other in football; Dartmouth was penalized 70 yards and Princeton 25. Each set of students recorded the number of penalties committed by each team. Dartmouth students said it was even, at about 4 penalties each; Princeton students saw far more penalties on Dartmouth than on Princeton. Your biases shape what you perceive: you want to see the penalty on the other side, not on your own.
78
When prophecy fails
According to the cult’s leaders, superior beings from another planet informed them that humans would be destroyed by a flood on December 19th, and only true believers would survive. The research question: what happens on December 20th when there is no flood? The believers say there was no flood because they believed, and their faith saved the world from destruction.
79
Ineffectiveness of mixed evidence
When people see evidence both for and against their position, they become even more extreme in their beliefs. For example, students read studies for and against capital punishment: one study compares across states and finds capital punishment works; the other compares across time and finds it doesn’t. The students become more critical of the disagreeable study and hold more extreme beliefs after exposure to mixed evidence!
80
How do we subject claims to scrutiny
In a biased manner! If we like a claim, we ask ourselves “Can I possibly believe this?” If we don’t like it, the question is “Must I necessarily believe this?” There is a huge difference between “can I” and “must I”: a massive bias in the evaluation of studies. Can I believe it: biased memory search for consistent info, partial or truncated search for info, superficial processing of info —> trying to get to yes, looking for reasons why it is true. Must I believe it: biased search for disconfirming info, demanding more evidence and thorough consideration, looking for exceptions to the rule —> trying to get to no, looking for reasons why it is not true. This is different from confirmation bias because this is motivated reasoning: you want to believe something. Confirmation bias isn’t necessarily about wanting something to be true, but rather about believing it ahead of time and then seeing it confirmed.
81
More examples with can I vs must I | Sports gambling, enzyme study
Enzyme study: participants were told that if they have a certain enzyme deficiency, they are likely to get pancreatic disease. Those diagnosed with the deficiency rate it as less serious, more prevalent, and the test as less accurate than those who aren’t diagnosed. You don’t want to believe it, you want to think others have it as well if you’re diagnosed, and you want multiple tests before being sure you have it. Sports gamblers spend more time explaining losses than wins: wins are wins, but losses are “near wins.” They blame luck for losses and credit skill for wins. Extremely general: we explain away our failures and take credit for our successes. This is called self-serving bias.
82
Motivated forensics
If you work for the prosecution, you are more likely to conclude the defendant is psychotic and a likely recidivist. Because people believe they are reasoning objectively toward a desired outcome, they do not realize they are not being objective. This is why self-serving biases are often unconscious: you are not choosing to do this, and you think you are being objective, but you are naturally biased. Motivated reasoning is a function of desire and ambiguity: you want to prove something, the evidence is ambiguous, and you interpret it in your favor.
83
Motivated auditing
Good accountants prefer whichever specific practices produce better results for their clients; they prefer overly aggressive auditing practices that benefit clients. A possible solution is quality assessments that force them to justify the chosen practice, but when they do this, they are even more likely to choose the client-preferred option: having listed reasons makes it feel appropriate, so they feel more justified in doing it.
84
Rewards and punishments and disclosing conflict of interest
What if you reward doctors for being more objective or punish them for being biased? The problem is they already think they are unbiased, so it won’t do anything. And if they are required to disclose a conflict of interest to patients, doctors might feel licensed to act in their own interest, and patients may feel pressure to help the doctor or to make a choice that implies they trust the doctor. Similar to the anchoring conflict-of-interest study, where estimators guess higher even when told the advisor is paid based on how high the guess is.
85
Motivated researching
Looking to prove something is significant (p < 0.05). Suppose you ask patients 5 questions about how effective a drug is; 4 of the results aren’t significant but the 5th is, so you drop the first four results and report only the 5th. That is motivated research: focusing only on the significant result. Since researchers are strongly motivated to find statistically significant results, peer review exists to catch this: people with no stake in a result review the methods that produced it. The problem is that the researcher here would report only Q5 and not the rest, so the reviewer can’t even see that Q1-Q4 existed! The person with the conflict of interest has already decided, using motivated reasoning, which depression measure is most important.
86
How to fix this
1. Preregistration (commit ahead of time to the analyses you will run): reduces ambiguity.
2. Transparency: require authors to reveal their methodological details so peer review can work; for example, they must report everything they measured and every condition they ran.
Also, don’t be a jerk to people whose beliefs you are trying to change: if someone thinks something is morally wrong, they will also be more likely to believe that it is ineffective, and once people have publicly committed themselves to a position, they have been stripped of the capacity to be objective.
87
Motivated reasoning takeaways
1. Motivated reasoning is a function of desire and ambiguity. 2. Self-serving bias need not reflect conscious corruption. 3. We apply different standards to claims that we like (“can I believe it”) vs. claims that we dislike (“must I believe it”); don’t be a jerk to people you’re trying to persuade. 4. You can’t overcome unconscious self-serving biases by increasing punishments for bias or rewards for objectivity; rather, you need to decrease desire, ambiguity, or both.
88
Are you smarter than a television pundit reading | Delusions of success: how optimism undermines executives decisions reading
1. Know how to think like a fox: lots of research and scrappy learning. Many pundits are wrong, and they’re usually hedgehogs making bold predictions (type A personalities). Three ideas from the 538 founder: 1. think probabilistically; 2. be willing to change your forecasts; 3. look for consensus.
2. Executives overestimate and are over-optimistic: anchoring, the inside view, rose-colored glasses, competitor neglect, emphasizing the positive, organizational pressure. The outside view is how others (and similar past cases) would see it, and you should use it, not the inside view, which carries your biased expectations.
89
Illusion of explanatory depth
People asked to draw a bicycle find it a lot harder than they expect. Your rating of your own understanding drops after generating an explanation. Illusion of explanatory depth: people feel they understand complex phenomena with far greater precision, coherence, and depth than they really do.
90
Confidence game
Game in class where people give 90% confidence intervals for a bunch of random quantities. People are often very far off: far fewer than 90% of their intervals contain the true values.
91
Overconfidence and the 3 varieties
Unwarranted confidence, typically arising from many of the decision making errors we’ve discussed in this course. 1. Overplacement: you think you’re better than average 2. Overestimation: you think you’re better than you actually are (or that things are better than they actually are) 3. Overprecision: you think you know more than you do
92
Overplacement
Thinking you’re better than (essentially identical) others; similar to competitor neglect. You think you’re good at easy stuff and that you’re less likely than other people to experience negative events, especially ones you can control. Most people see themselves as better than average. Overconfidence in assessing risk is partly why simple informational campaigns do not eliminate risky behaviors: “sure, others might be at risk, but I’m not.”
93
Overestimation and the planning fallacy
Overestimation is when you think you’re better than you actually are. The planning fallacy is when people underestimate how long it will take them to complete tasks even when they know that such tasks usually run late. For example, the Sydney Opera House was originally estimated to be complete in 1963 for $7M; instead, a scaled-down version was completed in 1973 for $102M. The same thing shows up when asking students how long their projects will take: projects wind up taking longer on average than the students’ worst-case predictions. Even when people recognize this, and admit that projects usually take longer, the planning fallacy still happens.
94
What causes the planning fallacy
1. It is often easy to imagine a positive scenario 2. We ask “how will I accomplish this?” not “how won’t I accomplish this?” 3. We fail to appreciate that many consecutive high-probability steps have a lower joint probability 4. We fail to consider base rates 5. We consider past failures “unlucky,” “unusual,” or “irrelevant”
95
Inside view vs. outside view
Inside view (“this time is different”): focuses on the task’s unique features; you imagine how the event will unfold, and if imagining is easy (and it usually is), you become overconfident. Outside view (“what usually happens”): focuses on the similarity between this task and past tasks; relies on base rates (what usually happens) to make predictions. More accurate planning.
96
Overprecision
People’s confidence intervals for their estimates of unknown quantities tend to be too narrow: we don’t appreciate how much we don’t know. We see this with the game show contestant who is overconfident that his answer is right; the other contestant believes him, and they lose. We also see it with our 90% confidence intervals, few of which contain the true value. Expertise: just because you know the field, you can still be wrong, and experts expect a smaller margin of error than actually occurs, so an expert’s 90% confidence interval may be even worse. Even experts are wrong a lot. We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. We cannot take expressions of high confidence at face value: people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness.
97
Domain expertise vs prediction expertise
Knowledge in a domain does not easily, or often, translate into the ability to make accurate predictions of events that are ultimately uncertain. However it does translate into unwarranted confidence in this ability. Super forecasters are people who are found to be very good at making forecasts.
98
Why is learning not inevitable and what kind of feedback is required so we can learn
Experience is inevitable, but learning is not, because the real world often provides very noisy, imprecise feedback. That lets us interpret the data the way we want to, which reinforces our initial predictions. Hedgehogs, for example, think they’re right, but experience alone doesn’t mean you learn from it: biases creep into how feedback is interpreted. For learning, feedback must be: 1. Precise: if feedback is ambiguous or noisy, it is difficult to learn the correct rule and easy to explain away failures. 2. Timely: if feedback is not immediate, causal attributions become more difficult. 3. Repeated. If someone is confidently telling you something, you can judge their likely accuracy based on what you know about the feedback they’ve gotten.
99
Causes of overconfidence
1. Availability: we tend to imagine paths to success but not paths to failure; scenarios we don’t think of are not accounted for.
2. Anchoring: we anchor on our best (optimistic) forecast and adjust insufficiently for realistic constraints.
3. Failure to appreciate the role of chance: we tend to think the world is more controllable and predictable than it is.
4. Solo comparison effect: we tend to focus on our own abilities while neglecting the competition (for overplacement).
5. Motivated reasoning: we rationalize that success is likely and failure is not. Confidence is persuasive/rewarded.
100
Costs and benefits of overconfidence:
Bad when predicting or planning for the future, when deciding what to do, when we need to be receptive to contrary evidence, when others’ overconfidence persuades us, and when it leads us to pursue costly opportunities. Good when we are trying to motivate ourselves, when we are trying to be persuasive (some of the time), and when it leads us to pursue costless or smart opportunities. In general, being constantly overconfident is bad.
101
Sample exam question 1: Imagine a study with the research question: are men taller than women? You want to know whether the finding that men are taller than women will replicate. A. Researcher A decides to randomly sample 1 man and 1 woman for the study. Would you bet a lot of money on the finding replicating in this study? B. Researcher B decides to randomly sample 1,000 men and 1,000 women for the study. Would you bet on the finding replicating? C. Researchers A and B each run their studies 30 times. Which researcher is more likely to have studies in which women come out taller than men?
A. No: small sample size, so a lot more chance. B. Yes: the larger sample size should mimic the population. C. Researcher A: with the small sample size there is more chance, so you are more likely to get a result where women come out taller than with the larger sample.
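A simulation of parts A-C under assumed height distributions (made-up but realistic means and SDs):

```python
import random
import statistics

random.seed(21)

# Hypothetical height distributions in cm.
def men(n):
    return [random.gauss(175, 7) for _ in range(n)]

def women(n):
    return [random.gauss(162, 6) for _ in range(n)]

def replicates(n):
    # Does a sample of n men and n women show men taller on average?
    return statistics.mean(men(n)) > statistics.mean(women(n))

for n in (1, 1_000):
    wins = sum(replicates(n) for _ in range(30))
    # With n = 1 the finding sometimes fails to replicate; with n = 1,000 it
    # essentially always does, because larger samples leave less to chance.
    print(f"n={n}: replicated in {wins}/30 studies")
```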
102
2. The mayor of LA thinks there are too many firemen, because sending more firemen to a fire seems to result in more damage. What’s the problem with this reasoning?
There is a third, confounding variable: larger fires draw more firemen, and those fires are inherently more dangerous and destructive. So it isn’t the extra firemen causing the harm; rather, the fires that require more firemen are inherently worse.