
exam 2 Flashcards

(93 cards)

1
Q

what are the two possibilities of what is stored in learning

A

1) stimulus -> response association (habit)

2) stimulus -> stimulus association (expectation of the future)

2
Q

what are the two possibilities of what is stored in learning, shown as examples in terms of 1) T -> food and 2) stop sign -> brake

A

T -> food

1) S-R possibility: T -> salivation
2) S-S possibility: T -> food

stop sign -> brake

1) S-R: stop sign -> “hit brake”
2) S-S: stop sign -> cars coming, could hurt people, etc

3
Q

why can dogs learn the L-T association (evidence for S-S learning)

A

sensory preconditioning: learning the relationship between two sensory things before ever presenting a US

dogs are not blind reflex beasts; they can look forward to future events and learn L-T

4
Q

why can pigeons learn the different color light associations (evidence for S-S learning)

A

US devaluation study: taking away the value of something to reveal the understanding of two sensory stimuli associations

5
Q

what is natural to think happens during extinction in terms of T-food

A

1) T -> food
2) T -> nothing
so people think that T does not = food (loss of association/connection)

THIS IS WRONG

6
Q

what are the examples of evidence that show associations survive extinction

A

1) spontaneous recovery: response comes back after time apart
2) reinstatement: adding the US back in after some time
3) disinhibition: changing the setting
4) renewal: changing the context

7
Q

what are the two theories that answer the question: if extinction doesn’t erase CS-US association what does it do?

A

1) loss of attention to the CS (in other words, habituation); there is recovery of attention when aroused (the association is never broken)

T-food | T-nothing | food | T?
reminds you of last time you got food or arouses you

2) competing memory theory (two memories stored at once)

T-food & T-nothing

8
Q

what are some psychopathologies attributed to classical conditioning

A

1) specific phobias: something happens to you and you’re always afraid of that thing
2) relapse after drug use: you previously associated certain CS with drugs, then you go through recovery, after seeing CS you relapse
3) child sexual abuse: most predators suffered abuse too
4) PTSD: certain CS remind you of US
5) mood disorders

9
Q

what are the conditioning-based treatments for drug relapse

A

1) lab extinction
2) give med that blocks drug addiction
3) aversion therapy
4) substitution therapies
5) teach new competing CRs

10
Q

what are the problems with lab extinction

A

responses can recover; the real world is not the context of extinction

11
Q

what are the problems with giving a med that blocks drug addiction

A

when people get an implantable naltrexone pellet, they try to take it out themselves

12
Q

what is aversion therapy

A

instead of drug cues -> nothing, drug cues -> bad things

13
Q

what are some examples of aversion therapy and some problems

A

1) cigarette cessation: sit in room filled with cigarette stuff and smoke until sick
problems: people have to volunteer for this; the real world is also not the context of extinction

2) alcohol abuse: antabuse, a chemical that reacts with alcohol to make you nauseous
problems: you have to voluntarily take the tablet

14
Q

what is substitution therapy

A

CS-> crave, so take a different US

15
Q

what are some examples of and problems with substitution therapy

A

1) heroin: methadone is an opioid like heroin but less powerful
2) nicotine: Chantix has the same action on the receptor site as nicotine but is less powerful

problems: these don't really get rid of the addiction

16
Q

what are the three ways to teach new competing CRs

A

1) behavior alternative: teach things to do in response to craving a drug (crave and run)
2) imagery: teach that every time you think of a cigarette you think of diseased lungs and people
3) waiting: want cigarette, but wait 5 minutes, then wait 5 more to eventually thin out use

17
Q

what are the two theories that can explain the translation of knowledge into behavior

A

1) stimulus substitution

2) CS allows you to adapt or prepare for the US

18
Q

what is stimulus substitution

A

CS just “stands in” for US, causes same response

ex. T-> food -> salivate

T -> (food) -> salivate (the tone takes the place of the food)

19
Q

what are the examples of the CS allowing you to adapt or prepare for the US (CRs help you prepare)

A

1) fighting fish (show on paper)
2) conditioned mating (show on paper)
3) conditioned snuggling (show on paper)

20
Q

what is compensatory conditioning

A

a form of the conditioned response in which the CR is adaptive;
shows the association is made (the CR is the opposite of the drug response)
homeostasis: maintaining constant internal conditions

21
Q

what is an example of conditioned opposite response in Pavlov with dogs and with drinking in the real world

A

show on paper

22
Q

what are CRs technically

A

the "withdrawal symptoms"
ex. a hangover is a mini withdrawal from alcohol, which is why drinking the next morning makes you feel better

withdrawal symptoms are the exact opposite of the drug symptoms

23
Q

what is tolerance

A

tolerance = UR + CR

24
Q

what is operant conditioning and how is it different than classical conditioning

A

classical: stimulus -> stimulus association
operant: response -> stimulus association; your response or behavior is the cause of a stimulus coming (your behavior controls the outcome)

25
define positive, negative, reinforcement, punishment (will be on test!!!!)
positive: the behavior makes a stimulus happen (something is added)
negative: the behavior makes a stimulus stop or not happen (something is taken away)
reinforcement: anything you do to make the response go up
punishment: anything you do to make the response go down
26
what are examples of all of the types of reinforcements and punishments in terms of rats BP for food
1) positive reinforcement: BP -> food
2) positive punishment: BP -> shock
3) negative punishment: free food, BP -> no food
4) negative reinforcement: free shock, BP -> turns shock off
27
what are real life examples of all the types of reinforcements and punishments
1) positive reinforcement: good behavior -> cookie
2) positive punishment: misbehave -> yell
3) negative punishment: bad behavior -> take phone away
4) negative reinforcement:
- escape: bad date, safety call -> leave, so you set up a safety call next time
- avoidance: ask friends about the boy, they say no -> avoid the bad date
28
what are the techniques for studying instrumental conditioning
1) discrete trials: counting trial 1, trial 2...
2) free operant techniques: rat BP, pigeon pecking at a light
29
what are some examples of discrete trials
1) straight runway: the rat is in a straight runway with hidden food at the other end; collect your data by measuring speed or time to get to the end
2) T-maze: the rat is in a T-shaped maze with food in one of the corners; collect your data by looking at where the rat turns (% of correct turns)
30
what are the pros of using free operant techniques (Skinner)
1) more applicable to real-life scenarios; the species can make choices freely
2) define behavior based on how it operates in the environment; behavior is defined by the effect it has on the world (the outcome)
31
how do you get the rat to press the first time?
reward the rat when he does behaviors close to the desired behavior (shaping by successive approximations) ex. facing the lever (reward), walking toward the lever (reward), sniffing the lever (reward), etc.
32
what is a real life example of operant responses
the tenure system: when you're done with grad school you need publications on your resume, so you do small, safe studies, not big risky ones; this is an operant contingency
33
what are the two developed theories to explain reinforcements (neither of these are right)
1) drive reduction theory: reinforcers are events that satisfy biological needs/drives (hunger, thirst, sexual arousal, pain avoidance)
2) the "pleasure theory": reinforcers are stimuli that release dopamine in the "pleasure center"
34
what is the study that disproves the drive reduction theory
if reinforcers are events that satisfy needs, then punishers are events that make needs worse
put a male rat in a straight-line cage with a receptive female rat on the other side; have the rat run across and start mating with the female, but take him out in the middle of mating
drive reduction theory says the rat will eventually stop running, but he actually runs faster
sometimes increases in drive states are rewarding (drive reduction is not right)
35
what are some examples of studies that disprove the "pleasure theory"
the problem with the "pleasure theory" is sensory reinforcement
1) monkeys press a button to open a shade and look at the lab even though it's not pleasurable
2) rats will press a bar to run through an empty maze; no biological pleasure
3) infants will suck a pacifier harder to view vacation slides, which are not pleasurable
4) humans will spend money to solve crosswords or go to horror films; none of these are pleasurable
36
what does the relative reinforcer theory (Premack principle) consider compared to the drive reduction and pleasure theories
it considers preferences between events
BP -> food
- given the choice between bar pressing and eating, rats would pick eating
- bar pressing is the less probable behavior and eating food is the more probable behavior
- so reinforcement is when something you're less likely to do leads to something you're more likely to do
- punishment is when something you're more likely to do leads to something you're less likely to do (BP -> shock)
37
what is an example of the relative reinforcer theory in rats choosing eating > running > grooming
1) teach the rat grooming (less) -> running (more); this is reinforcement, so grooming will go up
2) teach the rat eating (more) -> running (less); this is punishment, so eating will go down
38
what is a key principle of the relative reinforcer theory
the same event can be either reinforcing or punishing (depends on the situation)
39
what are some examples of the relative reinforcer theory in real life
1) reinforcing autistic children to complete math problems: complete math problems (less) -> cookies (more)
**in order to find out what kids like so it can be used as a reinforcer, do a "free observation"**
2) training preschoolers to sit still: sit still (less) -> run and scream like savages (more)
40
how does social comparison influence the reward value of something
expectation: it's not what you get, it's what you get compared to what you expect; it is the intrinsic nature of all creatures to react this way to reward
41
what is a human study that shows the effect of social comparison
humans placed in an fMRI (which measures brain activity) were tasked with counting how many dots were on a screen and told another person is doing the same thing. a participant gets told how well both of them are doing, and everyone is told they get paid different amounts for performance (even though everyone actually gets paid the same)
results: the biology of reward; when someone did better than the other person there was a BIG increase in the pleasure center, a small increase when they did the same, and a decrease in the pleasure center when someone did worse than the other person
42
what is a monkey study that shows the effect of social comparison
monkeys give a stone to the experimenter to get food. when both monkeys get a cucumber they are equally happy. when one monkey gets a cucumber and the other one gets a grape, the cucumber monkey is not happy at all and throws the cucumber at the experimenter
43
what do people use in the real world to try and guide social behavior and what are the problems with it
punishment
1) creates anger/aggression
2) hard to deliver on-time punishment (if you want behavior to be controlled by punishment, the punishment needs to happen right away)
3) promotes punishment-avoidance, not the behavior of interest (ex. not picking up dog poop when it's dark so no one can see)
44
how can we use reinforcement to guide social behavior (and examples)
sensory reinforcement
Volkswagen Fun Theory:
1) walking up stairs: make the stairs a "piano" and people will take the stairs more than the escalator
2) using the bottle bank (recycling): made it look like an arcade game and people get points for putting in bottles
3) not littering in the park: made the trash can sound extremely deep; people threw out trash and began to pick up other people's trash
Cancun:
1) underwater museum; people go underwater to look at the sculptures rather than go to the coral reef and damage it
45
what is a condition for learning choice
timing: the delay between response and reward; long delays reduce the effectiveness of the reward
46
show the experiment of the pigeons with the timing delay between response and reward
show experiment on paper
if pigeons were rational creatures they would choose the light where, out of 8 seconds, 4 seconds were food (compared to 2 seconds of food), but they choose the other light because the initial delay affects their ability to make choices
47
why does initial delay affect the ability to make choices (2 wrong reasons, 1 right)
1) maybe they associated the food with something in the four-second delay
2) maybe they forgot they pecked green (STM)
right: 3) delay discounting: rewards in the future have less value
48
what are the two ways to bridge the delay gap in delay discounting
conditioned reinforcement and immediacy
49
what is an example of a conditioned reinforcer when it comes to rats and BP
L -> food | BP -> food OR BP -> light
both of these will work because L is the conditioned reinforcer (it gets its value by being associated with food)
money is the conditioned reinforcer in humans
50
what is an example of conditioned reinforcement and immediacy in the real world (Toyota Prius (hybrid))
Prius -> pay 5k more -----> better world, fuel savings
Corolla -> pay 5k less -----> worse pollution, spend more
conditioned reinforcement or social reward: Toyota gave the Prius social reward so that people would buy it right away even though the positive effects (better world, fuel savings) are not seen right away (e.g. distinct shape and it says HYBRID)
immediate feedback: the car shows MPG depending on MPH, so you get rewarded for the behavior of good driving (braking, accelerating more slowly, etc.)
51
so, what are the two conditions in order for bridging the delay gap to be successful
1) delay: responses need immediate consequences
2) pattern of reward that comes after the response
52
what are the two kinds of schedules of reinforcement
ratio schedules: there's a direct relationship between the number of responses and the number of rewards
interval schedules: reward is available only after time passes
53
what are the two kinds of ratio schedules
1) fixed ratio: a constant or fixed number of responses = 1 reward; ex. FR1 = 1 behavior = 1 reward
2) variable ratio: there's an average # of responses needed to get a reward, but it changes trial to trial; ex. VR50 = an average of 50 behaviors needed for 1 reward
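to make the FR/VR contrast concrete, here is a minimal sketch (Python, with made-up numbers; not from the lecture itself): both schedules pay off at the same average rate, but on the VR schedule the next reward could always be one response away, which is the slot-machine pattern described in the next cards

```python
import random

def fixed_ratio_rewards(requirement, responses):
    """FR schedule: every `requirement`-th response earns one reward."""
    return responses // requirement

def variable_ratio_rewards(mean_requirement, responses, seed=0):
    """VR schedule: each reward needs a random number of responses whose
    average works out to `mean_requirement` (illustrative uniform draw)."""
    rng = random.Random(seed)
    rewards, count = 0, 0
    needed = rng.randint(1, 2 * mean_requirement - 1)
    for _ in range(responses):
        count += 1
        if count >= needed:           # enough responses made: deliver a reward
            rewards += 1
            count = 0
            needed = rng.randint(1, 2 * mean_requirement - 1)
    return rewards

print(fixed_ratio_rewards(50, 1000))     # FR50: exactly 20 rewards, evenly spaced
print(variable_ratio_rewards(50, 1000))  # VR50: about 20 rewards, unpredictably spaced
```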
54
explain fixed ratio schedule
draw on paper
post-reinforcement pause: rats don't feel like continuously lever pressing, so they'll take breaks before the next food pellet (same with humans when they have four papers due)
FR schedules are used to test the strength of a drug reward: ex. using the drug as the reward for BP, slowly increase the FR requirement and ask what is the highest ratio you reach before the animal gives up
55
explain variable ratio schedule
draw on paper
high rates of responding that are continuous; instead of knowing it's FR50, the reward could be right there, so the rats are more motivated to do the behavior
applicable to humans using slot machines, because the reward could be right there
56
what are the two kinds of interval schedules
1) fixed interval: the amount of time that passes is always the same from reward to reward
2) variable interval: on average some amount of time has to pass, but it changes from reward to reward
57
explain fixed interval schedule
draw on paper
less responding overall (compared to FR and VR) because there's no relationship between response and reward
lever pressing increases as it gets close to the one-minute mark (FI scallop)
ex. us in class after a test: we don't study right away until it gets close to the next test
58
explain variable interval schedule
draw on paper
unsure when the reward is; it could be right there, so they're more motivated to do the behavior
applicable to us because of pop quizzes
59
how are the interval schedules related to money
interval schedule: salary (biweekly, monthly)
ratio schedule: commission
60
what are the three principles of schedules of reinforcement
1) more responding on ratio than on interval schedules
2) fixed schedules produce pauses, variable schedules produce continuous behavior
3) interval schedules are not the same as delay to reward: BP -> (30 sec) -> food is delay to reward (time between the response and the reward), whereas on an interval schedule, once the interval has passed, BP -> food immediately
61
what is partial reinforcement
all schedules represent it except for FR1
ex. FR1: BP -> food; FR10: BP -> food
then, switch everyone to extinction: FR1 notices the change immediately, FR10 doesn't notice extinction until the 10th BP
**in general, partial reinforcement makes you resistant to extinction (maintains long-term behavior)**
62
what is the matching law for quantity
your amount of behavior should match the amount of reward; in other words, if there are two options A + B on interval schedules, then % responses to A = % reward available on A (straight-line graph)
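a minimal sketch of the matching law for quantity (Python; the reward rates are made-up numbers, not from the lecture): the predicted share of responses on A is simply A's share of the available reward, which is why the plot is a straight line

```python
# Matching law for quantity: % responses on A = % of reward available on A.
def predicted_response_share(reward_a, reward_b):
    """Fraction of responding expected on option A."""
    return reward_a / (reward_a + reward_b)

print(predicted_response_share(60, 40))  # A offers 60% of the reward -> 0.6 of responses
print(predicted_response_share(20, 80))  # A offers 20% of the reward -> 0.2 of responses
```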
63
what is an example of the matching law for quantity in real life
reduce teen drug use or pregnancy: create other options to draw people away from drugs and sex -> after-school programs; every hour spent on B is not spent on A
64
what is the matching law for delay
longer delay is matched with less behavior and shorter delay with more behavior
creatures should match behavior to the inverse of delay (3x the delay = 1/3 the behavior)
in essence, creatures are maximizing reward/unit time (pay rate)
65
how does the matching law explain the pigeons original choice and then their preference reversal after added 10 second delay
original: the matching law says this happened because of the minimal delay; 2 seconds of food with no delay at all feels better than a 4-second delay followed by 4 seconds of food
preference reversal: the matching law says this happened because their perception of pay rate changed with the longer added delay
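a worked sketch of the pay-rate account (Python; the 2 s vs. 4 s food amounts, the 4 s delay, and the added 10 s come from the pigeon example above, while the small delay floor is just an assumption to avoid dividing by zero)

```python
def pay_rate(food_secs, delay_secs):
    """Reward per unit of time; a small floor stands in for 'essentially no delay'."""
    return food_secs / max(delay_secs, 0.1)

for added_delay in (0, 10):                    # 0 = original choice, 10 = added 10 s delay
    immediate = pay_rate(2, 0 + added_delay)   # 2 s of food, no initial delay
    delayed = pay_rate(4, 4 + added_delay)     # 4 s of food after a 4 s delay
    better = "immediate" if immediate > delayed else "delayed"
    print(added_delay, round(immediate, 2), round(delayed, 2), better)
# with no added delay the immediate option wins; with 10 s added, the pay rates
# become 0.2 vs ~0.29 and preference reverses to the larger, delayed reward
```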
66
how to get people to save $ for retirement
1) provide immediate reward: positive reinforcement (if I put in 6%, Arcadia puts in 8%) & negative reinforcement (money goes in pretax so you avoid tax)
2) precommitment: choose to put the money aside at the beginning of the year through automatic deduction
67
how to use precommitment to save money for retirement
show experiment on paper
the social security system takes out 6% of gross pay, which goes to the government and is given back to people when they retire
68
explain optimal foraging research in terms of VI and VR intervals
foraging is searching for resources; in a VI-VI schedule it pays to switch back and forth, but in a VR-VR schedule it's better to stay with the better alternative (exclusive choice for the lower ratio)
69
demonstrate optimal foraging in terms of the ideal free duck study
draw study on paper
evolution should have built creatures that maximize energy/time
two people throw same-size bread into a pond at 10/min and 5/min rates, and the prediction is that the group of ducks should be allocated in the ratio of those rewards
results: the ducks match the group reward rates (22 & 11) within two minutes, and when the rates switch sides the ducks switch sides too
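the ideal free distribution arithmetic, as a quick check (Python; the flock size of 33 is an assumption implied by the 22 & 11 split above)

```python
ducks = 33                                       # assumed flock size (22 + 11)
throw_rates = {"thrower A": 10, "thrower B": 5}  # pieces of bread per minute
total_rate = sum(throw_rates.values())

for spot, rate in throw_rates.items():
    share = rate / total_rate                    # fraction of all food arriving here
    print(spot, round(ducks * share))            # thrower A -> 22 ducks, thrower B -> 11
```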
70
what was the second study with the ducks where the bread is different sizes at the same throwing rate as experiment 1
same throwing rate but a 2:1 ratio of bread sizes
step 1: ducks first allocate 50/50
step 2: ducks later reallocate 2/3 : 1/3
**ducks first pay attention to throwing rate, but then they look at food/unit time and can readjust**
71
what are the questions and answers squirrels ask when foraging in "patchy" environments (for nuts)
1) which patch should I enter next? the one that has the higher-than-average return rate
2) when I'm in a patch, when should I leave? when the expected return drops below average
72
draw the guppy experiment and explain it
draw it on paper
there are two foods to choose from (ratios in calories); the food ratios flip and then the # of guppies flips
there are some guppies who only eat worms and some who only eat fruit flies, so they don't switch
these guppies with food preferences gain less weight because they give up going to the higher-return food
73
explain the sunfish on lake Michigan study
sunfish eat vegetation mats and plankton
early summer: vegetation > plankton; late summer: plankton > vegetation
in general, sunfish eat more vegetation in early summer and more plankton in late summer, but...
large sunfish change preference, small sunfish stay with the vegetation for predator avoidance (big fish get bigger because they can change to the optimal choice, small fish cannot)
74
what is the endowment effect and the hoodie example
people place more value on things they already own
people who own the hoodie would sell it for $45
people who just picture a random hoodie would sell it for $30
75
what is the relationship between optimal foraging and travel time
a patch is depleting
scenario 1: the next patch is far away; scenario 2: the next patch is close by
the closer the next patch, the sooner you should leave the current patch
76
explain the bees foraging in a "patchy" meadow experiment
draw on paper
two meadows: one with massed flowers, one with spaced flowers
you can predict how much nectar bees will leave per flower if you know things about the bees in the meadow
where the flowers are close together, bees leave more nectar in the flower compared to where the flowers are spaced far apart
this is because it's harder to get the last bit of nectar out, so the bees won't suck all of it out; they'll just move to the next flower
77
what are the three important takeaways (components) from choice
1) reward amount (matching law for quantity)
2) reward delay (matching law for delay)
3) expected alternatives (decisions made by your present self are not the same as decisions made for your future self)
78
what are the two sides of 20th century economics
1) people are selfish and competitive: humans are built to maximize reward/time, so people work harder in capitalist societies because they're working for themselves
2) selfishness is a product of capitalism and can be trained away... the "ideal socialist man": people trained to no longer work for themselves but for society
79
show what happened in USSR when the state was trying to nationalize livestock
1) turn cows over to the state -> nothing -> small unknown reward
2) try to keep cows -> nothing & punishment -> small unknown reward
3) kill all the animals -> food and $ now -> everyone starves later (people chose this option)
this caused Lenin to create the New Economic Policy: people chose what they did because of the concept of reward (immediate reward)
80
what's an example in the USSR that shows how people are more willing to work for themselves
97% of the land was held by the government and 3% was allocated to private farming
25% of agricultural output came from private farming, because people worked hard for the immediate reward, whereas on land owned by the government people got nothing
81
what are the four aspects of behavioral economics
1) open vs. closed economies
2) elastic vs. inelastic rewards/goods
3) effect of substitutes
4) effect of constraints on wealth/income
82
what are open v. closed economies
closed: no source of reward other than what you earn
open: there are sources of reward you don't work for (unemployment insurance, welfare, etc.)
the more closed a system, the harder one should work for reward
83
show open v. closed economies with rats on FR schedule for food
if open: breaking point = FR50 (rate of responding decreases)
if closed: breaking point = FR250
84
explain elastic v. inelastic rewards/goods and give an example
1) inelastic: as price changes, demand does not; ex. gas, food in a closed economy
2) elastic: as price goes up, you consume less; ex. luxury items in an open economy (clothes, cars, etc.)
85
explain elastic v inelastic goods in terms of rats pressing for food or for stimulation of pleasure center
(closed) press left -> stimulation of the pleasure center
(open) press right -> food
people used to say that stimulation > food, but this is not true because the food is in an open economy
to make this study better: make both economies closed and increase the price (FR); the rats do end up favoring food, we just did this experiment wrong
86
are drugs inelastic or elastic
government policy acts as though they are elastic, but they are actually inelastic
87
what is the effect of substitutes
lack of substitutes produces inelasticity
88
show the effect of substitutes in rat experiment
press R -> water, press L -> food
slowly raise the FR requirement for food; because food is inelastic (no substitutes) they will never stop working for it
press R -> root beer, press L -> Tom Collins mix
slowly raise the FR requirement for Tom Collins; because there is a substitute, they'll stop working for it
89
show the effect of substitutes in terms of buying cigarettes in Philly
buy in Philly --(price & tax)--> cigs
buy in the suburbs --(price)--> cigs
people will switch and buy in the suburbs because it's cheaper (those who can afford transportation), so cigs are elastic
but people who are poor and can't afford transportation have to buy cigs in the city, so for them cigs are inelastic
90
show the example of heroin in humans (partial substitute and perfect substitute)
on paper
partial: methadone; some people switch to methadone but others don't
perfect: legal heroin; some countries have done this
91
what are the effects of constraints on wealth/ income
income constraints should reveal elasticity
at high incomes everything is inelastic, but at low incomes elasticity is revealed
92
show the effects of constraints on wealth/income in terms of the rat experiment with food and pleasure center stimulation
press R -> food, press L -> pleasure center stimulation
when the rat can press as much as it likes, the pleasure center is the more inelastic good, but when presses are restricted the rats switch to food
the inelasticity of food is revealed by: 1) raising the price (FR) 2) reducing income (presses allowed)
93
explain loss aversion and the control of behavior and an example in terms of using plastic bags
consider whether you want positive or negative reinforcement
loss aversion is when losses of size x hurt more than gains of size x feel good
use plastic -> nothing; use your own bag -> 25-cent reward
use plastic -> charged 25 cents; use your own bag -> nothing
the latter is more powerful because losses loom larger than gains