Chapter 3: Learning and Memory Flashcards
Habituation
Repeated exposure to the same stimulus can cause a decrease in response.
This is seen in many first-year medical students: students often have an intense physical reaction the first time they see a cadaver or treat a severe laceration, but as they get used to these stimuli, the reaction lessens until they are unbothered by these sights.
Dishabituation
Dishabituation is often noted when, late in habituation to a stimulus, a second stimulus is presented. The second stimulus interrupts the habituation process and thereby causes an increase in response to the original stimulus.
What is associative learning? And what are the two types?
Associative learning is the creation of a pairing, or association, either between two stimuli or between a behavior and a response.
Classical conditioning and operant conditioning.
Classical Conditioning
Takes advantage of biological, instinctual responses to create associations between two unrelated stimuli.
The responses involved are generally reflexive and innate.
Ex: Pavlov's dog experiments.
What kind of stimuli do not create a reflexive response?
Neutral stimuli.
Pavlov ex: the ringing bell before training; at first, it produces no response.
Unconditioned stimulus
Any stimulus that brings about a reflexive response is called an unconditioned stimulus.
Pavlov ex: Meat
Unconditioned response
The innate or reflexive response is called an unconditioned response.
Pavlov ex: salivation when meat is seen
Conditioned stimulus
A normally neutral stimulus that, through association, now causes a reflexive response.
Pavlov ex: Ringing bell after dogs have been trained.
Conditioned response
A reflexive response that occurs in reaction to the conditioned stimulus after training.
Pavlov ex: the dogs salivating after the bell has been rung.
If the conditioned stimulus is presented without the unconditioned stimulus enough times, what happens?
Extinction.
Generalization
Broadening effect by which a stimulus similar enough to the conditioned stimulus can also produce the conditioned response.
In one famous experiment, researchers conditioned a child called Little Albert to be afraid of a rat by pairing the presentation of the rat with a loud noise. Subsequent tests showed that Little Albert’s conditioning had generalized such that he also exhibited a fear response to a white stuffed rabbit, a white sealskin coat, and even a man with a white beard.
Discrimination
An organism learns to distinguish between two similar stimuli. This is the opposite of generalization.
Pavlov’s dogs could have been conditioned to discriminate between bells of different tones by having one tone paired with meat, and another presented without meat.
Operant Conditioning
Links voluntary behaviors with consequences in an effort to alter the frequency of those behaviors.
B.F. Skinner
The father of behaviorism, the theory that all behaviors are conditioned.
Reinforcement
Increases the likelihood of a behavior.
Punishment
Decreases the likelihood of a behavior.
Positive reinforcers
Increase a behavior by adding a positive consequence or incentive following the desired behavior.
Negative reinforcers
Increase the frequency of a behavior by removing something unpleasant.
Avoidance Learning
A form of negative reinforcement in which a behavior is performed to prevent something unpleasant from happening in the first place.
Positive Punishment
Adds an unpleasant consequence in response to a behavior to reduce that behavior; for example, a thief may be arrested for stealing, which is intended to stop him from stealing again.
Negative Punishment
The reduction of a behavior when a stimulus is removed. For example, a parent may forbid her child from watching television as a consequence for bad behavior, with the goal of preventing the behavior from happening again.
Fixed-Ratio Schedule
Reinforce a behavior after a specific number of performances of that behavior. For example, in a typical operant conditioning experiment, researchers might reward a rat with a food pellet every third time it presses a bar in its cage.
Variable-Ratio Schedule
(Most effective!) Reinforce a behavior after a varying number of performances of the behavior, but such that the average number of performances to receive a reward is relatively constant.
With this type of reinforcement schedule, researchers might reward a rat first after two lever presses, then eight, then four, then finally six.
Fixed-Interval Schedule
Reinforce the first instance of a behavior after a specified time period has elapsed.
For example, once our rat gets a pellet, it has to wait 60 seconds before it can get another pellet. The first lever press after 60 seconds gets a pellet, but presses during those 60 seconds accomplish nothing.