Cognitive [Learning, Language, Memory, Thinking] (Psychology Subject) Flashcards

1
Q

Learning

A

*the relatively permanent or stable change in behavior as the result of experience

2
Q

Classical conditioning (associative learning)

A
- Ivan Pavlov; Pavlovian conditioning
- pairing a neutral stimulus with a not-so-neutral stimulus; this creates a relationship between the two

3
Q

Unconditioned Stimulus (UCS)

A

*not-so-neutral stimulus
- in Pavlov’s dog experiments, the UCS is the food
— without conditioning, the stimulus elicits the response of salivating
- unconditioned because they don’t have to be learned
— reflexive or instinctual behaviors

4
Q

Unconditioned Response (UCR)

A

*naturally occurring response to the UCS
- in Pavlov’s dog experiment, it was salivation in response to the food
- unconditioned because they don’t have to be learned
— reflexive or instinctual behaviors

5
Q

Neutral Stimulus (NS)

A

*a stimulus that doesn’t produce a specific response on its own
- In Pavlov’s dog experiment, this was the light/bell before he conditioned a response to it

6
Q

Conditioned Stimulus (CS)

A

*the neutral stimulus once it’s been paired with the UCS
- has no naturally occurring response, but it’s conditioned through pairings with a UCS
- in Pavlov’s dog experiment, the CS (the light) is paired with the UCS (food), so that the CS alone will produce a response

7
Q

Conditioned Response (CR)

A

*the response that the CS elicits after conditioning
- the UCR and the CR are the same response (e.g., salivating, whether to food or to a light)

8
Q

Simultaneous Conditioning

A

*the UCS and NS are presented at the same time

9
Q

Higher-order conditioning/second-order conditioning

A

*a conditioning technique in which a previous CS now acts as a UCS
- in Pavlov’s dog experiment, the experimenter would use the light as a UCS after the light reliably elicited saliva in the dogs; food is no longer used
— the light could then be paired with a bell (a new CS) until the bell alone elicited saliva in the dogs

10
Q

Forward conditioning

A

*pairing of the NS and the UCS in which the NS is presented before the UCS
- two types:
— delayed conditioning
— trace conditioning

11
Q

Delayed conditioning

A

*the presentation of the NS begins before that of the UCS and lasts until the UCS is presented

12
Q

Trace conditioning

A

*the NS is presented and terminated before the UCS is presented

13
Q

Backward conditioning

A

*the NS is presented after the UCS is presented
- in Pavlov’s dog experiment, the dogs would have been presented with the food and then with the light
- proven ineffective
- only accomplishes inhibitory conditioning (later the dogs would have a harder time pairing the light and food even if they were presented in a forward fashion)
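The timing arrangements in the preceding cards (simultaneous, delayed, trace, and backward conditioning) are easiest to compare side by side. A minimal Python sketch, not from the source; the onset/offset times are arbitrary assumptions chosen only to show the relative ordering of the NS and UCS:

```python
# Illustrative only: (onset, offset) times in seconds for the NS and UCS
# under each presentation arrangement. The numbers are invented; the
# relative ordering is the point.
protocols = {
    "simultaneous": {"NS": (0, 5),  "UCS": (0, 5)},   # NS and UCS presented together
    "delayed":      {"NS": (0, 5),  "UCS": (5, 10)},  # NS begins first, lasts until UCS starts
    "trace":        {"NS": (0, 3),  "UCS": (5, 10)},  # NS ends before UCS begins
    "backward":     {"NS": (5, 10), "UCS": (0, 5)},   # UCS comes first; generally ineffective
}

for name, timing in protocols.items():
    (ns_on, ns_off), (ucs_on, ucs_off) = timing["NS"], timing["UCS"]
    print(f"{name:12s} NS {ns_on}-{ns_off}s | UCS {ucs_on}-{ucs_off}s")
```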

14
Q

Taste aversion learning

A

*occurs when food or drink becomes associated with an aversive stimulus (nausea, vomiting), even if the food or drink itself didn’t actually cause the nausea/vomiting
- type of classical conditioning but differs in:
— acquisition usually takes only one pairing, vs. the longer acquisition typical of classical conditioning
— the response takes a long time to extinguish, vs. extinction beginning as soon as the UCS is removed
- evolutionarily adaptive so human/animal doesn’t eat poisonous food and die

15
Q

Law of effect

A

*behaviors followed by reinforcement tend to be repeated; a cause-and-effect chain between behavior and its consequences
- E. L. Thorndike
- precursor of operant conditioning
- “connectionism” because learning occurs through formation of connections between stimuli and responses
- “Puzzle Box” experiment (cats learning complex tasks through trial and error)

16
Q

Theory of association

A

*grouping things together based on the fact that they occur together in time and space
— organisms associate certain behaviors with certain rewards and certain cues with certain situations
- Kurt Lewin
- forerunner of behaviorism

17
Q

School of behaviorism

A
- John B. Watson
- everything could be explained by stimulus-response chains
— conditioning was the key factor in developing these chains
- only objective and observable elements were of importance to the study of organisms and psychology

18
Q

Hypothetico-deductive model

A

*designed to try to deduce logically all the rules that govern behavior
- Clark Hull
— created an equation in which input variables lead to output variables, with intervening variables in between that would change the outcomes

19
Q

Radical behaviorism

A

*school of thought holding that all behavior, animal and human, can be explained in terms of stimuli and responses, or reinforcements and punishments
- makes no allowance for how thoughts or feelings might factor into the equation

20
Q

Operant conditioning (associative learning)

A

*aims to influence a response through various reinforcement strategies
- B. F. Skinner
- Skinner Box (rats repeated behaviors that won rewards and gave up on behaviors that didn’t)
— shaping (differential reinforcement of successive approximations): the process rewarded rats with food pellets first for being near the lever, then for touching it
- also known as instrumental conditioning

21
Q

Primary reinforcement

A

*a natural reinforcement; reinforcing on its own without the requirement of learning
- e.g., food

22
Q

Secondary reinforcement

A

*a learned reinforcer
- e.g., money, prestige, verbal praise, awards
- often learned through society
- instrumental in token economies

23
Q

Positive reinforcement

A

*adding something desirable to increase likelihood of a particular response
- some subjects are not motivated by rewards because they don’t believe/understand that the rewards will be given

24
Q

Negative reinforcement

A

*reinforcement through the removal of a negative event
- i.e., taking away something undesirable to increase the likelihood of a particular behavior
- NOT punishment/delivery of a negative consequence

25
Continuous reinforcement schedule
*every correct response is met with some form of reinforcement
- facilitates the quickest learning, but also the most fragile learning; as soon as rewards halt, the animal stops performing
26
Partial reinforcement schedules
*not all correct responses are reinforced
- may require longer learning time, but once learned, behaviors are more resistant to extinction
- types:
— fixed ratio schedule
— variable ratio schedule
— fixed interval schedule
— variable interval schedule
28
Fixed ratio schedule
*reinforcement delivered after a consistent # of responses
- e.g., a ratio of 6:1 means every 6 correct responses earn 1 reward
29
Variable ratio schedule
*reinforcements delivered after different #s of correct responses
- the ratio can’t be predicted
- learning is less likely to become extinguished
- one performs a behavior not because it’s been rewarded but because it COULD be rewarded on the next try
30
Fixed interval schedule
*rewards come after the passage of a certain period of time rather than after a certain # of behaviors
- e.g., if the fixed interval is 5 minutes, the rat is rewarded the first time it presses the lever after a 5-minute period has elapsed, regardless of what it did during the preceding 5 minutes
31
Variable interval schedule
*rewards are delivered after differing time periods
- second most effective strategy for maintaining behavior; the length of time varies, so one never knows when the reinforcement is just around the corner
- produces slow and steady learning
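One way to see how the four partial schedules above differ is to write each as a rule deciding whether a given response earns a reward. A minimal Python sketch, not from the source; the function names and the specific numbers (a 5-response ratio, a 60-second interval) are arbitrary assumptions:

```python
import random

def fixed_ratio(response_count, ratio=5):
    # reward every `ratio`-th correct response
    return response_count % ratio == 0

def variable_ratio(mean_ratio=5):
    # reward with probability 1/mean_ratio, so the required number of responses varies
    return random.random() < 1.0 / mean_ratio

def fixed_interval(seconds_since_last_reward, interval=60.0):
    # the first correct response after `interval` seconds is rewarded
    return seconds_since_last_reward >= interval

def variable_interval(seconds_since_last_reward, mean_interval=60.0):
    # the required wait itself varies around `mean_interval`
    return seconds_since_last_reward >= random.uniform(0, 2 * mean_interval)

print(fixed_ratio(10))       # True: the 10th response on a 5:1 schedule is rewarded
print(fixed_interval(30.0))  # False: only 30 of the required 60 seconds have passed
```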
32
Token economy
*artificial mini-economy usually found in prisons, rehabilitation centers, or mental hospitals
- individuals in the environment are motivated by secondary reinforcers (tokens)
- desirable behaviors are reinforced with tokens, which can be cashed in for more desirable reinforcers (e.g., candy, books, privileges, cigarettes)
33
Stimulus
*any event that an organism reacts to
- first link in a stimulus-response chain
34
Stimulus discrimination
*the ability to discriminate between different but similar stimuli
- e.g., a doorbell ringing vs. a phone ringing
35
Stimulus generalization
*making the same response to a group of similar stimuli; opposite of stimulus discrimination
- e.g., not all fire alarms sound alike, but they all require the same response
37
Undergeneralization
*failure to generalize a stimulus
38
Response learning
*form of learning in which one links together chains of stimuli and responses
- one learns what to do in response to particular triggers
- e.g., leaving a building in response to a fire alarm
40
Aversive conditioning
*uses punishment to decrease the likelihood of a behavior
- e.g., the drug Antabuse used to treat alcoholism
41
Avoidance conditioning
*occurs when you avoid a predictable, unpleasant stimulus
- teaches an animal how to avoid something the animal doesn’t want
42
Escape conditioning
*occurs when you have to escape an unpredictable, unpleasant stimulus
- teaches an animal to perform a desired behavior to get away from a negative stimulus
43
Punishment
*promotes extinction of an undesirable behavior; after the unwanted behavior is performed, punishment is presented
- acts as a negative stimulus, which should decrease the likelihood that the earlier behavior will be repeated
- positive punishment: the addition of something undesirable to the situation to discourage a particular behavior
- negative punishment: taking away something desirable to discourage a particular behavior
- primary punishment: most species don’t have to learn about its unpleasant consequences
- secondary punishment: something one must come to understand as a negative consequence
- Skinner preferred to extinguish behavior by stopping reinforcement rather than by applying punishment
44
Autonomic conditioning
*evoking responses of the autonomic nervous system through training
45
Extinction
*reversal of conditioning; the goal is to encourage an organism to stop doing a particular behavior
- accomplished by repeatedly withholding reinforcement for a behavior or by disassociating the behavior from a particular cue
- in classical conditioning, extinction begins the moment the UCS and CS are no longer paired
- in operant conditioning, one might see an extinction burst (the behavior initially increases before it begins to diminish)
46
Spontaneous recovery
*reappearance of an extinguished response, even in the absence of further conditioning or training
47
Superstitious behavior
*occurs when someone “learns” that a specific action causes an event, when in reality the two are unrelated
- e.g., a football fan who wears the same shirt to every game because their team has happened to win every time they wore it
48
Chaining
*act of linking together a series of behaviors that ultimately result in reinforcement
- one behavior triggers the next, and so on
- e.g., learning the alphabet
49
Autoshaping
*an apparatus allows an animal to control its reinforcements through its own behaviors (e.g., bar pressing or key pecking)
- the animal is shaping its own behavior
50
Overshadowing
*an animal’s inability to infer a relationship between a particular stimulus and response due to the presence of a more prominent stimulus
- a classical conditioning concept
51
John Garcia
- discovered that animals are programmed through evolution to make certain connections
- preparedness: certain associations are learned more easily than others
- studied “conditioned nausea” with rats and found that nausea was invariably perceived to be connected with food or drink
— was unable to condition a relationship between nausea and a NS (e.g., a light)
- Garcia effect
— explains why humans who become sick just once after eating a particular food may never be able to eat that food again
— the connection is automatic and needs little conditioning
52
Habituation (nonassociative learning)
*decreased responsiveness to a stimulus as a result of increasing familiarity with the stimulus
- e.g., entering a room with a buzzing light; you’re constantly aware of the noise until, after a while, you stop noticing it
53
Dishabituation (nonassociative learning)
*occurs when the stimulus to which the organism had become habituated is removed
- if the stimulus is later reintroduced, the organism will start noticing it again
54
Sensitization (nonassociative learning)
*increasing sensitivity to the environment following the presentation of a strong stimulus
55
Desensitization (nonassociative learning)
*decreasing sensitivity to the environment following the presentation of a strong stimulus
- often used as a behavioral treatment to counter phobias
56
Social learning theory; social cognitive theory (observational learning)
*individuals learn through their culture what’s acceptable and unacceptable
- Albert Bandura
- developed to explain how we learn by modeling; we don’t need reinforcements, associations, or practice in order to learn
- Bobo doll study (children mirrored adults taking out their frustrations on a clown doll)
57
Vicarious reinforcement (observational learning)
*a person witnesses someone else being rewarded for a particular behavior, which encourages the witness to do the same
58
Vicarious punishment (observational learning)
*a person witnesses someone being punished for a behavior, which decreases the likelihood of the witness engaging in that behavior
59
Insight learning
*when the solution to a problem appears all at once rather than being built up gradually
- Wolfgang Kohler’s chimpanzee and banana experiment (the chimp stacked boxes to reach a banana)
- a key element in Gestalt psychology, because a person can perceive the relationships between all the important elements in a situation and find a solution greater than the sum of its parts
— Gestalt psychology describes how people organize elements in a situation and think about them in relation to one another
60
Latent learning
*learning that happens but is not demonstrated until it’s needed later on
- e.g., after watching someone play chess many times, you later play chess yourself and realize you’ve picked up some new tricks
- Edward C. Tolman’s experiment with three groups of rats (rats that had explored a maze without reward quickly learned to run to the end of the maze once food was placed there)
61
Incidental learning
*unrelated items are grouped together; like accidental learning
- e.g., pets associating car rides with trips to the vet
- opposite of intentional learning
62
Donald Hebb
- created an early model of how learning happens in the brain, through formation of sets of neurons that learn to fire together
63
Perceptual/concept learning
*learning about something in general rather than learning specific stimulus-response chains
- the individual learns about a subject (e.g., history) rather than any particular response
- Tolman’s experiments showed animals forming cognitive maps of mazes rather than simple escape routes; when routes were blocked, the rats still had an internal sense of where the end was (purposive behavior)
64
Harry Harlow
- demonstrated that monkeys became better at learning tasks as they acquired different learning experiences
- eventually, the monkeys could learn after only one trial
- “learning to learn”
65
Motivation and performance
*an animal must be motivated in order to learn and to act
- individuals are at times motivated by primary or instinctual drives (hunger or thirst); at other times by secondary or acquired drives (money, other learned reinforcers)
- an exploratory drive may also exist
66
Fritz Heider’s balance theory, Charles Osgood & Percy Tannenbaum’s congruity theory, Leon Festinger’s cognitive dissonance theory
- all agree that what drives people is a desire to be balanced with respect to their feelings, ideas, or behaviors
- along with Clark Hull’s drive-reduction theory, these theories are called into question by the fact that individuals often seek out stimulation, novel experience, or self-destruction
67
Hull’s performance = drive x habit
*individuals are first motivated by drive, and then act according to old successful habits
- they’ll do what has worked previously to satisfy the drive
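A worked reading of the card’s equation, assuming the standard multiplicative interpretation (so neither factor alone is enough):
- performance = drive x habit strength
— if drive = 0 (e.g., a fully fed rat), performance = 0 x habit = 0, no matter how well learned the habit is
— if habit strength = 0 (no prior learning), performance = drive x 0 = 0, no matter how strong the drive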
68
Edward Tolman’s performance = expectation x value
*people are motivated by goals that they think they might actually meet
- another factor is how important the goal is
- also called expectancy-value theory
- Victor Vroom applied this theory to individual behavior in large organizations; those lowest on the totem pole don’t expect to receive company incentives, so the incentives do little to motivate them
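A worked reading of the card’s equation, again assuming a multiplicative interpretation:
- motivation (performance) = expectation x value
— Vroom’s point falls out of the multiplication: if an employee’s expectation of receiving an incentive is near 0, motivation is near 0 even when the incentive is highly valued
— likewise, a goal one fully expects to reach but does not value (value = 0) produces little motivation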
69
Henry Murray and later David McClelland’s Need for achievement (nAch)
- may be manifested through a need to pursue success or a need to avoid failure; either way, the goal is to feel successful
- John Atkinson suggested a theory of motivation in which people who set realistic goals with intermediate levels of risk feel pride in accomplishment and want to succeed more than they fear failure
- because success is so important, they are unlikely to set unrealistic or risky goals or to persist when success is unlikely
70
Neal Miller’s approach-avoidance conflict
*the state one feels when a certain goal has both pros and cons
- the further one is from the goal, the more one focuses on the pros, or the reasons to approach the goal
- the closer one is to the goal, the more one focuses on the cons, or the reasons to avoid the goal
71
Hedonism
*theory that individuals are motivated solely by what brings the most pleasure and the least pain
72
The Premack principle
*idea that people are motivated to do what they do not want to do by rewarding themselves afterward with something they like to do
- e.g., a child rewarded with dessert after they eat their spinach
73
Abraham Maslow’s Hierarchy of Needs
*demonstrates that physiological needs take precedence
- once those are satisfied, a person will work to satisfy safety needs, followed by love and belonging needs, self-esteem needs, and finally the need to self-actualize
74
M. E. Olds
- performed experiments in which electrical stimulation of pleasure centers in the brain was used as positive reinforcement
— animals would perform behaviors in order to receive the stimulation
- viewed as evidence against drive-reduction theory
75
Arousal
*part of motivation; an individual must be adequately aroused to learn and perform
- Donald Hebb postulated that a medium amount of arousal is best for performance
— too little or too much could hamper performance of tasks
- for simple tasks, the optimal level of arousal is toward the high end
- for complex tasks, the optimal level of arousal is toward the low end
- optimal arousal for any type of task is never at the extremes
- Yerkes-Dodson effect
- on a graph, optimal arousal appears as an inverted U-curve, with the lowest performance at the extremes of arousal
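One simple way to picture the inverted U (an illustrative model, not an equation from the source):
- performance = peak - k x (arousal - optimal arousal)^2
— performance is highest at the optimal arousal level and falls off as arousal moves toward either extreme
— the optimal level sits toward the high end for simple tasks and toward the low end for complex tasks, shifting the peak of the curve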
76
State-dependent learning
*concept that what a person learns in one state is best recalled in that state
77
Continuous and discrete motor tasks
- continuous tasks are easier to learn than discrete tasks
- continuous task: riding a bicycle; one continuous motion that, once started, continues naturally
- discrete task: setting up a chessboard; a task divided into different parts that don’t facilitate the recall of one another; placing the pieces in their proper positions involves different bits of information, not one unbroken task
78
Positive transfer
*previous learning that makes it easier to learn another task later
- negative transfer: previous learning that makes it more difficult to learn a new task
79
Age
- affects learning
- humans are primed to learn between the ages of 3 and 20
- from 20 to 50, the ability to learn remains fairly constant
- after 50, the ability to learn drops
80
Learning curve
*when learning something new, the rate of learning changes over time
- e.g., when learning a language, someone may quickly pick up a lot of vocabulary and basic sentence structure, but as they try to learn more complex grammatical constructions, the rate of learning may decrease
- Hermann Ebbinghaus
- positively accelerated curve: the rate of learning is increasing
- negatively accelerated curve: the rate of learning is decreasing
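Illustrative curve shapes (assumed forms for intuition, not Ebbinghaus’s own equations):
- negatively accelerated: amount learned ≈ 1 - e^(-k x trials), so each additional trial adds less than the one before
- positively accelerated: early progress grows roughly like trials^2, with each trial adding more than the one before, until the curve eventually levels off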
81
Educational psychology
*concerned with how people learn in educational settings
- examines things like student and teacher attributes and instructional processes in the classroom
- educational psychologists are frequently employed by schools and help when students have academic or behavioral problems
- Thorndike wrote the first educational psychology textbook in 1903; he developed methods to assess students’ skills and teaching effectiveness
82
Aptitude
*a set of characteristics that are indicative of a person’s ability to learn
83
Cooperative learning
*involves students working on a project together in small groups
84
Lev Vygotsky
- described learning through the zone of proximal development: a lower-achieving student in a particular subject is placed with someone who is just a bit more advanced; the lower-achieving student thus raises their game through the interaction
- scaffolding learning: occurs when a teacher encourages the student to learn independently and provides assistance only with topics or concepts that are beyond the student’s capability
— as the student continues to learn, the teacher provides less and less assistance to encourage independence
- Vygotsky’s theories on education are used in classrooms worldwide
85
Language
*the meaningful arrangement of sounds
86
Psycholinguistics
*the study of the psychology of language
87
Phonemes
*discrete sounds that make up words but carry no meaning
- e.g., ee, p, or sh
- infants first make these sounds when learning language
- phonics is learning to read by sounding out the phonemes
- all words in a language are created from basic phonological rules of sound combinations
88
Morphemes
*made up of phonemes; the smallest units of meaning in language
- words or parts of words that carry meaning are morphemes
- e.g., the word boy and the suffix -ing
89
Phrase
*a group of words that, when put together, function as a single syntactic part of a sentence
- e.g., “walking the dog” is a noun phrase that could function as the subject of a sentence if it were followed by a verb
90
Syntax
*the arrangement of words into sentences as prescribed by a particular language
91
Grammar
*the overall rules of the interrelationship between morphemes and syntax that make up a certain language
92
Morphology or morphological rules
*grammar rules; how to group morphemes
93
Prosody
*tone, inflections, accents, and other aspects of pronunciation that carry meaning
- the icing on the cake of grammar and meaning
- infants can more easily differentiate between completely different sounds than between different expressions of the same sound
94
Phonology
*the study of sound patterns in languages
95
Semantics
*the study of how signs and symbols are interpreted to make meaning