Chapter 8 & 9 Study Guide Flashcards

1
Q
The vertical displacement on a cumulative record represents
p. 121

a. the cumulative number of responses. 
b. the number of subjects responding.
c. the passage of time.
d. the schedule of reinforcement.

A

a. the cumulative number of responses.

2
Q
The horizontal displacement on a cumulative record represents
p. 121

a. the cumulative number of responses.
b. the number of subjects responding. 
c. the passage of time.
d. the schedule of reinforcement.

A

c. the passage of time.

3
Q

If a rat responded at a high rate, then stopped responding for a period of time, and then responded at a low rate, the cumulative record would show the following pattern:

a. A horizontal line followed by a shallow slope followed by a steep slope.
b. A shallow slope followed by a horizontal line followed by a steep slope. 
c. A steep slope followed by a horizontal line followed by a shallow slope.
d. A steep slope followed by a shallow slope followed by a horizontal line.

A

c. A steep slope followed by a horizontal line followed by a shallow slope.
p. 121
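
Because the pen steps up with each response while the paper moves at a constant speed, the slope of a cumulative record equals the response rate. A minimal numeric sketch of the pattern in this card, using made-up per-minute counts (not from the textbook):

```python
# Cumulative record: responses accumulate on the y-axis, time runs on the
# x-axis, so slope = response rate. High rate -> steep, pause -> flat,
# low rate -> shallow.
responses_per_min = [30, 30, 0, 0, 5, 5]  # hypothetical rate in each 1-min bin

cumulative = []
total = 0
for r in responses_per_min:
    total += r
    cumulative.append(total)

# Steep segment (slope 30), flat segment (slope 0), shallow segment (slope 5)
print(cumulative)  # [30, 60, 60, 60, 65, 70]
```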

4
Q
If a pigeon receives reinforcement for every 5 key pecks, the reinforcement schedule in effect is a
p. 122

a. variable ratio.
b. fixed ratio.
c. fixed interval.
d. variable interval.
A

b. fixed ratio.

5
Q
If reinforcement of a particular response depends on how much time has passed since the last reinforcer, the procedure is called
p. 123

a. a ratio schedule.
b. an interval schedule. 
c. a ratio run.
d. a timed schedule.

A

b. an interval schedule.

6
Q
A fixed ratio schedule of reinforcement typically produces
p. 122

a. an unpredictable rate of responding.
b. a steady rate of responding with no predictable pauses.
c. alternations between high response rates and post-reinforcement pauses.
d. None of the above are correct.
A

c. alternations between high response rates and post-reinforcement pauses.

7
Q
Which of the following schedules of reinforcement would you expect to produce the longest ratio run?
p. 122-123

a. FR 2.
b. FR 10.
c. VR 7.
d. VR 20.
A

b. FR 10.

8
Q
A variable ratio schedule of reinforcement typically produces
p. 123

a. an unpredictable rate of responding.
b. a steady high rate of responding with no predictable pauses.
c. alternations between high response rates and post-reinforcement pauses.
d. None of the above are correct.
A

b. a steady high rate of responding with no predictable pauses.

9
Q
A fixed interval schedule of reinforcement typically produces
p. 124-125

a. an unpredictable rate of responding.
b. a steady rate of responding with no predictable pauses.
c. alternations between high response rates and post-reinforcement pauses.
d. None of the above are correct.

A

c. alternations between high response rates and post-reinforcement pauses.

10
Q
A variable interval schedule of reinforcement typically produces
p. 125

a. an unpredictable rate of responding.
b. a steady low rate of responding with no predictable pauses.
c. alternations between high response rates and post-reinforcement pauses.
d. None of the above are correct.
A

b. a steady low rate of responding with no predictable pauses.

11
Q
Which of the following schedules of reinforcement will produce the longest post-reinforcement pause?
p. 122-125

a. VR 10.
b. VR 50.
c. FR 10.
d. FR 40.

A

d. FR 40.

12
Q
Which of the following schedules of reinforcement will produce a 'scalloped' cumulative record?
p. 124-125

a. FI 5.
b. FR 1.
c. VR 5.
d. VI 5.

A

a. FI 5.

13
Q
Which of the following statements regarding heterogeneous and homogeneous chains is correct?
p. 128-130

a. A homogeneous chain is a chained schedule, but a heterogeneous chain is not really a chained schedule because it does not involve a sequence of responses.
b. A heterogeneous chain is a chained schedule, but a homogeneous chain is not really a chained schedule because it does not involve a sequence of responses.
c. In a homogeneous chain, the various response components all involve the same response.
d. In a heterogeneous chain, the various response components all involve the same response.
A

c. In a homogeneous chain, the various response components all involve the same response.

14
Q
In order to get a food pellet, a hungry rat must first pull a chain once and then press a lever 10 times. This is an example of
p. 129

a. a heterogeneous chained schedule.
b. a homogeneous chained schedule.
c. a concurrent schedule of reinforcement.
d. a randomly ordered schedule of reinforcement.
A

a. a heterogeneous chained schedule.

15
Q
The technique used in the establishment of response chains where the last response or response component of the chain is taught first is called
p. 131

a. a forward chaining procedure.
b. a backward chaining procedure.
c. a concurrent chaining procedure.
d. a feedback chaining procedure.

A

b. a backward chaining procedure.

16
Q
The technique used in the establishment of response chains where the first response or response component of the chain is taught first is called
p. 131

a. a forward chaining procedure.
b. a backward chaining procedure.
c. a concurrent chaining procedure.
d. a feedback chaining procedure.
A

a. a forward chaining procedure.

17
Q
Experiments have demonstrated that
p. 131

a. response chains cannot be learned using a backward chaining procedure.
b. response chains cannot be learned using a forward chaining procedure.
c. response chains can be learned using either a forward chaining procedure or a backward chaining procedure.
d. the establishment of response chains is difficult when either a forward chaining or a backward chaining procedure is used.

A

c. response chains can be learned using either a forward chaining procedure or a backward chaining procedure.

18
Q
Which of the following schedules of reinforcement is best suited for the investigation of choice behavior?
p. 132

a. Homogeneous.
b. Heterogeneous.
c. Chained.
d. Concurrent.
A

d. Concurrent.

19
Q
The relative rate of responding on a response alternative is equal to the relative rate of reinforcement obtained with that response alternative. This is called the
p. 133

a. Premack principle.
b. Law of Effect.
c. matching law.
d. post-reinforcement rule.
A

c. matching law.
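
The matching law can be written as B1/(B1 + B2) = R1/(R1 + R2), where B is responding on an alternative and R is the reinforcement it yields. A minimal numeric sketch with hypothetical counts (not from the textbook):

```python
# Herrnstein's matching law: the relative rate of responding on an
# alternative matches the relative rate of reinforcement obtained there.
B1, B2 = 75, 25   # responses on alternatives 1 and 2 (hypothetical)
R1, R2 = 30, 10   # reinforcers earned on alternatives 1 and 2 (hypothetical)

relative_responding = B1 / (B1 + B2)       # 0.75
relative_reinforcement = R1 / (R1 + R2)    # 0.75

# The law predicts these two proportions are equal.
assert relative_responding == relative_reinforcement
```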

20
Q
In his 'Law of Effect', Thorndike proposed that
p. 137-138

a. the stimulus produces an association between the instrumental response and the reinforcer.
b. the reinforcer produces an S-R association.
c. instrumental conditioning is the result of the formation of an S-S association.
d. a stimulus does not become associated with the instrumental response.
A

b. the reinforcer produces an S-R association.

21
Q
Who is best associated with drive reduction theory?
p. 139

a. Thorndike.
b. Skinner.
c. Premack.
d. Hull.
A

d. Hull.

22
Q
Which of the following theories of reinforcement makes use of the concept of homeostasis?
p. 139

a. The Premack principle.
b. The Law of Effect.
c. Drive reduction theory.
d. Response deprivation.

A

c. Drive reduction theory.

23
Q
Rats kept in the dark will press a lever to turn on a light. This is evidence against which of the following theories of reinforcement?
p. 139

a. Differential probability.
b. Drive reduction theory.
c. Response deprivation hypothesis.
d. Premack principle.

A

b. Drive reduction theory.

24
Q
According to the drive reduction theory of reinforcement, food is an effective reinforcer because
p. 139

a. an organism will perform an instrumental response to obtain food.
b. it reduces the hunger drive.
c. an association between the food and hunger is learned with repeated training.
d. None of the above are correct.
A

b. it reduces the hunger drive.

25
Q
The idea that water is an effective reinforcer because it reduces thirst is consistent with which of the following theories of reinforcement?
p. 139

a. The Premack principle.
b. The Law of Effect.
c. Drive reduction theory.
d. Response deprivation.
A

c. Drive reduction theory.

26
Q
Which of the following theorists claimed that what makes a stimulus reinforcing is its effectiveness in reducing a drive state?
p. 139

a. Timberlake and Allison.
b. Thorndike.
c. Hull.
d. Premack.
A

c. Hull.

27
Q
A stimulus that is effective in reducing a biological need without prior training is called a
p. 140

a. sensory reinforcer.
b. natural conditioner.
c. primary reinforcer.
d. secondary reinforcer.
A

c. primary reinforcer.

28
Q
Which of the following responses cannot be explained by Hull’s drive reduction theory?
p. 140-141

a. A pigeon will peck a key to obtain food.
b. A chimpanzee will pull a chain to turn on the room lights.
c. A boy will deposit money into a vending machine to obtain a soda.
d. A male rat will press a lever to gain access to a sexually receptive female.
A

b. A chimpanzee will pull a chain to turn on the room lights.

29
Q
Which of the following is an example of sensory reinforcement?
p. 140-141

a. A rat presses a lever to obtain food.
b. A boy inserts money into a candy machine to obtain a tasty treat.
c. A rat in a cold environment presses a lever to increase the air temperature.
d. A girl winds the key on a music box to produce music.
A

d. A girl winds the key on a music box to produce music.

30
Q
In an instrumental conditioning procedure, a hungry rat is trained to press a lever to obtain food pellets. According to Premack, the reinforcer is the
p. 141

a. lever press.
b. rat.
c. food.
d. act of eating the food.

A

d. act of eating the food.

31
Q
According to the Premack principle,
p. 141

a. lever pressing in a Skinner box is reinforcing because it reduces the hunger drive.
b. any response can serve as a reinforcer.
c. a less likely response can serve as a reinforcer for a more likely response.
d. a more likely response can serve as a reinforcer for a less likely response.

A

d. a more likely response can serve as a reinforcer for a less likely response.

32
Q
Which of the following statements is NOT correct?
p. 141-144

a. The Premack principle and the response deprivation hypothesis both predict that a high-probability response can be made into a reinforcing event.
b. The Premack principle states that only a high-probability response can be made into a reinforcing event.
c. The response deprivation hypothesis states that only a high-probability response can be made into a reinforcing event.
d. The Premack principle and the response deprivation hypothesis both predict that a low-probability response can be made into a reinforcing event.

A

c. The response deprivation hypothesis states that only a high-probability response can be made into a reinforcing event.

33
Q
The notion that the opportunity to perform a higher-probability response can serve as a reinforcer for a lower-probability response is consistent with which of the following theories of reinforcement?
p. 141

a. The Premack principle.
b. The Law of Effect.
c. Drive reduction theory.
d. Response deprivation.

A

a. The Premack principle.

34
Q
According to the response deprivation hypothesis of reinforcement, water is an effective reinforcer because
p. 143-144

a. it reduces thirst.
b. the instrumental conditioning procedure places a restriction on drinking.
c. the instrumental conditioning procedure places a restriction on performing the instrumental response.
d. the schedule line does not pass through the behavioral bliss point.

A

b. the instrumental conditioning procedure places a restriction on drinking.

35
Q
Which of the following theorists suggested that the critical difference between the instrumental response and the reinforcer response is that in instrumental conditioning the subject is free to perform the instrumental response but is restricted in performing the reinforcer response?
p. 144

a. Timberlake and Allison.
b. Skinner and Thorndike.
c. Hull.
d. Premack.

A

a. Timberlake and Allison.

36
Q
Which of the following approaches to reinforcement suggests that under certain conditions, the opportunity to perform a low-probability response can be used to reinforce a higher-probability behavior?
p. 144

a. Differential probability.
b. Drive reduction theory.
c. Response deprivation hypothesis.
d. Premack principle.

A

c. Response deprivation hypothesis.

37
Q
In the behavioral regulation approach, the behavioral bliss point is best defined as
p. 146

a. the point at which a stimulus produces a feeling of euphoria.
b. the point at which a response loses its reinforcing properties.
c. a subject’s favorite response once an instrumental conditioning procedure is imposed.
d. an organism’s preferred distribution of activities before an instrumental conditioning procedure is imposed.

A

d. an organism’s preferred distribution of activities before an instrumental conditioning procedure is imposed.

38
Q
In the behavior regulation approach to reinforcement, what exactly challenges the behavioral bliss point that the organism defends?
p. 146-147

a. The schedule line.
b. The instrumental contingency.
c. The instrumental response.
d. Physiological homeostasis.

A

b. The instrumental contingency.

39
Q
In the behavior regulation approach, when an instrumental contingency is imposed,
p. 147-148

a. the schedule line will not go through the behavioral bliss point.
b. the schedule line will pass through the behavioral bliss point.
c. the schedule line will always be oriented at a 45-degree angle.
d. None of the above are correct.

A

a. the schedule line will not go through the behavioral bliss point.
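
The schedule-line idea can be sketched numerically. Assuming a hypothetical FR 10 contingency (10 lever presses per pellet) and made-up baseline preferences, not taken from the textbook:

```python
# Behavioral regulation: an instrumental contingency restricts the animal
# to (presses, pellets) combinations on a "schedule line", which generally
# does not pass through the behavioral bliss point.
ratio = 10                # hypothetical FR 10: 10 lever presses per pellet
bliss_point = (10, 100)   # preferred baseline mix: few presses, much eating

def on_schedule_line(presses, pellets):
    # Under the contingency, earned pellets = presses / ratio.
    return pellets == presses / ratio

# The preferred baseline mix is no longer attainable...
assert not on_schedule_line(*bliss_point)
# ...so the animal must settle for a compromise point on the line.
assert on_schedule_line(100, 10)
```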

40
Q
The term 'schedule line' is associated with which approach to instrumental conditioning?
p. 146-148

a. The drive reduction theory.
b. The response deprivation hypothesis.
c. The Premack principle.
d. The behavioral regulation approach.

A

d. The behavioral regulation approach.