Chapter 8 & 9 Study Guide Flashcards
- The vertical displacement on a cumulative record represents
p. 121
a. the cumulative number of responses.
b. the number of subjects responding.
c. the passage of time.
d. the schedule of reinforcement.
a. the cumulative number of responses.
- The horizontal displacement on a cumulative record represents
p. 121
a. the cumulative number of responses.
b. the number of subjects responding.
c. the passage of time.
d. the schedule of reinforcement.
c. the passage of time.
- If a rat responded at a high rate, then stopped responding for a period of time, and then responded at a low rate, the cumulative record would show
p. 121
a. A horizontal line followed by a shallow slope followed by a steep slope.
b. A shallow slope followed by a horizontal line followed by a steep slope.
c. A steep slope followed by a horizontal line followed by a shallow slope.
d. A steep slope followed by a shallow slope followed by a horizontal line.
c. A steep slope followed by a horizontal line followed by a shallow slope.
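The idea behind the question above is that the slope of a cumulative record at any point is the response rate: steep means fast responding, horizontal means no responding. A minimal Python sketch with hypothetical response timestamps illustrates this (the data and function names are illustrative, not from the text):

```python
# Hypothetical response times (seconds): fast responding, a long pause, then slow responding
times = [1, 2, 3, 4, 5, 30, 40, 50, 60]  # no responses between t=5 and t=30

# A cumulative record pairs each response time with the running response count
record = [(t, n + 1) for n, t in enumerate(times)]

def rate(record, t0, t1):
    """Responses per second between t0 and t1 -- the slope of the record there."""
    in_window = [t for t, _ in record if t0 <= t <= t1]
    return len(in_window) / (t1 - t0)

print(rate(record, 0, 5))    # steep slope: high response rate
print(rate(record, 5, 30))   # nearly horizontal line: the pause
print(rate(record, 30, 60))  # shallow slope: low response rate
```

Reading the three slopes in order (steep, flat, shallow) reproduces answer c above.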
- If a pigeon receives reinforcement for every 5 key pecks, the reinforcement schedule in effect is a
p. 122
a. variable ratio.
b. fixed ratio.
c. fixed interval.
d. variable interval.
b. fixed ratio.
- If reinforcement of a particular response depends on how much time has passed since the last reinforcer, the procedure is called
p. 123
a. a ratio schedule.
b. an interval schedule.
c. a ratio run.
d. a timed schedule.
b. an interval schedule.
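The ratio/interval distinction in the two questions above comes down to what the schedule counts: responses (ratio) versus elapsed time since the last reinforcer (interval). A small Python sketch of the two rules, with illustrative function names not taken from the text:

```python
def fixed_ratio(n):
    """FR n: deliver a reinforcer on every nth response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def fixed_interval(interval):
    """FI t: reinforce the first response made after `interval` seconds elapse."""
    last_reinforcer = 0.0
    def respond(now):
        nonlocal last_reinforcer
        if now - last_reinforcer >= interval:
            last_reinforcer = now
            return True
        return False
    return respond

# FR 5, as in the pigeon question: only every 5th key peck is reinforced
fr5 = fixed_ratio(5)
print([fr5() for _ in range(10)])
```

Variable (VR, VI) schedules differ only in that `n` or `interval` is redrawn around an average after each reinforcer, which is why they produce steadier responding without predictable pauses.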
- A fixed ratio schedule of reinforcement typically produces
p. 122
a. an unpredictable rate of responding.
b. a steady rate of responding with no predictable pauses.
c. alternations between high response rates and postreinforcement pauses.
d. None of the above are correct.
c. alternations between high response rates and postreinforcement pauses.
- Which of the following schedules of reinforcement would you expect to produce the longest ratio run?
p. 122-123
a. FR 2.
b. FR 10.
c. VR 7.
d. VR 20.
b. FR 10.
- A variable ratio schedule of reinforcement typically produces
p. 123
a. an unpredictable rate of responding.
b. a steady high rate of responding with no predictable pauses.
c. alternations between high response rates and post-reinforcement pauses.
d. None of the above are correct.
b. a steady high rate of responding with no predictable pauses.
- A fixed interval schedule of reinforcement typically produces
p. 124-125
a. an unpredictable rate of responding.
b. a steady rate of responding with no predictable pauses.
c. alternations between high response rates and post-reinforcement pauses.
d. None of the above are correct.
c. alternations between high response rates and post-reinforcement pauses.
- A variable interval schedule of reinforcement typically produces
p. 125
a. an unpredictable rate of responding.
b. a steady low rate of responding with no predictable pauses.
c. alternations between high response rates and post-reinforcement pauses.
d. None of the above are correct.
b. a steady low rate of responding with no predictable pauses.
- Which of the following schedules of reinforcement will produce the longest postreinforcement pause?
p. 122-125
a. VR 10.
b. VR 50.
c. FR 10.
d. FR 40.
d. FR 40.
- Which of the following schedules of reinforcement will produce a 'scalloped' cumulative record?
p. 124-125
a. FI 5.
b. FR 1.
c. VR 5.
d. VI 5.
a. FI 5.
- Which of the following statements regarding heterogeneous and homogeneous chains is correct?
p. 128-130
a. A homogeneous chain is a chained schedule, but a heterogeneous chain is not really a chained schedule because it does not involve a sequence of responses.
b. A heterogeneous chain is a chained schedule, but a homogeneous chain is not really a chained schedule because it does not involve a sequence of responses.
c. In a homogeneous chain, the various response components all involve the same response.
d. In a heterogeneous chain, the various response components all involve the same response.
c. In a homogeneous chain, the various response components all involve the same response.
- In order to get a food pellet, a hungry rat must first pull a chain once and then press a lever 10 times. This is an example of
p. 129
a. a heterogeneous chained schedule.
b. a homogeneous chained schedule.
c. a concurrent schedule of reinforcement.
d. a randomly ordered schedule of reinforcement.
a. a heterogeneous chained schedule.
- The technique used in the establishment of response chains where the last response or response component of the chain is taught first is called
p. 131
a. a forward chaining procedure.
b. a backward chaining procedure.
c. a concurrent chaining procedure.
d. a feedback chaining procedure.
b. a backward chaining procedure.
- The technique used in the establishment of response chains where the first response or response component of the chain is taught first is called
p. 131
a. a forward chaining procedure.
b. a backward chaining procedure.
c. a concurrent chaining procedure.
d. a feedback chaining procedure.
a. a forward chaining procedure.
- Experiments have demonstrated that
p. 131
a. response chains cannot be learned using a backward chaining procedure.
b. response chains cannot be learned using a forward chaining procedure.
c. response chains can be learned using either a forward chaining procedure or a backward chaining procedure.
d. the establishment of response chains is difficult when either a forward chaining or a backward chaining procedure is used.
c. response chains can be learned using either a forward chaining procedure or a backward chaining procedure.
- Which of the following schedules of reinforcement is best suited for the investigation of choice behavior?
p. 132
a. Homogeneous.
b. Heterogeneous.
c. Chained.
d. Concurrent.
d. Concurrent.
- The relative rate of responding on a response alternative is equal to the relative rate of reinforcement obtained with that response alternative. This is called the
p. 133
a. Premack principle.
b. Law of Effect.
c. matching law.
d. post-reinforcement rule.
c. matching law.
- In his 'Law of Effect', Thorndike proposed that
p. 137-138
a. the stimulus produces an association between the instrumental response and the reinforcer.
b. the reinforcer produces an S-R association.
c. instrumental conditioning is the result of the formation of an S-S association.
d. a stimulus does not become associated with the instrumental response.
b. the reinforcer produces an S-R association.
- Who is best associated with drive reduction theory?
p. 139
a. Thorndike.
b. Skinner.
c. Premack.
d. Hull.
d. Hull.
- Which of the following theories of reinforcement makes use of the concept of homeostasis?
p. 139
a. The Premack principle.
b. The Law of Effect.
c. Drive reduction theory.
d. Response deprivation.
c. Drive reduction theory.
- Rats kept in the dark will press a lever to turn on a light. This is evidence against which of the following theories of reinforcement?
p. 139
a. Differential probability.
b. Drive reduction theory.
c. Response deprivation hypothesis.
d. Premack principle.
b. Drive reduction theory.
- According to the drive reduction theory of reinforcement, food is an effective reinforcer because
p. 139
a. an organism will perform an instrumental response to obtain food.
b. it reduces the hunger drive.
c. an association between the food and hunger is learned with repeated training.
d. None of the above are correct.
b. it reduces the hunger drive.