Flashcards in C6 Deck (23):
Concurrent-chain Reinforcement Schedule
A complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link.
Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial. Concurrent-chain schedules allow for the study of choice with commitment.
Concurrent Schedule
A complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Concurrent schedules allow for the measurement of direct choice between simple schedule alternatives.
Continuous Reinforcement [CRF]
A schedule of reinforcement in which every occurrence of the instrumental response produces the reinforcer.
Cumulative Record
A graphical representation of how a response is repeated over time, with the passage of time represented by the horizontal distance (or x axis), and the total or cumulative number of responses that have occurred up to a particular point in time represented by the vertical distance (or y axis).
Delay Discounting
Decrease in the value of a reinforcer as a function of how long one has to wait to obtain it.
Fixed-Interval Scallop
The gradually increasing rate of responding that occurs between successive reinforcements on a fixed-interval schedule.
Fixed Interval Schedule [FI]
A reinforcement schedule in which the reinforcer is delivered for the first response that occurs after a fixed amount of time following the last reinforcer or the beginning of the trial.
Fixed Ratio Schedule [FR]
A reinforcement schedule in which a fixed number of responses must occur in order for the next response to be reinforced.
Intermittent Reinforcement
A schedule of reinforcement in which only some of the occurrences of the instrumental response are reinforced. The instrumental response is reinforced occasionally, or intermittently. Also called partial reinforcement.
Interresponse Time [IRT]
The interval between one response and the next. IRTs can be differentially reinforced in the same fashion as other aspects of behavior, such as response force or response variability.
Interval Schedule
A reinforcement schedule in which a certain amount of time is required to set up the reinforcer. A response is reinforced only if it occurs after the reinforcer has been set up.
Limited Hold
A restriction on how long a reinforcer remains available. In order for a response to be reinforced, it must occur before the end of the limited-hold period.
Matching Law
A rule for instrumental behavior, proposed by R. J. Herrnstein, which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response alternative.
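Herrnstein's matching relation described above is conventionally written as follows; the symbols (B for response rates, r for reinforcement rates) are the standard textbook notation, not taken from this deck:

```latex
\frac{B_1}{B_1 + B_2} = \frac{r_1}{r_1 + r_2}
```

Here B_1 and B_2 are the rates of responding on the two alternatives, and r_1 and r_2 are the rates of reinforcement earned on each.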
Melioration
A mechanism for achieving matching by responding so as to improve the local rates of reinforcement for response alternatives.
Partial Reinforcement
The same as intermittent reinforcement.
Post-Reinforcement Pause
A pause in responding that typically occurs after the delivery of the reinforcer on FR and FI schedules of reinforcement.
Ratio Run
The high and invariant rate of responding observed after the post-reinforcement pause on FR schedules. The ratio run ends when the ratio requirement has been completed and the participant is reinforced.
Ratio Schedule
A schedule in which reinforcement depends only on the number of responses the participant performs, irrespective of when those responses occur.
Ratio Strain
Disruption of responding that occurs on ratio schedules when the response requirement is increased too rapidly.
Schedule of reinforcement
A program, or rule, that determines how and when the occurrence of a response will be followed by the delivery of the reinforcer.
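The rules defined in this deck can be made concrete in code. Below is a minimal sketch (not part of the deck; class and method names are illustrative) of how fixed-ratio and fixed-interval schedules decide whether a given response produces the reinforcer:

```python
class FixedRatio:
    """FR-n: reinforce every nth response, regardless of timing."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0   # counting restarts after each reinforcer
            return True      # reinforcer delivered
        return False


class FixedInterval:
    """FI-t: reinforce the first response occurring at least t seconds
    after the last reinforcer (or after the start of the trial)."""
    def __init__(self, t):
        self.t = t
        self.last = 0.0      # time of last reinforcer / trial start

    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True      # first response after the interval elapses
        return False


fr5 = FixedRatio(5)
print([fr5.respond() for _ in range(10)])
# on FR 5, the 5th and 10th responses are reinforced
```

Variable schedules (VR, VI) follow the same logic with the requirement drawn from a distribution around a mean rather than fixed.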
Undermatching
Less sensitivity to the relative rate of reinforcement than predicted by the matching law.
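Deviations such as undermatching are usually described with the generalized matching law (standard notation, not from this deck):

```latex
\log \frac{B_1}{B_2} = s \, \log \frac{r_1}{r_2} + \log b
```

where s is the sensitivity parameter and b is response bias; perfect matching corresponds to s = 1, and undermatching to s < 1.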
Variable Interval Schedule
A reinforcement schedule in which reinforcement is provided for the first response that occurs after a variable amount of time from the last reinforcer or the start of the trial.