Unit 3 (chap 9-11) Flashcards

1
Q

Mentalistic explanation of behavior

A

Assumptions about the existence of an inner and mental dimension as the cause of behavior.

2
Q

Motivating operation (MO)

A

an environmental and/or biological event that (1) temporarily alters the value of a specific reinforcer and (2) increases/decreases the probability of behaviors yielding that reinforcer.

3
Q

Establishing operation (EO)

A

is an environmental and/or biological event that (1) temporarily increases the value of a specific reinforcer and (2) increases the probability of behaviors yielding that reinforcer.

4
Q

Abolishing operation (AO)

A

an environmental and/or biological event that (1) temporarily decreases the value of a specific reinforcer and (2) decreases the probability of behaviors yielding that reinforcer.

5
Q

Reinforcer survey

A

a structured interview or written survey that asks the individual to identify highly preferred activities.

6
Q

Stimulus preference assessment

A

a rank-ordered list of preferred stimuli is obtained by observing choices between those stimuli.

7
Q

Preference hierarchy

A

A list of stimuli rank-ordered from most to least preferred.

8
Q

Quality refers to the..?

A

subjective value of a reinforcer, which can vary from one individual to the next.

9
Q

Punishment

A

is the process or procedure whereby a punisher decreases the future probability of an operant behavior.

9
Q

Habits are formed when

A

an operant response has been repeatedly reinforced, hundreds if not thousands of times, in the presence of the same antecedent stimulus.

9
Q

Punisher

A

a contingent consequence that decreases the future probability of a behavior below its pre punishment level.

10
Q

Positive punishment

A

the contingent presentation of a consequence that decreases the future probability of the behavior below its no punishment level.

11
Q

Negative punishment

A

remove to reduce.
the contingent removal, reduction, or prevention of a reinforcer, the effect of which decreases the future probability of the behavior below its no punishment level.

12
Q

Primary punisher

A

a contingent consequence that functions as a punisher because, in the evolutionary past of the species, this consequence decreased the chances of survival.

13
Q

Conditioned punisher

A

a contingent consequence that signals a delay reduction to a backup punisher.

14
Q

Time out from a positive reinforcement

A

a signaled, response-contingent suspension of a positive reinforcement contingency, the effect of which decreases the future probability of problem behavior.

15
Q

4 guidelines for effectively using time out from positive reinforcers

A
  1. Provide no more than one verbal warning.
  2. Significantly reduce access to reinforcers during the time out.
  3. End the time out after no more than 5 minutes.
  4. Ensure every instance of the problem behavior produces a time out.
16
Q

response cost punishers

A

negative punishers that involve the removal or reduction of a reinforcer

17
Q

When punishing agents are watching, behavior is _________ to occur.
A. more likely
B. less likely

A

B. Less likely.

18
Q

_______________ punishers signal a delay reduction to a backup punisher.

A

conditioned

19
Q

Continuous reinforcement

A

every instance of the response is reinforced.

20
Q

intermittent reinforcement

A

the response is sometimes but not always reinforced.

21
Q

Schedule of reinforcement

A

precisely specifies the nature of the contingent relation between a response and its reinforcer.

22
Q

Ratio schedule of reinforcement

A

specifies the number of responses that must be made in order for the reinforcer to be delivered.

23
Q

Fixed ratio (FR) schedule

A

the number of responses required per reinforcer is the same every time.

24
Q

Cumulative record

A

a graphical display of responding as it unfolds over time.
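The idea can be sketched in a few lines of Python (the response timestamps below are hypothetical, not from the text): a cumulative record simply pairs each moment of responding with the running total of responses made so far.

```python
from itertools import accumulate

# Hypothetical response timestamps, in seconds; each response adds 1 to the count.
response_times = [1.0, 1.5, 2.0, 8.0, 8.4, 8.9, 15.0]

# Running total of responses at each response time. Plotted against time,
# steep stretches are bursts of responding and flat stretches are pauses.
cumulative_counts = list(accumulate(1 for _ in response_times))
cumulative_record = list(zip(response_times, cumulative_counts))
```

On a plotted cumulative record, a post reinforcement pause appears as a flat segment and a "break and run" pattern as alternating flat and steep segments.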

25
Q

Typical pattern of responding under an FR schedule of reinforcement

A

a post reinforcement pause followed by a high constant rate of responding that ends with a reinforcer.

26
Q

Variable ratio (VR) schedule

A

the number of responses required per reinforcer is not the same every time
- e.g., gambling

27
Q

Typical pattern of responding under a VR schedule

A

high rate of responding with little or no post reinforcement pause.

28
Q

interval schedule of reinforcement

A

specifies the amount of time that must elapse before a single response will produce the reinforcer.

29
Q

a fixed interval (FI) schedule

A

specifies a constant time interval that must elapse before a single response will produce the reinforcer.

30
Q

Typical FI response pattern of nonhuman subjects

A

a post reinforcement pause gives way to an accelerating response rate that terminates with a reinforcer.

31
Q

Variable interval (VI) schedule

A

the amount of time that must elapse before the first response is reinforced is not the same every time.

32
Q

Typical pattern of responding under a VI schedule

A

a steady, moderate response rate with little to no post reinforcement pause.

33
Q

Piper Chapman is asked by her cellmates to sit by the door, wait for a cockroach to wander by, and grab it. If the sight of a cockroach is the reinforcer, what is the schedule of reinforcement?

A

Variable interval

34
Q

Scott was speeding, got into an accident, totaled his car, and had to pay a large fine. It has been 10 years since that incident, but Scott has never driven above the speed limit again. The accident and fine functioned as a?

A

Punisher

35
Q

Which schedule of reinforcement produces a post reinforcement pause that resembles procrastination, followed by a characteristic “break and run” pattern of responding?

A

fixed ratio

36
Q

Austin can never tell when his friend will be home. When he wants to visit, he drives past his friend’s house over and over again; as soon as Austin sees that the friend’s car is parked out front, he can go in. If the behavior of interest is driving past the friend’s house, and the reinforcer is seeing the car, what schedule of reinforcement is this?

A

variable interval

37
Q

What type of graphical display shows responding as it unfolds over time?

A

Cumulative record

38
Q

What principle of behavior analysis states that access to a high-probability behavior will always function as a reinforcer when made contingent on emitting a low-probability behavior?

A

Premack principle

39
Q

Abby runs for 3 hours a week, and she paints for 12 hours a week. According to the Premack principle, Abby will run more hours per week if we arrange the contingency:

A

If she runs for an hour → then she can paint for an hour.

40
Q

AO vs EO

A
  • Abolishing operation
  • establishing operation

Abolishing temporarily decreases the value of the reinforcer and thus decreases the behavior

Establishing temporarily increases the value of the reinforcer and thus increases the behavior

41
Q

Example of an AO

A

After three cups of coffee in the morning, motivation to go get a cup of coffee after class is low. Because you had so much coffee earlier, you no longer want to engage in the behavior.

42
Q

Four reinforcer dimensions

A

CISQ
- contingency
- immediacy
- size
- quality

43
Q

six principles of effective punishment

A

“RIGPEC”
- Reinforcement first
- Immediate delivery of punishment
- Goldilocks zone
- Punishment with extinction
- Every time it occurs
- Contingently

44
Q

conditioned/primary punishment

A
  • Primary punisher: a contingent consequence that functions as a punisher because it decreased the chances of survival in the evolutionary past of the species.
  • Conditioned punisher: a stimulus that signals a delay reduction to a backup punisher.
45
Q

Habituation (punishment)

A
  • A reduced response to a punisher due to repeated exposure.
  • If you are exposed to a punisher too often, it loses its effect and you stop responding to it.
46
Q

Reinforcement

A

Behaviors are strengthened or weakened by their consequences. Reinforcement involves the delivery of a stimulus following a behavior that increases the likelihood of that behavior occurring again in the future.

47
Q

extinction

A

gradual weakening and eventual disappearance of a previously learned behavior when the reinforcement or reward that was previously associated with that behavior is no longer provided

48
Q

Differential reinforcement

A

used to selectively reinforce certain behaviors while extinguishing others.

49
Q

Response consequence contingency

A

describes the causal (IF–>THEN) relation between an operant behavior and its consequence.

50
Q

Positive punishment

A

add to decrease
adding an aversive stimulus to decrease the likelihood of a behavior occurring again in the future.

51
Q

Negative punishment

A

remove to decrease
Removing a desirable stimulus or outcome immediately following a behavior to decrease the likelihood of that behavior occurring again.

52
Q

Negative reinforcement

A

Removal of an aversive stimulus to increase the likelihood of the behavior occurring again.

53
Q

Positive reinforcement

A

the addition or presentation of a desirable stimulus immediately following a behavior which increases the likelihood of that behavior occurring again in the future.

54
Q

Schedule thinning

A

a procedure for gradually reducing the rate of reinforcement, while maintaining the desired behavior.

55
Q

Example of a fixed interval schedule

A

A weekly paycheck. The employee receives reinforcement every seven days, which may result in a higher response rate as payday approaches.

56
Q

How do variable ratio and fixed ratio schedules differ?

A

One is based on a fixed number of responses and one is based on an average.

57
Q

Fixed ratio schedule

A
  • Reinforcement is delivered contingent on responses.
  • The number of responses that are required does not change until the schedule does.
58
Q

Example of a fixed ratio

A

each time the client labels correctly you give them a token.

59
Q

Variable Ratio schedule

A
  • Reinforcement is delivered contingent on responses.
  • The number of responses required is based on an average
60
Q

Example of variable ratio

A

A VR3 might deliver reinforcement after the 2nd response, then the 4th response, then the 3rd response (an average of 3).
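As a quick arithmetic check of that example (using the same hypothetical requirements): what makes the schedule a VR3 is that the varying requirements average out to 3.

```python
# The three hypothetical response requirements from the example above.
requirements = [2, 4, 3]

# A VR schedule is named for the *mean* requirement, not any single value.
mean_requirement = sum(requirements) / len(requirements)
assert mean_requirement == 3.0  # (2 + 4 + 3) / 3
```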

61
Q

Fixed interval schedule

A
  • reinforcement is delivered contingent on the first response after a certain amount of time.
  • The amount of time required never changes.
62
Q

Example of a fixed interval

A

You tell a client they have to sit in their chair quietly for 4 minutes.

63
Q

Variable interval schedule

A
  • Reinforcement is delivered contingent on the first response after a certain amount of time.
  • The amount of time required is based on an average.
64
Q

Example of a variable interval

A

Reinforcement is delivered for the first response after 2 minutes, then after 3 minutes, then after 2 minutes.

65
Q

Continuous reinforcement

A
  • Every response is reinforced
  • FR1
  • Fixed ratio 1
66
Q

Intermittent reinforcement

A
  • All other types of reinforcement schedules
  • Can be any other schedule: FR2, VR10, FI2, VI2
67
Q

Define ratio

A

refers to the number of responses needed to gain access to reinforcement.
- If a learner needs to make 5 responses before coming into contact with reinforcement, the practitioner will wait for all 5 responses before delivering reinforcement.

68
Q

Define Interval

A

Refers to the amount of time that needs to go by before reinforcement is available again.

69
Q

Define fixed

A

Refers to a set number.
- Whether that be in a ratio schedule or interval schedule.

70
Q

Define variable

A

Refers to an average number.
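The four schedule types above (fixed/variable × ratio/interval) can be summarized in a short sketch. This is my own illustration, not from the textbook: a ratio schedule counts responses, an interval schedule times elapsed time, and "fixed" vs. "variable" is just whether the requirement list holds one repeated value or values that vary around an average.

```python
def ratio_schedule(requirements):
    """FR if every requirement is identical, VR if they vary around an average.

    `requirements` lists the response counts needed per reinforcer; the list
    is cycled. The returned respond() reports whether a reinforcer is earned.
    """
    state = {"count": 0, "index": 0}

    def respond():
        state["count"] += 1
        required = requirements[state["index"] % len(requirements)]
        if state["count"] >= required:
            state["count"] = 0
            state["index"] += 1
            return True  # reinforcer delivered
        return False

    return respond


def interval_schedule(intervals, clock):
    """FI if every interval is identical, VI if they vary around an average.

    Reinforces the first response after the current interval has elapsed.
    `clock` is any zero-argument function returning the current time.
    """
    state = {"start": clock(), "index": 0}

    def respond():
        interval = intervals[state["index"] % len(intervals)]
        if clock() - state["start"] >= interval:
            state["start"] = clock()
            state["index"] += 1
            return True  # reinforcer delivered
        return False

    return respond
```

For example, `ratio_schedule([3, 3, 3])` behaves as an FR3 (every third response reinforced), while `ratio_schedule([2, 4, 3])` behaves as the hypothetical VR3 from the earlier card; passing a fake clock into `interval_schedule([5], clock)` gives an FI5 in which only the first response after 5 time units is reinforced.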