ABA 515 Cumulative Exam Flashcards

(195 cards)

1
Q

What are probes in ABA research?

A

Intermittent assessments used to determine if behavior change has generalized or maintained.

2
Q

What are two meanings of the term ‘probe’?

A
  1. A method to assess performance
  2. A design feature in a multiple-probe design
3
Q

Give three reasons to use a multiple-probe design.

A
  1. Behavior needs structured trials
  2. Repeated measures could harm learning
  3. Still meets design criteria when embedded in a multiple baseline
4
Q

How is randomization used in alternating treatment designs?

A

Conditions are presented in a random order to reduce bias.

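A minimal Python sketch of randomizing condition order, using two hypothetical treatment labels and session counts (not from the cards themselves):

```python
import random

# Hypothetical sketch: randomizing the presentation order of two
# conditions in an alternating treatments design to reduce bias
# from sequence effects.
conditions = ["Treatment A", "Treatment B"]
schedule = conditions * 5          # 10 sessions, 5 per condition
random.shuffle(schedule)           # random presentation order
print(schedule)
```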
5
Q

How is randomization used in multiple baseline designs?

A

Start times or treatment orders are randomized.

6
Q

What is a combined design?

A

A research design that blends features of two or more single-case designs to strengthen conclusions.

7
Q

What is a mini-reversal?

A

A brief return to baseline or altered condition to verify intervention effects without ending the whole treatment.

8
Q

What are transfer of training and response maintenance?

A

Transfer: Behavior generalizes to new settings
Maintenance: Behavior continues after intervention ends

9
Q

What is statistical regression and why is it a problem in single-case design?

A

It’s when extreme scores naturally move toward the average, which can be mistaken for treatment effects.

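A small simulation can make regression to the mean concrete. This is a hypothetical sketch (invented true level, noise, and cutoff): scores are a stable true level plus random noise, and individuals selected for extreme first scores drift back toward the mean on a second measurement with no treatment at all.

```python
import random

random.seed(1)

# Hypothetical sketch: each score = true level (50) + random noise.
# First and second observations are independent draws.
first = [50 + random.gauss(0, 10) for _ in range(1000)]
second = [50 + random.gauss(0, 10) for _ in range(1000)]

# Select only the most extreme first observations (> 65).
extreme = [i for i, x in enumerate(first) if x > 65]
mean_first = sum(first[i] for i in extreme) / len(extreme)
mean_second = sum(second[i] for i in extreme) / len(extreme)
print(mean_first, mean_second)   # the second mean sits closer to 50
```

If the drop were attributed to an intervention introduced between the two measurements, it would be mistaken for a treatment effect.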
10
Q

What is the guideline for phase duration in single-case research?

A

Phases should continue until clear, stable patterns are observed — not based on arbitrary durations.

11
Q

What are two risks of shifting phases too early?

A

It can confound results and reduce internal validity.

12
Q

What are two criteria for deciding when to shift phases?

A
  1. Use explicit decision rules
  2. Base decisions on stability and trend, not just time
13
Q

What are practice effects?

A

Improvements due to repeated exposure to a task, not because of the intervention.

14
Q

What is variability and why is it important?

A

Fluctuations in data — large variability makes it harder to detect treatment effects.

15
Q

List four causes of variability in single-case designs.

A

Observer drift, environmental changes, inconsistent procedures, inherent behavior variability

16
Q

What is the purpose of outcome research in ABA?

A

To determine whether interventions work under real-world conditions.

17
Q

What is a constructive strategy?

A

Adding a new component to an effective intervention to enhance outcomes.

18
Q

What is a parametric strategy?

A

Testing different levels (e.g., more or less) of an independent variable.

19
Q

What is a comparative strategy?

A

Comparing two different interventions to see which is more effective.

20
Q

What are two defining features of single-case designs?

A

Repeated measurement and replication within the same participant.

21
Q

What is the difference between single-case and group design?

A

Single-case: Few participants, many observations
Group design: Many participants, few observations each

22
Q

List the four strengths of single-case designs.

A

Ongoing feedback, individual focus, generality, evaluation of effectiveness

23
Q

Why does Kazdin promote single-case research?

A

It allows flexibility, feedback, and evaluation of individual responses, especially in applied settings.

24
Q

What are multiple-probe designs used for?

A

To evaluate behavior change without continuous baseline data, often used when frequent testing could affect behavior.

26
What are the key components of an empirical single-subject design (SSED) study?
**Justification, experimental question, method details, single-subject design, data display, and conclusion**.
27
What is the purpose of identifying empirical research?
To ensure the study uses **scientific methods** and provides **valid and reliable data**.
28
List five common single-subject designs used in empirical research.
**ABAB**, **Multiple Baseline**, **Changing Criterion**, **Multiple Treatment**, **Multiple Probe**.
29
What makes an article empirical?
It includes **original data**, uses **systematic methods**, and tests **hypotheses** using **quantitative analysis**.
30
What is a literature review article?
An article that **summarizes past research** rather than collecting new data.
31
What is a position or opinion paper?
An article presenting an **arguable opinion** without experimental data.
32
What is a between-groups design?
A study comparing **different groups**, where each group receives a **different condition or treatment**.
33
How do you recognize a between-groups study from an abstract?
Look for phrases like **'randomly assigned'** or **comparing groups** with **separate treatments**.
34
What is one clue that an article is a literature review?
It references **many studies** and highlights **trends or summaries** rather than original data.
35
What is social validity in published research?
**Assessing whether the goals, procedures, and outcomes** are meaningful to those affected by the study.
36
What are the three elements of baseline logic?
**Prediction**, **Verification**, and **Replication**.
37
What is an ABAB design?
A **reversal design** that introduces and withdraws the intervention to demonstrate experimental control.
38
What is a multiple baseline design?
A design that **staggers the implementation** of the intervention across **subjects, settings, or behaviors**.
39
What is a changing criterion design?
A design where **behavior must meet a series of progressively changing performance criteria**.
40
What is a multiple treatment (alternating treatment) design?
A design that **compares two or more treatments** presented in **rapid alternation**.
41
What is a multiple probe design?
A variation of multiple baseline using **intermittent measurements** rather than continuous baseline data.
42
What is a mini-reversal?
A brief return to a previous condition to **verify a behavior change** during an intervention.
43
What is a combined design?
A design that incorporates elements of **two or more single-case designs** to strengthen validity.
44
Why are combined designs used?
To **increase confidence** in experimental control and rule out alternative explanations.
45
What is parametric analysis?
Testing **different intensities** or levels of the same independent variable.
46
What is a component analysis?
Testing **which parts** of a treatment package are effective by adding/removing components.
47
What is replication?
Repeating a condition or study to **verify the reliability** of behavior change.
48
What is generalization in ABA?
The **transfer of behavior change** across **people, settings, stimuli, or time**.
49
What is response maintenance?
The **continued performance** of a learned behavior **over time** after intervention ends.
50
What is stimulus generalization?
When behavior occurs in the presence of **different but similar stimuli**.
51
What is response generalization?
When **different responses** produce the **same outcome** or function.
52
What is setting/situation generalization?
When a behavior occurs in **settings other than the training setting**.
53
How can generalization be promoted?
By teaching with **multiple exemplars**, **varying SDs**, or using **natural reinforcers**.
54
What is programming common stimuli?
Introducing stimuli during training that will **exist in the natural environment**.
55
What is training loosely?
Varying **noncritical aspects** of teaching to promote flexibility and generalization.
56
What is mediating generalization?
Teaching a behavior that will **help maintain or transfer** other behaviors (e.g., self-instruction).
57
What is indiscriminable contingency?
A reinforcement schedule that makes it **unclear when reinforcement will occur**, promoting maintenance.
58
What is a generalization probe?
A brief test to determine if behavior change **transfers to new contexts or conditions**.
59
How is response maintenance assessed?
By conducting **follow-up observations** after the intervention has ended.
60
What is social validity in ABA?
The **acceptability and importance** of behavior change goals, procedures, and outcomes to clients and stakeholders.
61
What are the three areas of social validity?
  1. **Goals** are socially important
  2. **Procedures** are acceptable
  3. **Outcomes** are meaningful
62
How is social validity typically measured?
Using **questionnaires, interviews**, or **rating scales** completed by clients, caregivers, or professionals.
63
Why is social validity important?
To ensure interventions are **acceptable, ethical**, and **produce meaningful improvements**.
64
What is an example of a socially valid goal?
Teaching a child with ASD to **request help** instead of engaging in challenging behavior.
65
What is treatment integrity?
The degree to which an intervention is **implemented as planned**.
66
How is treatment integrity assessed?
By using **checklists, fidelity forms**, or **direct observation** to verify correct implementation.
67
Why is treatment integrity critical?
Without it, we can’t tell if the **intervention itself** or **poor implementation** affected the results.
68
What is a potential problem with low treatment integrity?
**Incorrect conclusions** about whether the intervention works.
69
What does high treatment integrity help support?
**Internal validity**, by ensuring the independent variable was implemented correctly.
70
What is procedural fidelity?
Another term for **treatment integrity** — ensuring that **procedures match the protocol**.
71
How can practitioners improve treatment integrity?
**Staff training**, **modeling**, **feedback**, and **ongoing supervision**.
72
What is interobserver agreement (IOA)?
The **extent to which two or more observers** report the same observed values of a behavior.
73
Why is IOA important?
It shows that the behavior is **well-defined**, that observers are consistent, and that data are **trustworthy**.
74
What is the difference between accuracy and agreement?
**Accuracy** measures correctness compared to a true value, while **agreement** measures consistency between observers.
75
When should IOA be assessed?
**Throughout an investigation**, ideally in each phase and in **20–30% of sessions**.
76
What is the formula for frequency ratio IOA?
(**Smaller total / Larger total**) × 100
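The formula above can be sketched in a few lines of Python, using hypothetical observer totals:

```python
# Hypothetical counts: two observers tally the same session.
obs1_total = 18
obs2_total = 20

# Frequency ratio (total count) IOA: smaller total / larger total x 100.
ioa = min(obs1_total, obs2_total) / max(obs1_total, obs2_total) * 100
print(f"{ioa:.0f}%")
```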
77
What is a potential problem with frequency ratio IOA?
It can be **misleading if both observers consistently undercount** or overcount.
78
What is the formula for point-by-point agreement IOA?
(**Agreements / (Agreements + Disagreements)**) × 100
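A minimal sketch of this calculation, assuming hypothetical interval-by-interval records from two observers (1 = behavior scored in that interval):

```python
# Hypothetical interval records from two observers.
obs1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
obs2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreements = sum(a == b for a, b in zip(obs1, obs2))
disagreements = len(obs1) - agreements

# Point-by-point IOA: agreements / (agreements + disagreements) x 100.
ioa = agreements / (agreements + disagreements) * 100
print(f"{ioa:.0f}%")
```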
79
What’s a benefit of point-by-point agreement IOA?
It provides a **more accurate** picture of consistency, especially for **variable behavior**.
80
What is reactivity in IOA?
When observers **change how they record** behavior because they know they're being observed.
81
How can reactivity be minimized?
**Blind observations** or **unscheduled checks**.
82
What is observer drift?
A **change in how an observer records** behavior over time.
83
What is a way to reduce observer drift?
**Regular retraining** and **clear definitions** of behavior.
84
What is an example of artifact in data collection?
An observer **becomes more lenient or strict** over time without realizing it.
85
Why is plotting agreement useful?
It helps detect **patterns, inconsistencies**, and **bias** in observer recording.
86
What does IOA NOT guarantee?
**Accuracy** — just because two observers agree doesn't mean they're correct.
87
What is the purpose of measurement in ABA?
To apply **quantitative labels** to behaviors and evaluate **intervention effectiveness**, **behavior change**, and **treatment outcomes**.
88
What are the three steps in behavior measurement?
  1. **Identify** the behavior
  2. **Define** it in observable, measurable terms
  3. **Select** a **data collection method**
89
What does empiricism mean in ABA?
It means relying on **objective observation** and **measurement** of behavior to guide decisions.
90
What is a threat to internal validity?
Any **confounding variable** that interferes with the ability to attribute behavior change to the **intervention**.
91
Define the internal validity threat of history.
**External events** that occur during the study and may influence the behavior being measured.
92
Define maturation as a threat to internal validity.
**Natural developmental changes** over time that affect behavior.
93
What is instrumentation as a threat?
**Changes in the measurement system** or observer behavior over time.
94
Define testing effects.
**Behavior changes** due to repeated exposure to a test, not the intervention.
95
What is statistical regression?
Extreme scores move **toward the mean**, unrelated to intervention.
96
Define treatment diffusion.
When **intervention effects** spread to non-target phases or participants.
97
What is external validity?
The extent to which **results generalize** to other **people, settings, or behaviors**.
98
Define generality across subjects.
Whether results apply to individuals with **different characteristics**.
99
In ABA, the subject serves as their own...?
**Control**.
100
What are the seven essential components of an ABA experiment?
**Research question**, **participant**, **behavior**, **setting**, **measurement system**, **intervention**, **experimental design**.
101
What are the four types of research questions in ABA?
**Demonstration**, **parametric**, **component**, and **comparative**.
102
What are the three fundamental dimensions of behavior?
**Repeatability**, **Temporal extent**, **Temporal locus**
103
Define count in ABA measurement.
A **simple tally** of how many times a behavior occurs.
104
What is rate?
The **frequency of behavior** per unit of time (e.g., responses per minute).
105
When is rate appropriate to use?
When behaviors are **free operant** (discrete, occur repeatedly, minimal displacement).
106
When is it inappropriate to use rate?
For **discrete trials** or **long-duration behaviors**.
107
What is duration?
The **amount of time** a behavior occurs, from onset to offset.
108
When is duration an appropriate measure?
When the concern is that behavior lasts **too long or too short**, or for **task-oriented** behaviors.
109
What is latency?
The time between an **SD (discriminative stimulus)** and the **start of the behavior**.
110
What is interresponse time (IRT)?
The time **between two responses**.
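The count, rate, latency, and IRT measures above can all be derived from one set of timestamped responses. A hypothetical sketch (the session length and response times are invented):

```python
# Hypothetical session: response onset times in seconds, within a
# 300-second (5-minute) observation; the SD is presented at t = 0.
session_length_s = 300
response_times = [12.0, 45.5, 60.0, 130.0, 210.5]

count = len(response_times)                            # simple tally
rate_per_min = count / (session_length_s / 60)         # responses per minute
latency = response_times[0]                            # SD onset to first response
irts = [b - a for a, b in zip(response_times, response_times[1:])]  # time between responses

print(count, rate_per_min, latency, irts)
```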
111
What are percentage measures used for?
**Response accuracy** or **opportunity-based performance**.
112
What is trials-to-criterion?
The number of **responses needed to reach a goal** or mastery level.
113
What is event recording?
Recording each **instance** of a behavior when it occurs.
114
Define whole-interval recording.
**Behavior must occur during the entire interval** to be counted.
115
Define partial-interval recording.
**Behavior occurs at any point** in the interval to be counted.
116
Define momentary time sampling.
**Behavior is recorded** only if it occurs **at the end** of the interval.
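The three interval methods above score the same behavior stream differently. A hypothetical sketch (the second-by-second record and 5-second interval size are invented) shows whole-interval undercounting and partial-interval overcounting relative to momentary time sampling:

```python
# Hypothetical record: 1 = behavior occurring during that second,
# for a 30-second observation scored in 5-second intervals.
stream = [1,1,1,1,1, 0,1,0,0,0, 0,0,0,0,0, 1,1,1,1,0, 0,0,0,0,1, 1,1,1,1,1]
interval = 5
chunks = [stream[i:i + interval] for i in range(0, len(stream), interval)]

whole     = [int(all(c)) for c in chunks]   # occurs for the ENTIRE interval
partial   = [int(any(c)) for c in chunks]   # occurs at ANY point in the interval
momentary = [c[-1] for c in chunks]         # occurs at the END of the interval

print(whole)      # tends to underestimate
print(partial)    # tends to overestimate
print(momentary)
```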
117
What is measurement by permanent product?
Measuring behavior **after it occurs** by evaluating **its effect on the environment**.
119
What is Internal Validity in ABA?
Confidence that the intervention (IV) caused the behavior change (DV), not something else.
120
What are threats to Internal Validity?
History, Maturation, Testing, Instrumentation, Attrition, Selection Bias, Procedural Drift.
121
What is the History threat to Internal Validity?
Outside events coincide with treatment and affect behavior.
122
What is the Maturation threat to Internal Validity?
Natural growth or change over time affects results.
123
What is the Testing threat to Internal Validity?
Repeated exposure to testing influences behavior.
124
What is the Instrumentation threat to Internal Validity?
Changes in measurement tools or observers distort data.
125
What is the Attrition threat to Internal Validity?
Loss of participants changes group characteristics.
126
What is the Selection Bias threat to Internal Validity?
Groups are not equivalent at baseline, affecting outcomes.
127
What is Procedural Drift in Internal Validity?
Inconsistent implementation of procedures over time.
128
What is External Validity in ABA?
Confidence that findings generalize across people, settings, and behaviors.
129
What are threats to External Validity?
Setting Specificity, Subject Specificity, Reactivity, Treatment Specificity.
130
What is the Setting Specificity threat to External Validity?
Results are limited to a particular location.
131
What is the Subject Specificity threat to External Validity?
Effects are seen only with specific individuals.
132
What is the Reactivity threat to External Validity?
Changes occur because participants know they are being observed.
133
What is the Treatment Specificity threat to External Validity?
Effects depend on exact conditions of the intervention.
134
What is Construct Validity in ABA?
Confidence that the intervention and outcomes accurately represent the concepts being studied.
135
What are threats to Construct Validity?
Poor Operational Definitions, Treatment Confounds, Expectancy Effects.
136
What is the Poor Operational Definitions threat to Construct Validity?
IVs or DVs are not clearly or accurately defined.
137
What is the Treatment Confounds threat to Construct Validity?
Uncontrolled variables contaminate the intervention.
138
What is the Expectancy Effects threat to Construct Validity?
Therapist or observer expectations influence behavior outcomes.
139
What is Social Validity in ABA?
Confidence that the goals, procedures, and outcomes are meaningful, acceptable, and useful to clients and society.
140
What are threats to Social Validity?
Poor Goal Selection, Unacceptable Procedures, Low Maintenance/Generalization.
141
What is the Poor Goal Selection threat to Social Validity?
Target behaviors are not socially significant.
142
What is the Unacceptable Procedures threat to Social Validity?
Procedures are viewed as inappropriate or harmful.
143
What is the Low Maintenance/Generalization threat to Social Validity?
Behavior changes do not persist or transfer across environments.