Ch 10 Planning & Evaluating ABA Research Flashcards
(18 cards)
when the data appear to show an effect, but the effect is actually due to chance, confounding variables, or error, not the independent variable. it’s a false positive.
e.g. a BCBA implements a token system, and the client’s behavior improves. However, the change was actually due to a new medication the client started—not the token system.
→ Concluding the token system was effective = ______
type I error
when a behavior analyst fails to detect a real effect of an intervention—they conclude that no functional relationship exists between the independent variable and the behavior when one actually does exist. it’s a false negative.
e.g. A BCBA introduces a visual schedule to reduce transition tantrums, but due to inconsistent implementation or inadequate data, they conclude it didn't work, even though, when used consistently, the schedule would have been effective.
→ Dismissing the intervention = ______
type II error
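The chance element behind a Type I error can be illustrated with a quick simulation (all numbers here are hypothetical, not from the flashcards): even when a treatment has no real effect, random session-to-session variability will sometimes make a baseline-vs-intervention comparison look like improvement.

```python
import random

random.seed(3)

# Hypothetical scenario: the token system has NO real effect, so both
# phases draw tantrum counts from the same distribution (0-9 per session).
def fake_phase(n_sessions):
    return [random.randint(0, 9) for _ in range(n_sessions)]

false_positives = 0
trials = 1000
for _ in range(trials):
    baseline = fake_phase(5)
    intervention = fake_phase(5)
    # A naive decision rule: call the treatment "effective" whenever the
    # intervention mean is at least 2 tantrums lower than baseline.
    if sum(baseline) / 5 - sum(intervention) / 5 >= 2:
        false_positives += 1

# Since there is no true effect, every "effective" verdict is a Type I error.
print(f"Type I error rate under this rule: {false_positives / trials:.1%}")
```

The same setup reversed (a real effect that the rule fails to detect) would produce Type II errors; the decision rule and effect sizes here are invented for illustration only.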
combination of two or more interventions or procedures that are implemented together to change behavior.
e.g. 1. Token Economy + Praise + Response Cost
2. Visual Schedule + Prompting + Reinforcement
3. Behavior Contract + Self-Monitoring + Reinforcement
treatment package
degree to which an intervention is implemented correctly/as planned. ensures the IV (treatment or procedure) is delivered consistently and accurately.
-also called procedural fidelity, implementation fidelity, or intervention integrity
treatment integrity
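Treatment integrity is commonly quantified as the percentage of protocol steps implemented correctly. A minimal sketch of that arithmetic (the checklist data are invented for illustration):

```python
# Hypothetical fidelity checklist from one session:
# True = step implemented as planned, False = missed or incorrect.
steps_observed = [True, True, False, True, True, False, True, True, True, True]

# Common metric: correctly implemented steps / total planned steps x 100.
integrity_pct = 100 * sum(steps_observed) / len(steps_observed)

print(f"Treatment integrity: {integrity_pct:.0f}%")  # 8 of 10 steps = 80%
```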
(intentional or unintentional) gradual deviation from the original treatment protocol during the course of implementation.
e.g. A therapist is supposed to provide a token immediately after every correct response. Over time, they start delaying token delivery or forgetting to provide tokens after some responses.
→ The procedure has moved away from the original plan.
treatment drift
method of repeating an experiment while intentionally varying certain aspects (like subjects, settings, or procedures) to test the generality and robustness of the original findings.
-helps establish external validity—whether the effects of an intervention hold true under different conditions.
Examples of What Might Change:
• Different participants (e.g., a new client)
• Different settings (e.g., classroom instead of clinic)
• Different implementers (e.g., teacher instead of RBT)
• Modified procedures (e.g., using a tablet instead of flashcards)
• Varying schedules or materials
e.g. Original study: DTT improves receptive language in a 5-year-old with ASD in a clinic
• ______: DTT is used with a 7-year-old in a school setting and produces similar results
→ This supports the generalization of the intervention across subjects and settings
systematic replication
extent to which ABA goals, methods, and results are considered appropriate, important, and beneficial by the people affected by the intervention.
Three Main Aspects (Wolf, 1978):
1. Social Significance of the Goals
• Are we targeting behaviors that matter to the client and stakeholders?
• Example: Teaching communication over teaching matching colors, if communication is the current need
2. Social Appropriateness of the Procedures
• Are the intervention methods acceptable and respectful to those involved?
• Example: Using reinforcement instead of intrusive punishment procedures
3. Social Importance of the Effects
• Do the outcomes meaningfully improve the person’s quality of life?
• Example: Does the skill increase independence or improve social relationships?
social validity
repetition of an experimental condition to verify the consistency and reliability of behavior change and to strengthen the validity of findings.
replication
2 types
1. direct replication
2. systematic replication
extent to which an intervention or procedure is implemented exactly as planned, according to the defined protocol.
-also known as treatment integrity or implementation fidelity.
-ensures internal validity
procedural fidelity
type of control condition used primarily in medical and psychological research, where a fake or inactive treatment is given to control for the effects of expectation or belief in treatment. In ABA, placebo control is rarely used because behavior analysis relies on observable, measurable, environmental manipulations, not subjective treatments.
purpose-
-to rule out the effects of expectancy, attention, or novelty
-to determine if the treatment effect is genuine or due to nonspecific factors
placebo control
involves removing components from a treatment package one at a time to see if behavior change is maintained. If the behavior worsens when a component is removed, that component is likely necessary.
Purpose:
• Identify the active components of a treatment
• Simplify the intervention by removing unnecessary elements
• Improve efficiency, acceptability, and social validity
• Increase treatment integrity by reducing complexity
drop-out component analysis
experimental setup in which both the subject and the experimenter are “blind” to the condition (treatment or control) being applied, to prevent expectation effects or measurement bias.
Purpose:
• Reduce observer bias
• Minimize placebo effects
• Increase experimental control
• Ensure that results are due to the independent variable, not expectations
double-blind control
exact repetition of a previously conducted experiment or intervention under the same conditions—same procedures, participants (or similar), setting, and measurement systems. It is used to assess the reliability and consistency of the original findings.
-establishes reliability of the effect
direct replication
experimental strategy used to determine which individual parts of a treatment package are necessary or sufficient for producing behavior change. It helps behavior analysts refine interventions to be more efficient, effective, and ethical.
component analysis
2 types
1. drop-out component analysis
2. add-in component analysis
method used to identify which components of a treatment package are sufficient to produce behavior change by starting with no intervention (or a minimal baseline condition) and then adding components one at a time.
add-in component analysis
direct replication vs systematic replication
- Direct Replication
• Repeating the exact same procedures with the same or similar conditions
• Used to test reliability of findings
• Example: Same intervention, same setting, same participant type
- Systematic Replication
• Repeating the study with planned variations (e.g., new client, new setting, slightly different procedure)
• Used to test the generality of the findings
• Example: Same intervention used with a different learner in a different classroom
drop-out vs add-in component analysis
- Drop-Out Component Analysis
• Begin with the full treatment package
• Systematically remove components one at a time
• See if behavior change maintains or declines
• Tests for necessity (A component is necessary if removing it causes the behavior to stop improving or regress.)
- Add-In Component Analysis
• Begin with no treatment or a basic version
• Gradually add components
• Observe when behavior change starts to occur
• Tests for sufficiency (A component is sufficient if adding it alone (without the other components) results in behavior change.)
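The contrast between the two strategies can be sketched as two loops over a treatment package. All component names, effect numbers, and the threshold below are hypothetical; a real component analysis relies on repeated experimental phases and visual analysis of graphed data, not a single calculation.

```python
# Hypothetical effect each component contributes (invented numbers).
PACKAGE = {"token economy": 4, "praise": 1, "response cost": 2}
THRESHOLD = 4  # behavior change "maintains" if the summed effect >= 4

def total_effect(components):
    return sum(PACKAGE[c] for c in components)

def drop_out_analysis():
    """Start with the full package; remove one component at a time.
    A component is flagged necessary if removing it loses the effect."""
    necessary = []
    for c in PACKAGE:
        remaining = [k for k in PACKAGE if k != c]
        if total_effect(remaining) < THRESHOLD:
            necessary.append(c)
    return necessary

def add_in_analysis():
    """Start from no treatment; add one component at a time.
    A component is flagged sufficient if it alone produces the effect."""
    return [c for c in PACKAGE if total_effect([c]) >= THRESHOLD]

print("necessary components:", drop_out_analysis())
print("sufficient components:", add_in_analysis())
```

With these made-up numbers, the token economy turns out to be both necessary (the package fails without it) and sufficient (it produces the effect on its own), while praise and response cost are neither.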