SOWO 940 Flashcards

1
Q

What are the steps in intervention research, and what research questions, activities, and designs are appropriate at each step?

A

The 5-step process is iterative and sequential.
1: Specify the problem and develop a program theory
2: Create and revise program materials
3: Refine program (using efficacy tests)
4: Test effectiveness (in different settings)
5: Disseminate findings and program materials

Step 1: Specify the problem and develop a program theory.

Develop PROBLEM theory of risk, promotive, and protective factors. Develop PROGRAM theory of malleable mediators.

Activities: (a) Search the literature to identify factors related to the problem and program theories, (b) identify intervention level, setting, and agent(s), and (c) using the information from (a) and (b), develop the problem theory and a program theory of how the program will modify mediators; this leads to further specification in logic models and theories of change.

Appropriate research questions: focus on the modifiable factors that contribute to the problem.

Step 2: Create and revise program materials.

Activities: Design and develop a first draft of program materials and measures, and submit it for expert review. The manual specifies essential elements and fidelity criteria, including the outcome measures used to gauge success. Training and other implementation supports should be developed. Pilot testing of the program and measures (i.e., outcome and fidelity measures) is appropriate here to test feasibility and the acceptability of program structures. Expand content to address training and implementation.

Appropriate research questions: (focused on intervention) Can intervention agents deliver program content in the time allotted? Does the sequencing of content make sense to intervention agents and program participants? Are activities culturally congruent with the target population and setting? Do participants seem engaged?

Step 3: Refine program.

Design: Maintain a high degree of control over sites to produce fine-grained analysis.

Activity: (a) Conduct a series of efficacy tests to determine if intervention components have desired effects, estimate effect sizes, and test for moderation and mediation (note: studies must be adequately powered to test moderation/mediation), (b) Develop rules for adaptation based on moderation and mediation tests, community values and needs, and other issues, and (c) The manual is refined based on results and acceptable adaptations are developed and tested based on community needs and values.

Research Question: Do specific components of the intervention have intended effects and to what extent?

Step 4: Test effectiveness.

Activity: (a) Conduct effectiveness studies to test the program at large scale under many elements of routine practice, (b) estimate effects under intent-to-treat (ITT), and (c) estimate effects in efficacy subsets.

Research Question: Does this program have intended effects when implemented as it might be in routine practice?

Step 5: Disseminate program findings & materials.

Activities: Write up and publish study findings and program materials. Develop training materials and certification.

2
Q

As per Fraser and Galinsky (2010), define the concepts of problem theory and theory of change. How are the two related to one another? Develop both a problem theory and theory of change for an intervention within your research area and/or field of expertise.

A

Problem theory is focused on defining and understanding the problem, including its causes. The problem theory should be supported by the literature and based upon a specific problem of interest; it should identify which causal or contextual factors are modifiable.

Theory of change is a pragmatic framework that describes how and why an intervention effects change.

Example:

Problem theory: The social ecological framework provides a multilevel framework for organizing and contextualizing interactive characteristics of individuals, their relationships, communities, and social systems that often work together to increase risk for or protection from experiencing violence, such as commercial sexual exploitation.

Theory of change: Service providers’ use of a tool to screen for, assess risk of, and identify commercial sexual exploitation, together with a protocol for referring children to health-related treatment and legal services, will help improve children’s health and safety.

3
Q

You are considering conducting a study of a community-based, practitioner-developed intervention. What steps should you take before conducting a rigorous investigation to determine the intervention’s evaluability? (~6 steps)

A

Six steps to determine the intervention’s evaluability include:

  1. Require the program to specify a falsifiable logic model as part of its application for funding
  2. Fund a pilot with a corresponding formative evaluation
  3. If repetition of the formative evaluation step is indicated, decide whether to fund it or abandon the program model
  4. Proceed to a process evaluation that verifies the satisfaction of the program’s own falsifiable logic model
  5. If repetition of the process evaluation is needed, decide whether to fund it or abandon the program model
  6. Proceed to a rigorous impact evaluation (efficacy trial)
4
Q

What are key threats to internal validity, and how might you plan a study that would address such threats?

A

Internal validity refers to causal inference (i.e., treatment/intervention caused outcome).

Internal validity examines whether the study design, conduct, and analysis answer the research questions without bias. External validity examines whether the study findings can be generalized to other contexts.

Necessary conditions for causal inference: treatment precedes outcome in time, treatment covaries with outcome, and no other explanations are plausible. Random assignment eliminates systematic selection bias in expectation and reduces several other threats to internal validity.

Threats and responses to internal validity:

  1. Ambiguous temporal precedence (Timing & sequence)
    o Implement intervention before measuring the outcome. For observational studies, a longitudinal design can help establish precedence.
  2. Selection of participants (systematic differences between treatment/control groups that could explain the observed effect)
    o Random assignment
  3. History (outside events occurring concurrently w/ treatment that could explain the observed effect)
    o Control experimental setting (e.g., private space)
    o Identify and measure external events
    o Select groups from the same location/setting
    o Ensure experiment schedule is the same for all participants (e.g., consistent pre/post-tests)
  4. Maturation (natural changes over time in the treatment group)
    o Select a sample w/ same baseline covariates (e.g., same ages) and from the same location so maturation trends are similar
  5. Regression to the mean (participants selected for extreme scores tend to score closer to the mean at retest, which could be mistaken for a treatment effect)
    o Use reliable measures (assuming regression toward the mean is due to measurement error) by increasing the number of items, averaging scores over several timepoints, or using a multivariate function (e.g., make sure the measure does not change drastically after selecting the sample).
    o Ensure treatment/control groups are large enough so there is adequate variation in scores
    o Conduct diagnostic statistical tests to look for regression toward the mean
  6. Attrition (loss of participants or failure to collect data from them could bias estimated effects, especially if missingness is correlated w/ treatment/control condition)
    o Incentivize participants
    o Ensure intervention is culturally and personally relevant, and acceptable among the population
    o Ensure data collection is easy and flexible (e.g., select brief and easy to understand measures; allow electronic data collection)
    o Statistical methods related to missing data methods like imputation and other adjustments
    o Intent-to-treat analysis helps, under the assumption that attrition is completely random
  7. Testing (exposure to a test can affect scores on subsequent exposures which could be interpreted as a treatment effect)
    o Use different tests/measures that measure the same construct
    o Measure a potential testing effect by only giving select units a pretest
  8. Instrumentation (The measure and treatment/control conditions may change over time or there can be differences in participants’ understanding of the measure which could be interpreted as a treatment effect)
  9. Additive or interactive effects of threats to internal validity
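The regression-to-the-mean threat (item 5 above) can be made concrete with a small simulation. This is an illustrative sketch with made-up score distributions, not from the course material: with no intervention at all, a group selected for extreme pretest scores still shifts toward the mean at posttest.

```python
import random

random.seed(1)

# Each "true score" is stable; each observation adds measurement error.
true_scores = [random.gauss(50, 10) for _ in range(10000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select an "extreme" group: highest pretest scores (no treatment given).
extreme = [i for i, p in enumerate(pretest) if p > 70]

pre_mean = sum(pretest[i] for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)

# The extreme group's mean falls back toward 50 at posttest even though
# nothing happened — a change that could be mistaken for a treatment effect.
print(f"pretest mean of extreme group:  {pre_mean:.1f}")
print(f"posttest mean of extreme group: {post_mean:.1f}")
```

The mitigation strategies above (more reliable measures, averaging over timepoints) shrink the measurement-error component that drives this artifact.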
5
Q

With particular consideration of internal validity, how are experimental and quasi-experimental designs different? In your research area, describe in detail two randomized and two quasi-experimental studies that have advanced knowledge in important ways.

A

• Experimental designs include random assignment to equivalent comparison groups; quasi-experimental designs do not include random assignment and, at times, have no comparison group.
• Because quasi-experimental designs lack random assignment, selection bias is possible, which makes it more difficult to rule out other threats to internal validity, such as history, maturation, and regression. Quasi-experimental designs do not necessarily differ from randomized, experimental designs regarding testing, instrumentation, and attrition.

6
Q

In the context of quasi-experimental intervention research, describe and explain study design elements that help enhance internal validity and confidence in any significant findings.

A

Four categories of design elements can help enhance internal validity and confidence in quasi-experimental designs:
1. Non-random assignment strategies:
a. Cut-off based assignment (basis for regression discontinuity designs): Participants are assigned to groups based on their score on a particular variable. Participants receiving below the cut-off score are assigned to one group, and participants above the cut-off are assigned to another group.
b. Waitlists: Compare participants receiving the intervention to eligible participants on the waitlist
c. Matching and stratification: Ensuring the groups are equivalent on multiple variables, prior to the intervention.

  2. Measurement (refers to the design of when and how measurement occurs, not the measurement tool itself): In general, temporality of the post-test is key to causal inference. Different combinations of pre/post-tests can help rule out confounders and improve causal inference.
    a. Non-equivalent dependent variables (DV): Measure a non-target DV related to potential confounders in addition to the target DV; movement on the target DV but not the non-target DV helps rule out confounders as the cause of change.
    b. Multiple substantive post-tests: Measuring multiple outcomes can help establish a plausible argument for the predicted and actual change.
    c. Pre-tests: Can be used to rule out selection bias by establishing that participants were not selected based on their extreme scores. Helps account for attrition b/c you know scores prior to participants’ drop-off.
    d. Repeated pre-tests: Can help reveal potential effects related to maturation, testing, instrumentation, and regression to the mean.
    e. If you cannot conduct pre-tests then you can (a) ask participants to recall their pre-test status; (b) conduct a pre-test with a proxy outcome variable; (c) use an independent pre-test sample, meaning conduct a pre-test with a random sample equivalent to participants but not enrolled in the study.
    f. Measure a moderator variable to help rule out confounders
    g. Try to measure threats to validity (confounders) and adjust for them.
  3. Comparison groups
    a. Select comparison groups based on factors deemed important for the outcome, and ensure the treatment/comparison groups are equivalent based on those factors
    b. Select multiple non-equivalent comparison groups
    c. Use cohort control groups (i.e., groups that move through an institution in cycles)–cohorts are assumed to be more comparable to each other than non-equivalent groups
    d. Selecting “internal” comparison groups helps protect against confounders (e.g., selecting from w/in the same school/institution v. different schools/institutions)
  4. Treatment (timing and methods can help control for confounders)
    a. Switching replication: provide treatment to control group at a later time, or use multiple comparison groups and provide treatment to each of them
    b. Reversed treatment: provide treatment intended to produce the opposite effect as the experimental treatment
    c. Removed and repeated treatment: remove the treatment then repeat the treatment to show a pattern of response then non-response

General note: Establishing causal inference in non-randomized studies requires more data and assumptions.

7
Q

What are the elements of the narrative in an R21 proposal?

A
  1. Specific aims
  2. Research strategy which includes:
    • Significance
    • Innovation
    • Approach (i.e., study design, sample, timeline, measurements, and analytic plan)
8
Q

In an R21 proposal, how are the elements integrated or related? I.e. what purpose does each element serve?

A

The specific aims section drives the proposal and conveys the main purpose of the study. Subsequent sections should therefore describe how the study design meets the specified aims. The significance section conveys the importance of the study (i.e., what critical gap is being filled). The innovation section conveys what is new or novel about the study. The approach section conveys, in great detail, the methods used to accomplish the specific aims.

9
Q

What are some key quasi-experimental designs?

A

Difference-in-differences: Multiple time-series design with a comparison group. Need to establish equivalent slopes before the intervention, then non-equivalent slopes after the intervention.
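As an illustration (with hypothetical group means, not from the source), the difference-in-differences estimate is just two subtractions:

```python
# Difference-in-differences with hypothetical pre/post group means.
# Assumes parallel trends: absent treatment, both groups would change alike.
treat_pre, treat_post = 20.0, 32.0
comp_pre, comp_post = 22.0, 28.0

# The comparison group's change estimates the counterfactual trend,
# which is subtracted out of the treatment group's change.
did = (treat_post - treat_pre) - (comp_post - comp_pre)
print(f"DiD estimate of the treatment effect: {did:.1f}")  # (12 - 6) = 6.0
```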

Propensity score analysis: The propensity score is the probability of receiving treatment given all relevant covariates. You can match, weight, or stratify: matching assigns each treated individual an equivalent comparison individual; weighting calculates statistics adjusting for differences in propensity score by group; stratifying groups units into ranges of similar propensity scores.
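A minimal stdlib-only sketch of the weighting variant, using a hypothetical dataset with a single binary covariate (all numbers invented for illustration). Units with the high covariate value are more likely to be treated and have higher outcomes, so the naive group difference is confounded; inverse-propensity weighting recovers the covariate-adjusted effect.

```python
# Hypothetical rows of (covariate x, treated flag, outcome y).
data = [
    (0, 0, 10), (0, 0, 11), (0, 0, 9),  (0, 1, 14),
    (1, 0, 20), (1, 1, 25), (1, 1, 24), (1, 1, 26),
]

def propensity(x):
    """e(x) = P(treated | x), estimated here from simple cell proportions."""
    cell = [d for d in data if d[0] == x]
    return sum(d[1] for d in cell) / len(cell)

# Inverse-propensity weights: 1/e(x) for treated, 1/(1-e(x)) for controls.
num_t = sum(y / propensity(x) for x, t, y in data if t == 1)
den_t = sum(1 / propensity(x) for x, t, y in data if t == 1)
num_c = sum(y / (1 - propensity(x)) for x, t, y in data if t == 0)
den_c = sum(1 / (1 - propensity(x)) for x, t, y in data if t == 0)
ate = num_t / den_t - num_c / den_c

# Naive contrast (4 treated, 4 control rows) ignores the confounder.
naive = (sum(y for _, t, y in data if t) / 4
         - sum(y for _, t, y in data if not t) / 4)
print(f"naive difference: {naive:.2f}, IPW estimate: {ate:.2f}")
```

With these invented numbers the naive difference (9.75) overstates the weighted estimate (4.5), which matches the within-covariate-cell contrasts.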

Regression discontinuity: Participants are assigned to groups based on a cutoff score on an assignment variable measured prior to treatment. Those on one side of the cutoff are in one group; those on the other side are in the other group.
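A toy simulation (hypothetical assignment variable, cutoff, and effect size) of a sharp regression discontinuity: fit a line on each side of the cutoff and read the treatment effect as the jump in predicted outcome at the cutoff.

```python
import random

random.seed(7)

# Assignment variable z (e.g., a needs score); units with z >= 60 are treated.
CUTOFF, EFFECT = 60.0, 8.0
data = []
for _ in range(2000):
    z = random.uniform(20, 100)
    treated = z >= CUTOFF
    y = 0.5 * z + (EFFECT if treated else 0.0) + random.gauss(0, 3)
    data.append((z, y))

def fit_line(points):
    """Simple least-squares slope and intercept."""
    n = len(points)
    mz = sum(z for z, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((z - mz) * (y - my) for z, y in points)
         / sum((z - mz) ** 2 for z, _ in points))
    return b, my - b * mz

b0, a0 = fit_line([p for p in data if p[0] < CUTOFF])
b1, a1 = fit_line([p for p in data if p[0] >= CUTOFF])

# Treatment effect = discontinuity (jump) in predicted outcome at the cutoff.
jump = (a1 + b1 * CUTOFF) - (a0 + b0 * CUTOFF)
print(f"estimated jump at cutoff: {jump:.1f} (true effect: {EFFECT})")
```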

10
Q

What is required for an experimental design?

A

For an experimental design, a study must have random assignment, at least one control and one treatment group, the manipulation of the independent (treatment) variable, and a comparison of baseline and post-test outcomes.

11
Q

What is required for a quasi-experimental design?

A

A quasi-experimental design requires a comparison between groups or trajectories, a manipulation of the independent (treatment) variable, and a baseline and post-test comparison of outcomes. However, units are not assigned to conditions randomly. Instead, the groups could be formed naturally (e.g., by level of need or interest) and attempts could be made to measure for potential confounding differences between groups.

12
Q

What is required for a pre-experimental design?

A

A pre-experimental design often has only one group and no comparison condition, and it may not have both baseline and follow-up data collection. Such a design could include a one-group pretest-posttest design, where all sampled individuals receive the treatment along with a pre- and post-test to compare scores before and after the intervention.

13
Q

What is probability sampling?

A

A probability sampling technique involves everyone in the population having a known, nonzero probability of selection.
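For illustration, a simple random sample — one probability sampling technique — drawn from a hypothetical sampling frame, where each unit's selection probability n/N is known by design:

```python
import random

random.seed(3)

# Hypothetical population frame of 500 people.
frame = [f"person_{i}" for i in range(500)]
n = 50

sample = random.sample(frame, n)   # draw n units without replacement
inclusion_prob = n / len(frame)    # every unit's known selection probability

print(f"each unit's probability of selection: {inclusion_prob:.2f}")
```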

14
Q

What is necessary to infer causation?

A

In order to infer causation, one must have (1) a statistical association between the independent/treatment variable and the dependent variable/outcome, (2) temporal precedence, and (3) nonspuriousness or the lack of potential confounding explanations.
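The nonspuriousness condition can be illustrated with a toy simulation (all numbers hypothetical) in which a confounder produces an association between a "treatment" and an outcome that disappears once the confounder is held constant:

```python
import random

random.seed(11)

# A confounder drives both treatment uptake and the outcome; the treatment
# itself has NO effect, yet treated and untreated units differ on average.
rows = []
for _ in range(10000):
    c = random.random() < 0.5                    # confounder (e.g., motivation)
    t = random.random() < (0.8 if c else 0.2)    # confounder drives uptake
    y = (60 if c else 40) + random.gauss(0, 5)   # confounder drives outcome
    rows.append((c, t, y))

def mean_y(subset):
    ys = [y for _, _, y in subset]
    return sum(ys) / len(ys)

# Naive treated-vs-untreated gap is large (spurious association)...
naive_gap = (mean_y([r for r in rows if r[1]])
             - mean_y([r for r in rows if not r[1]]))
# ...but conditioning on the confounder makes it vanish.
gap_c = (mean_y([r for r in rows if r[0] and r[1]])
         - mean_y([r for r in rows if r[0] and not r[1]]))

print(f"naive treated-vs-untreated gap: {naive_gap:.1f}")
print(f"gap among high-confounder units only: {gap_c:.1f}")
```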

15
Q

What goes into a manual?

A
  1. Program overview, description, rationale
  2. Conception of problem, including problem theory and theory of change
  3. Program goals
  4. Program theory, including an explanation of the format
  5. Example session content

(Carroll & Nuro, 2002)

16
Q

Fundamental problem of causal inference

A
  • Think: no multiverse! (yet)
    • We can’t both give you an intervention and withhold it, so we never observe both potential outcomes for the same unit
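The same point in potential-outcomes notation, with invented numbers: each unit has two potential outcomes, but any one study observes only one of them per unit, so individual causal effects are unobservable and non-random assignment can bias the observed contrast.

```python
# Potential-outcomes sketch of the fundamental problem (hypothetical data).
units = [
    # (y_if_treated, y_if_control) — both exist only "in the multiverse"
    (12, 10),
    (15, 11),
    (9,  9),
    (14, 10),
]

# True individual effects and their average (never observable in practice).
true_effects = [y1 - y0 for y1, y0 in units]
true_ate = sum(true_effects) / len(true_effects)

# In one study, each unit reveals only one potential outcome. Suppose the
# first two units happened to be treated (non-random assignment).
observed_diff = (sum(y1 for y1, _ in units[:2]) / 2
                 - sum(y0 for _, y0 in units[2:]) / 2)

print(f"true average effect (unobservable): {true_ate:.2f}")
print(f"observed group difference:          {observed_diff:.2f}")
```

With these invented numbers the observed difference (4.0) overstates the true average effect (2.5), which is why assignment mechanisms matter.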
17
Q

What is involved in an evaluability assessment

A
  • Pre-research to determine if something can be evaluated
  • GREAT TIME FOR CBPR
  • Need to do evaluability assessments to prove to a funder that the intervention is ready to be evaluated
    • Get you to the next stage of efficacy studies
  • Feasibility Studies: Whether an intervention should be recommended for testing—very similar [part of an evaluability assessment]
    • Not quite interchangeable but close
  • Acceptability is a great place to start—Do you like this intervention?
  • Demand—Are people interested and signing up for your intervention?
  • Implementation
  • Practicality
  • Adaptation
  • Integration
  • Expansion
  • Limited efficacy
18
Q

Rubin potential outcomes framework

A
  • method = statistical notation
  • strengths = concise, compatible with regression-based observational methods (e.g., PSM), translatable, specification of ignorability
  • weaknesses = SUTVA may be unrealistic, little to say about generalizability, no rules for choosing covariates, no typology of threats
19
Q

Pearl’s framework

A
  • DAGs
  • graphs depicting causal relations among variables, drawn according to formal rules
  • strengths = visual, allows for specification of substantive knowledge based on theory and observation
  • weaknesses = potentially high knowledge requirements, no guidelines for identifying confounders
20
Q

Campbell’s framework of validities

A
  • validity typology
  • method: description and design
  • strengths = richness & detail of threats to inference, emphasis on design and falsification
  • weaknesses = less emphasis on effect estimation, lack of conciseness, no structure for the specification of threats