Lecture 3 Flashcards

1
Q

Randomized-control trials

A

Maximize validity and minimize bias
Internal validity is maximized by having a control group and randomizing all participants
Bias can significantly decrease internal validity
Bias – systematic errors that can occur during the implementation of a study

2
Q

Define bias

A

Systematic errors that can occur during the implementation of a study

3
Q

List and describe the selection, history, and maturation types of bias

A

1) Selection bias
Preferential enrollment of specific patients into one treatment group over another
Baseline demographics between control and experimental group should be comparable
Patients must have equal probabilities to be allocated to the treatment or control arms
2) History bias
External events that occur during the course of the study (e.g., death of a family member or losing a job)
3) Maturation bias: Normal changes in study participants over time

4
Q

Define attrition bias, testing bias, and instrumentation bias

A

1) Attrition bias: Differential dropout of patients from the treatment and/or control groups
2) Testing bias: Studies that require participants to take tests or assess themselves repeatedly over time are susceptible to internal validity problems, because improvement can occur simply from repeated testing
3) Instrumentation bias: Changes in the sensitivity of the instrument, improvements in technology, and changes in the measurement techniques over time

5
Q

Bias; define:
1) Regression to the mean
2) Investigator bias
3) Ascertainment bias
4) Detection bias

A

1) Regression to the mean: Phenomenon where initial measurements of a variable are extremely different, either higher or lower, from the population mean, but then subsequent measurements are closer to the average
2) Investigator bias: Errors in study design, implementation, or analysis by the investigator
3) Ascertainment bias: Effects of intervention can be exaggerated if the investigators choose only those time points where the measured outcomes show the most benefit for the intervention, and ignore the data showing less impact of the intervention on outcomes
4) Detection bias: Systematic differences between groups in how outcomes are determined
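Regression to the mean can be demonstrated with a short simulation (a hypothetical illustration, not from the lecture: every subject shares a population mean of 100 and each measurement adds independent noise):

```python
import random

random.seed(0)
# Every subject's true value equals the population mean (100); each
# measurement adds independent noise (SD 10).
pairs = [(100 + random.gauss(0, 10), 100 + random.gauss(0, 10))
         for _ in range(10000)]
# Select only subjects whose FIRST measurement was extremely high.
extreme = [(m1, m2) for m1, m2 in pairs if m1 > 115]
mean_first = sum(m1 for m1, _ in extreme) / len(extreme)
mean_second = sum(m2 for _, m2 in extreme) / len(extreme)
# The repeat measurement of the same subjects falls back toward 100,
# even though nothing about the subjects changed.
print(round(mean_first, 1), round(mean_second, 1))
```

This is why an untreated group that is enrolled because of extreme baseline values can appear to "improve" on its own.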

6
Q

What is Dr. Lewis's favorite kind of bias?

A

Investigator bias

7
Q

Give a study with an example of selection bias

A

Chaibi A, Benth J, Tuchin PJ, Russell MB. Chiropractic spinal manipulative therapy for migraine: a three-armed, single-blinded, placebo, randomized controlled trial. Eur J Neurol. 2016;24(1):143-153. doi:10.1111/ene.13166

Three-arm, single-blinded trial with placebo, control, and CSMT (chiropractic) groups. Patients with fewer baseline headache days were preferentially enrolled into the intervention group, and more males were placed in the CSMT group, so the CSMT arm received more favorable patients. The reported benefit of chiropractic care over placebo therefore reflects selection bias introduced by the investigators (investigator bias).

8
Q

What factors decrease external validity? Describe each

A

Subject selection: Selected subjects differ from others within the general population
“How can I apply the results to my patients’ care?”
Treatment: “Can the treatment protocol be used practically?”
Setting: If the trial occurred in-patient, would the results be applicable for out-patient?
Historical factors: Results of past studies may no longer apply
Multiple treatments: How many variables are allowed into the clinical trial?

9
Q

What other factors are important to consider for RCTs?

A

Randomization
Allocation concealment
Blinding
Sample size
Research protocol
Participants
Study design
Study measures
Analysis

10
Q

Randomization: List and describe 4 types

A

1) Simple randomization: Random number generator to allocate participants
Can lead to treatment arms having unequal numbers of test subjects
2) Block randomization: Total number of subjects to be enrolled in the study is divided into a series of “blocks”
Arms will have comparable numbers of participants
3) Stratified randomization: Process of ensuring certain baseline characteristics are equal between the groups of a study
Patients are divided into different strata and then randomized using blocks
In very large trials, stratification may not be required
4) Group or cluster randomization: Group of subjects are selected for randomization
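The block randomization scheme above can be sketched in a few lines of Python (a minimal illustration with an assumed block size and arm names, not the lecture's own procedure):

```python
import random

def block_randomize(n_subjects, block_size=4, arms=("treatment", "control"), seed=42):
    """Permuted-block randomization: each block contains equal numbers of
    each arm, so group sizes stay balanced throughout enrollment."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_subjects:
        block = list(arms) * (block_size // len(arms))  # e.g. [T, C, T, C]
        rng.shuffle(block)                              # permute within the block
        assignments.extend(block)
    return assignments[:n_subjects]

alloc = block_randomize(20)
# Every complete block is balanced, so the arms end up equal in size
print(alloc.count("treatment"), alloc.count("control"))  # 10 10
```

Stratified randomization applies the same idea within each stratum (e.g., a separate set of blocks per sex or age band).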

11
Q

Treatment ratios are also called what?

A

Treatment ratios = permutations

12
Q

Allocation Concealment

A

Allocation concealment occurs when those enrolling patients into the study are prevented from knowing which group the patients are allocated within the study
Decreases risk of selection bias
Occurs prior to the patients being enrolled in the study and stops once they are enrolled

13
Q

Blinding

A

Blinding or masking is a process by which those involved in the trial are unaware of what treatment the patients are receiving
Blinding is used to help prevent biases due to investigator, Hawthorne, placebo, and detection
Starts as soon as the patients are enrolled in the study

14
Q

Blinding: List and describe 4 types

A

1) Open label
No blinding
Potential for patient reporting and investigator bias
May see open label in Phase 1 drug trials
2) Single-blind
Only one set of individuals is unaware of what the patients are receiving
3) Double-blind
Two sets of individuals are unaware of what the patients are receiving
4) Triple-blind
Patients, providers, and data analysts are all blinded

15
Q

Blinding: Describe Double-dummy

A

1) Double-dummy: More than one placebo is used to help the treatments look the same in all the groups
2) Hypothetical example:
Non-inferiority drug trial comparing the efficacy of two formulations of the same triptan
Comparing sumatriptan PO vs. sumatriptan SUBQ
Arm 1: Sumatriptan PO and placebo SUBQ
Arm 2: Placebo PO and sumatriptan SUBQ

16
Q

Sample Size:
1) What is the goal?
2) What is effect size? When is it estimated?

A

1) Goal is to determine the appropriate number of subjects that are needed to test the primary study hypothesis
2) Statistical estimation of the magnitude of effect due to treatment or the association between two or more variables that is likely to occur
Usually estimated at the beginning of the study, and the sample size is calculated taking the effect size into consideration

17
Q

Sample size:
1) What is power?
2) Describe power
3) What do you want power to be?

A

1) Measures the capacity to detect a difference in the study groups if a true difference exists
2) Studies with smaller numbers of subjects often suffer from low power, making it more likely to fail to find differences that truly exist (false negative)
3) Studies are typically designed to have 80% power to detect the difference in treatments equal to the “effect size”
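The 80% power convention above translates directly into a sample size calculation. A normal-approximation sketch (slightly smaller than exact t-based tables) for comparing two means:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, where d is Cohen's d."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2)

# A medium effect (d = 0.5) at 80% power needs ~63 subjects per arm;
# halving the detectable effect size quadruples the required sample.
print(n_per_group(0.5), n_per_group(0.25))  # 63 252
```

This is why trials powered to detect small effect sizes require dramatically larger enrollments.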

18
Q

If power is less than 80% and you don’t know what else to say, what should you say?

A

Investigator bias

19
Q

Research Protocol:
1) What is it?
2) How is it designed?
3) When is an IRB required?
4) How long are journal articles and research protocols?

A

1) Standardized document that provides instructions to the investigators on all aspects of carrying out the study
2) The protocol is designed so that all investigators can understand and implement the study in the same manner at a single site or multiple study sites
3) As mandated by federal regulations, all protocols involving human subjects must also be approved by an institutional review board (IRB)
4) A journal article may be 4 to 7 pages
A research protocol may be 200 to 300 pages

20
Q

Participants; define the following:
1) Target population
2) Study sample
3) Inclusion/ exclusion criteria
4) Recruitment

A

1) Target population: Group of people with desired clinical and demographic characteristics that will ultimately benefit from generalization of the study findings
2) Study sample: Refers to a more specific subset of the target population that participates in a study.
3) Inclusion / exclusion criteria: Studies with fewer exclusion criteria are more likely to be generalizable than those with an extensive list of exclusion criteria
4) Recruitment: The recruitment strategy is also a requirement for IRB approval, and is based on ethical principles
Patients may also take an active role in seeking out clinical trials
https://vanderbilt.trialstoday.org/

21
Q

What tells you the external validity of your study?

A

Inclusion/ exclusion criteria

22
Q

What does clinicaltrials.gov show?

A

Inclusion/exclusion criteria
What ethical considerations were applied to the exclusion criteria? Which variables were excluded as confounders?

23
Q

Show clinicaltrials.gov

24
Q

Participants: Defining control groups

A

1) Placebo concurrent control
2) Active control: Standards of care applied to control and interventional arms
3) Historical control: Group of participants receiving the intervention may be compared to an external group of patients that were observed at a different time (historical control) or in a different treatment setting
-Generally, historical controls are not well accepted in the scientific community due to significant internal validity concerns

25
Q

Study Design:
1) Define parallel
2) Define and describe crossover

A

1) Parallel: Each subject is randomized to either a treatment group or a control group
2) Crossover: Each subject receives all of the interventions based on a specified sequence of events
a) Carryover and washout
-Are the effects from the first intervention obscuring the true effect of the second intervention?
-Hypothetical study: an SSRI & MAOI crossover design without washout would obscure the outcomes and put the patient at risk for serotonin syndrome

26
Q

Study Design:
1) Define factorial design
2) Define adaptive design
3) What is another design?

A

1) Factorial design: Designed to evaluate multiple interventions in a single experiment
2) Adaptive design: Process of assigning patients to a treatment group based on previous success of the treatment as the trial progresses
3) Noninferiority and superiority trials

27
Q

What is another word for dropout?

A

Attrition

28
Q

Show a Parallel Randomized-Clinical Trial.

29
Q

Show a Crossover Randomized-Clinical Trial.

30
Q

Describe baseline measurements

A

Collect baseline information on all study participants in a clinical trial prior to randomization
Baseline information is also important for subgroup analysis of the primary and secondary outcome variables
Baseline characteristics help us determine how generalizable the data will be to the rest of the target population

31
Q

What is the first step of measurements?

A

Baseline measurements

32
Q

Measurements: Outcomes
1) Primary
2) Secondary
3) Surrogate

A

1) Primary outcome: One outcome
Specified before the trial begins (a priori) and forms the basis for the main study hypothesis and sample size calculation
2) Secondary outcomes: Several outcomes
Can collect data on this information, but the study design is not intended to show cause/effect relationships between secondary outcomes and the intervention
3) Surrogate endpoints: Can be a primary or secondary outcome, but most appropriate as a secondary outcome
Example: Assessing CD4 cell counts and HIV viral load to assess the efficacy of antiretroviral therapy in patients with HIV

33
Q

Safety monitoring: What board? Describe

A

Independent Data and Safety Monitoring Board (DSMB)
Committee of scientists not associated with the conduct of the study, who evaluate adverse events at regularly scheduled intervals during the course of the study and provide feedback to the investigator and the IRB regarding continuation of the study as planned

34
Q

Describe analysis

A

Evaluation of the study endpoints usually occurs after all assessments are completed
Interim analysis plans are often included in protocols that allow for stopping the trial early due to clinically meaningful efficacy or major safety concerns
The decision to conduct an interim analysis is made by the external scientific review board (or the DSMB) at prespecified intervals, based on statistical principles related to assumptions about the differences expected between interventions
The conditions under which a study is stopped early should be specified prior to the implementation of the study

35
Q

Intention-to-treat analysis:
1) What does it do?
2) Who is still analyzed?
3) What does this require of researchers?
4) What is the main reason to use it?
5) What does it mimic?

A

1) Analyzes patients as if they completed the study in their originally assigned group
2) If patients are allocated to a high-dose group but then drop out or take a lower dose for safety reasons, they are still analyzed as part of the high-dose group
3) To complete this analysis successfully, the researchers need to impute missing data based on how the data are missing in the study
4) The main reasons intention-to-treat analysis is utilized are to maintain the randomization and to account for attrition
5) Mimics what will occur in the real world (preferred)

36
Q

Describe per-protocol analysis

A

Only those subjects who completed all aspects of the protocol are evaluated
A pitfall of this type of analysis is that any exclusion of patients compromises the randomization and does not account for significant patient dropouts, which may lead to bias in the results

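The contrast between intention-to-treat and per-protocol analysis can be shown with a toy simulation (hypothetical numbers; dropout outcomes are treated as perfectly recovered for the ITT arm):

```python
import random

random.seed(1)
# Hypothetical trial: outcome = symptom score (lower is better).
# The treatment truly lowers scores by ~5 points, but the sickest
# treated patients drop out, so completers look better than they should.
control = [random.gauss(50, 10) for _ in range(500)]
treated = [random.gauss(45, 10) for _ in range(500)]
completers = [x for x in treated if x < 55]  # differential dropout of the sickest

def mean(xs):
    return sum(xs) / len(xs)

# Intention-to-treat: analyze everyone as randomized (dropout outcomes
# here are assumed perfectly imputed); per-protocol: completers only.
itt_effect = mean(control) - mean(treated)
pp_effect = mean(control) - mean(completers)
print(round(itt_effect, 1), round(pp_effect, 1))
```

The per-protocol estimate exaggerates the treatment effect because excluding dropouts breaks the randomization, which is the pitfall described above.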
37
Q

Which is better: intention-to-treat analysis or per-protocol?

A

Intention-to-treat analysis