Lecture 3 Flashcards
Randomized controlled trials (RCTs)
Maximize validity and minimize bias
Internal validity is maximized by having a control group and randomizing all participants
Bias can significantly decrease internal validity
Bias – systematic errors that can occur during the implementation of a study
Define bias
Systematic errors that can occur during the implementation of a study
List and describe selection, history, and maturation bias
1) Selection bias
Preferential enrollment of specific patients into one treatment group over another
Baseline demographics between control and experimental group should be comparable
Patients must have equal probabilities to be allocated to the treatment or control arms
2) History bias
External events that occur during the course of the study (e.g., death of a family member or losing a job)
3) Maturation bias: Normal changes in study participants over time
Define attrition bias, testing bias, and instrumentation bias
1) Attrition bias: Differential dropout of patients from the treatment and/or control groups
2) Testing bias: Studies that require participants to take tests or participate in their own assessment repeatedly over time are susceptible to internal validity problems, because performance can improve simply from repeated testing
3) Instrumentation bias: Changes in the sensitivity of the instrument, improvements in technology, and changes in the measurement techniques over time
Bias; define:
1) Regression to the mean
2) Investigator bias
3) Ascertainment bias
4) Detection bias
1) Regression to the mean: Phenomenon where initial measurements of a variable are extreme (much higher or lower than the population mean), but subsequent measurements fall closer to the average
2) Investigator bias: Errors in study design, implementation, or analysis by the investigator
3) Ascertainment bias: Effects of an intervention can be exaggerated if investigators report only the time points where the measured outcomes show the most benefit and ignore the data showing less impact of the intervention
4) Detection bias: Systematic differences between groups in how outcomes are determined
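Regression to the mean is easy to demonstrate by simulation: select subjects whose first measurement is extreme and remeasure them. This is a hypothetical sketch with made-up parameters (population mean 100, equal true-score and noise variability), not anything from the lecture:

```python
import random

random.seed(0)

# Each observed score = stable true value + random measurement noise.
true_vals = [random.gauss(100, 10) for _ in range(10_000)]
first = [t + random.gauss(0, 10) for t in true_vals]
second = [t + random.gauss(0, 10) for t in true_vals]

# Pick the subjects in the top 5% on the FIRST measurement only.
cutoff = sorted(first)[int(0.95 * len(first))]
extreme = [i for i in range(len(first)) if first[i] >= cutoff]

mean_first = sum(first[i] for i in extreme) / len(extreme)
mean_second = sum(second[i] for i in extreme) / len(extreme)

# On remeasurement the same subjects score closer to the population
# mean (100), even though nothing about them changed.
print(mean_first, mean_second)
```

In an uncontrolled before/after study, this drift toward the mean can masquerade as a treatment effect; a randomized control group absorbs it.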
What is Dr. Lewis's favorite kind of bias?
Investigator bias
Give a study with an example of selection bias
Chaibi A, Benth J, Tuchin PJ, Russell MB. Chiropractic spinal manipulative therapy for migraine: a three‐armed, single‐blinded, placebo, randomized controlled trial. Eur J of Neurol. 2016;24(1):143-153. doi:10.1111/ene.13166
Three-arm, single-blinded trial with a placebo group, a control group, and a CSMT (chiropractic spinal manipulative therapy) group; CSMT appeared effective vs. placebo.
However, patients with fewer baseline headache days were preferentially placed in the intervention group, and more males were enrolled into the CSMT arm, so the CSMT group received more favorable patients: selection bias (which the lecture also attributes to investigator bias).
What factors decrease external validity? Describe each
Subject selection: Selected subjects differ from others in the general population
“How can I apply the results to my patients’ care?”
Treatment: “Can the treatment protocol be used practically?”
Setting: If the trial occurred in-patient, would the results be applicable for out-patient?
Historical factors: Results of past studies may no longer apply
Multiple treatments: How many variables are allowed into the clinical trial?
What other factors are important to consider for RCTs?
Randomization
Allocation concealment
Blinding
Sample size
Research protocol
Participants
Study design
Study measures
Analysis
Randomization: List and describe 4 types
1) Simple randomization: Random number generator to allocate participants
Can lead to treatment arms having unequal numbers of test subjects
2) Block randomization: Total number of subjects to be enrolled in the study is divided into a series of “blocks”
Arms will have comparable number of participants
3) Stratified randomization: Process of ensuring certain baseline characteristics are equal between the groups of a study
Patients are divided into different strata and then randomized using blocks
In very large trials, stratification may not be required
4) Group or cluster randomization: Group of subjects are selected for randomization
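The difference between simple and block randomization can be sketched in a few lines of Python (a hypothetical illustration of the two procedures, not the lecture's software):

```python
import random

random.seed(42)

def simple_randomize(n):
    # Independent "coin flip" per subject: arms can end up unequal.
    return [random.choice(["treatment", "control"]) for _ in range(n)]

def block_randomize(n, block_size=4):
    # Each block holds equal numbers of each arm, shuffled within the
    # block, so group sizes stay comparable throughout enrollment.
    assert block_size % 2 == 0 and n % block_size == 0
    allocations = []
    for _ in range(n // block_size):
        block = (["treatment"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        random.shuffle(block)
        allocations.extend(block)
    return allocations

simple = simple_randomize(20)
blocked = block_randomize(20, block_size=4)
# Blocked allocation is exactly 10/10; simple allocation may not be.
print(simple.count("treatment"), blocked.count("treatment"))
```

Stratified randomization simply runs a scheme like `block_randomize` separately within each stratum (e.g., by sex or disease severity).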
Treatment ratios are also called what?
Treatment ratios = permutations
Allocation Concealment
Allocation concealment occurs when those enrolling patients into the study are prevented from knowing which group the patients are allocated within the study
Decreases risk of selection bias
Occurs prior to the patients being enrolled in the study and stops once they are enrolled
Blinding
Blinding or masking is a process by which those involved in the trial are unaware of what treatment the patients are receiving
Blinding is used to help prevent biases due to investigator, Hawthorne, placebo, and detection
Starts as soon as the patients are enrolled in the study
Blinding: List and describe 4 types
1) Open label
No blinding
Potential for patient reporting and investigator bias
May see open label in Phase 1 drug trials
2) Single-blind
Only one set of individuals is unaware of what the patients are receiving
3) Double-blind
Two sets of individuals are unaware of what the patients are receiving
4) Triple-blind
Patients, providers, and data analysts are all blinded
Blinding: Describe Double-dummy
1) Double-dummy: More than one placebo is used to help the treatments look the same in all the groups
2) Hypothetical example:
Non-inferiority drug trial comparing two drug formulations of the same triptan and its efficacy
Comparing sumatriptan PO vs. sumatriptan SUBQ
Arm 1: Sumatriptan PO and placebo SUBQ
Arm 2: Placebo PO and sumatriptan SUBQ
Sample Size:
1) What is the goal?
2) What is effect size? When is it estimated?
1) Goal is to determine the appropriate number of subjects that are needed to test the primary study hypothesis
2) Statistical estimation of the magnitude of effect due to treatment or the association between two or more variables that is likely to occur
Usually estimated at the beginning of the study, and the sample size is calculated taking the effect size into consideration
Sample size:
1) What is power?
2) Describe power
3) What do you want power to be?
1) Measures the capacity to detect a difference in the study groups if a true difference exists
2) Studies with smaller numbers of subjects often suffer from low power, making it more likely to fail to find differences that truly exist (false negative)
3) Studies are typically designed to have 80% power to detect the difference in treatments equal to the “effect size”
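The relationship among effect size, power, and sample size can be sketched with the standard normal-approximation formula for comparing two group means (a simplification of what dedicated power software computes; not from the lecture):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-group
    comparison of means, with standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "medium" effect (d = 0.5) needs roughly 63 subjects per group at
# 80% power; halving the effect size roughly quadruples the sample.
print(n_per_group(0.5))   # 63
print(n_per_group(0.2))   # 393
```

This shows why studies powered for a large expected effect are prone to false negatives when the true effect turns out smaller.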
If power is less than 80% and you don’t know what else to say, what should you say?
Investigator bias
Research Protocol:
1) What is it?
2) How is it designed?
3) When is an IRB required?
4) How long are journal articles vs. research protocols?
1) Standardized document that provides instructions to the investigators on all aspects of carrying out the study
2) The protocol is designed so that all investigators can understand and implement the study in the same manner at a single site or multiple study sites
3) As mandated by federal regulations, all protocols involving human subjects must also be approved by an institutional review board (IRB)
4) A journal article may be 4 to 7 pages
A research protocol may be 200 to 300 pages
Participants; define the following:
1) Target population
2) Study sample
3) Inclusion/ exclusion criteria
4) Recruitment
1) Target population: Group of people with desired clinical and demographic characteristics that will ultimately benefit from generalization of the study findings
2) Study sample: Refers to a more specific subset of the target population that participates in a study.
3) Inclusion / exclusion criteria: Studies with fewer exclusion criteria are more likely to be generalizable than those with an extensive list of exclusion criteria
4) Recruitment: The recruitment strategy is also a requirement for IRB approval, and is based on ethical principles
Patients may also take an active role in seeking out clinical trials
https://vanderbilt.trialstoday.org/
What tells you the external validity of your study?
Inclusion/ exclusion criteria
What does clinicaltrials.gov show? What ethical considerations were applied to the exclusion criteria, and which variables are excluded as confounders?
Inclusion/exclusion criteria (demonstrated in lecture using clinicaltrials.gov)
Participants: Defining control groups
1) Placebo concurrent control: Control arm receives an inert placebo at the same time the interventional arm receives treatment
2) Active control: Standard of care is applied to both the control and interventional arms
3) Historical control: Group of participants receiving the intervention may be compared to an external group of patients that were observed at a different time (historical control) or in a different treatment setting
Generally, historical controls are not well accepted in the scientific community due to significant internal validity concerns