Epidemiology: Designing Epidemiological Studies 2 Flashcards

1
Q

2 Approaches for assessing whether an exposure is associated with an outcome

A

1. Experimental studies / clinical trials
2. Observational studies

2
Q

What determines the strength of association between an exposure and an outcome

A

robustness of evidence

3
Q

Observational study outline

A

Observe the study sample as it is: exposures and outcomes are recorded without the investigators assigning any intervention.

4
Q

Limitations of observational studies

A

The groups being compared may differ in many characteristics besides the one being investigated, so differences in outcome cannot be confidently attributed to the exposure.

5
Q

What are the gold standards in terms of evidence

A

Experimental studies and clinical trials

6
Q

What is the best type of clinical trial

A

Randomised controlled trials

7
Q

Describe randomised controlled trials

A

Experimental studies which compare and assess the effectiveness of two or more treatments to see whether one treatment is better than the other.
• Evaluates the impact of an intervention on an outcome
• Tests two or more treatments to see which is better (a treatment could be a drug or another method of care)
• Always needs a comparative group which acts as a control (existing treatment or placebo)
• Used to assess prescribed treatment
• Includes all patients in the analysis regardless of whether they followed treatment
• Individuals are randomly allocated to receive one of two or more interventions (equal chance of each intervention)

8
Q

How can we use existing treatments as controls in randomised controlled trials

A

Test a new treatment against the existing standard treatment to check whether it is more effective, or to see what the side effects are and how common they are.

9
Q

What does information from the follow up of a control group in a randomised controlled trial allow

A
Allows researchers to see whether the new treatment they are testing is any more or less effective than the existing treatment or placebo.
10
Q

Why is the choice of controls critical in a randomised controlled trial

A
• Maximises the value of the trial
• The control needs to be a drug people actually use
• It needs to be given at the correct dose
11
Q

How are randomised controlled trials characterised

A
• Through the use of randomisation: participants are sorted into groups by chance
• They are experiments, not observational studies
12
Q

Bias of randomised controlled trials

A
• Incorrect analysis of the data: intention to treat means analysing patients according to the group they were originally assigned, and not doing this can lead to bias
• Omitting those lost during the study
13
Q

Bias of randomised controlled trials

A
• Incorrect analysis of the data: intention to treat means analysing patients according to the group they were originally assigned, and not doing this can lead to bias
• Omitting those lost during the study
• Sources of bias: the patient, clinical staff, the physician, and the team interpreting the results
14
Q

Should we include missing trial participants in our analysis of randomised controlled trials

A

Yes: you must account for missing trial participants and include all participants in the analysis in the groups to which they were originally assigned, regardless of whether they followed their allocated intervention.

15
Q

Should we include Drop-out of participants / missing data in our analysis of randomised controlled trials

A

Yes: you must account for missing trial participants and include all participants in the analysis in the groups to which they were originally assigned, regardless of whether they followed their allocated intervention.

16
Q

Is bias possible in randomised placebo-controlled trials

A

Yes - Although randomised controlled trials minimise bias, they are not free from bias!

17
Q

Gold standard randomised controlled trials

A

Placebo-controlled trials

18
Q

What is the intention-to-treat analytic approach

A

Analysis of patients according to the group to which they were originally assigned.
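As an illustration only (a minimal sketch with hypothetical data and column names, not from the source), intention to treat means summarising outcomes by the randomised arm, regardless of adherence:

```python
import pandas as pd

# Hypothetical trial data: 'assigned' is the randomised arm,
# 'adhered' records whether the participant actually took the allocated treatment.
trial = pd.DataFrame({
    "assigned":  ["drug", "drug", "drug", "placebo", "placebo", "placebo"],
    "adhered":   [True,   False,  True,   True,      True,      False],
    "recovered": [1,      0,      1,      0,         1,         0],
})

# Intention-to-treat: analyse by the group originally assigned,
# keeping every randomised participant in the analysis.
print(trial.groupby("assigned")["recovered"].mean())

# Contrast: a per-protocol analysis would drop non-adherent participants,
# which can reintroduce the bias that randomisation was designed to remove.
print(trial[trial["adhered"]].groupby("assigned")["recovered"].mean())
```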

19
Q

The process of statistically analysing patients’ outcomes using the original groups to which they were allocated, irrespective of whether they took the medicine or not, is referred to as:

A

Intention to treat analysis

20
Q

Are allocations in randomised controlled trials determined by investigators, clinicians and participants

A

No. Allocations are not predictable by any of these parties; allocation lies entirely in the randomisation process.

21
Q

2 steps in effective randomisation

A
1. Generation of the allocation sequence
2. Implementation of the allocation (allocation concealment)
22
Q

How do we generate the allocation sequence in randomisation

A
• Simple randomisation (e.g. names drawn from a hat) for small studies
• A random numbers table or computer software that generates a random sequence, for bigger studies
• The number of participants must be large enough
• Everything is controlled and kept identical except the treatment being tested
A minimal computer-generated example follows below.
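A minimal sketch of computer-generated simple randomisation (illustrative only; the arm labels, sample size and seed are assumptions, not from the source):

```python
import random

def simple_randomisation(n_participants, arms=("treatment", "control"), seed=2024):
    """Generate a simple (unrestricted) allocation sequence: each participant
    has an equal, independent chance of receiving each arm."""
    rng = random.Random(seed)  # fixed seed keeps the sequence reproducible/auditable
    return [rng.choice(arms) for _ in range(n_participants)]

# Example: allocation sequence for a 20-participant pilot study
print(simple_randomisation(20))
```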
23
Q

How do we implement allocation (allocation concealment) in randomisation and why is it important

A
• No one should know who will receive which treatment. This matters because such knowledge could influence whether a patient is included or excluded based on the perceived prognosis by the recruiting clinician, who may subconsciously justify including or excluding a patient based on what they think the outcome will be.
Methods:
• Sequentially numbered opaque envelopes, opened by the recruiting clinician on participant enrolment. They should be truly opaque, sequentially numbered, not opened in advance, and the seal should not be broken.
• Central randomisation: central randomisation services issue the treatment allocations, which overcomes strategic scheduling in which recruiters engineer their preferred allocation (and therefore their preferred outcome).
24
Q

Which bias does effective randomisation prevent

A

Selection Bias

25
Q

The process of randomisation is a two-step process involving generation of the allocation sequence and then implementation of the allocation (involving concealment where possible). True or False

A

True

26
Q

A randomised placebo-controlled double-blinded trial always involves two study arms: True or False

A

False. Many RCTs will involve three or possibly more arms. While a control (either placebo or the standard care) is required, it is possible to have more than one intervention group. For example, intervention group 1 might receive 30mg of the drug, and intervention group 2 might receive 60mg of the drug.

27
Q

Define blinding

A
A procedure in which one or more parties in a trial are kept unaware of which treatment participants have been assigned to; this reduces bias.
28
Q

Define blinding in design/delivery of clinical trials

A
A procedure in which one or more parties in a trial are kept unaware of which treatment participants have been assigned to; this reduces bias (conscious or unconscious).
29
Q

What are the different parties involved in a clinical trial which may be sources of bias

A

• The patient being treated
• The clinical staff administering the treatment
• The physician assessing the treatment
• The team interpreting the results

30
Q

2 types of blinding in clinical trials

A
• Single blinding
• Double blinding
31
Q

What is single blinding

A
Only one party is blinded, usually only the participants.
32
Q

What is a double blind study

A
When both parties, the participants and the study staff, are blinded.
33
Q

Why is blinding (double-blind studies) useful

A
• Prejudice from participant and clinician is removed
• Prevents performance and detection bias
34
Q

What is performance bias

A

Performance bias refers to systematic differences between groups in care that is provided, or in exposure to factors other than the exposure of interest (eg. more focus on group given drug) - reduced with blinding

35
Q

What is detection bias

A

Detection bias refers to systematic differences between groups in how outcomes are determined (e.g. more frequent exams or diagnostic tests in one group lead to more outcomes being detected); it is reduced with blinding.

36
Q

Blinding prevents performance and detection bias, what else can it prevent?

A

Withdrawal from studies. Withdrawal can lead to incomplete outcome data and bias, e.g. patients may drop out if they know they are receiving the placebo, and clinicians may indicate this to the patient if they themselves know.

37
Q

Sample size

A

• Must be determined before the trial starts
• Calculate the power of the trial: power increases with increasing sample size
• Withdrawals and anticipated dropout rates should be considered when calculating the sample size

38
Q

What is the power of a study and how does this affect sample size

A

The ability to detect an effect or association if one truly exists. Power increases with increasing sample size; you should aim for power of at least 80%, ideally 90%.

39
Q

What does 90% power of study mean

A

90% power means that if the true difference between treatments is equal to the one we planned for, there is only a 10% chance the study will fail to detect it.

40
Q

When may blinding in clinical trials not be possible

A
• Certain techniques, for example surgical techniques
• Medication requiring titration (increasing/decreasing the dose)
• It may be ethically impossible to blind the patient
• Increased financial implications
41
Q

Single blind

A

A trial where the participants (the patients) are not aware which arm of the trial they are in (intervention vs. control).

42
Q

Double blind

A

A trial where the participants AND attending clinicians are not aware which arm of the trial participants are in.

43
Q

Triple blind

A

A trial where the participants / patients AND attending clinicians AND analysis team are not aware which arm of the trial participants are in.

44
Q

What is the step in randomisation that attempts to prevent persons involved in the trial from knowing the allocation of participants to study arms.

A

allocation concealment

45
Q

Which bias is driving this in randomised controlled trials: investigators subconsciously wanting positive results.

A

Performance bias refers to systematic differences between groups in care that is provided, or in exposure to factors other than the exposure of interest.

46
Q

Which bias is driving this in randomised controlled trials: inventors of new techniques or technologies wanting to validate their work.

A

Performance bias refers to systematic differences between groups in care that is provided, or in exposure to factors other than the exposure of interest.

47
Q

Which bias is driving this in randomised controlled trials: investigators standing to profit either indirectly through improving their reputation, or directly by having financial interests in the technologies concerned.

A

Performance bias refers to systematic differences between groups in care that is provided, or in exposure to factors other than the exposure of interest.

48
Q

The ability to detect a difference if a difference exists is defined as what

A

The statistical power of a study

49
Q

Describe the 3 main statistical factors that affect sample size

A
1. Difference of interest: the difference between the groups that you would be looking to investigate. A smaller difference of interest needs a larger sample size (e.g. detecting a 5 mmHg rather than a 10 mmHg difference in blood pressure needs a larger sample), much as a small object needs a bigger camera lens than a large one.
2. Power: aim for 80%. As you increase your power, your sample size will increase.
3. Alpha: how much you want to rule out chance causing a positive finding. It is the equivalent of specifying the p-value and is also connected with one- and two-tailed testing. Decreasing alpha from 0.05 to 0.01 will increase your sample size.
Also consider loss to follow-up: as a rule of thumb, it is not unusual to lose 15% of your participants, and longer studies will have higher loss. (A worked sample-size sketch follows below.)
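A minimal sketch of the standard normal-approximation formula for the sample size per arm when comparing two means (the blood-pressure difference and standard deviation below are assumptions for illustration, echoing the 5 mmHg vs 10 mmHg example above):

```python
from math import ceil
from scipy.stats import norm

def n_per_group(diff, sd, alpha=0.05, power=0.80):
    """Approximate participants per arm for a two-sided comparison of two means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the chosen alpha
    z_beta = norm.ppf(power)           # corresponds to 1 - beta, i.e. the power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / diff ** 2)

print(n_per_group(diff=10, sd=15))             # larger difference -> smaller sample (~36 per arm)
print(n_per_group(diff=5, sd=15))              # smaller difference -> larger sample (~142 per arm)
print(n_per_group(diff=5, sd=15, power=0.90))  # more power -> larger sample
print(n_per_group(diff=5, sd=15, alpha=0.01))  # smaller alpha -> larger sample

# Rule-of-thumb inflation for ~15% loss to follow-up
print(ceil(n_per_group(diff=5, sd=15) / 0.85))
```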
50
Q

Two types of error (limitations of statistical significance)

A

Type I and Type II

51
Q

Type I error

A

A false positive finding; assessed through p-values.

52
Q

What is the p value

A

"The probability of obtaining a result at least as extreme as the observed result of a statistical test, assuming the null hypothesis is correct." Informally, the p-value is often treated as the probability of a false positive.

53
Q

Cons of p-value

A

If multiple analyses are run, one is likely to show p < 0.05 by chance even if there is no actual difference (be cautious with p-values between 0.01 and 0.10).
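A small simulation (a sketch, not from the source; the group sizes and number of tests are arbitrary) showing how running many tests produces spurious p < 0.05 results even when the null is true for every comparison:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_tests, alpha = 60, 0.05
false_positives = 0

for _ in range(n_tests):
    # Both "arms" are drawn from the same distribution, so the null hypothesis is true.
    a = rng.normal(0, 1, size=50)
    b = rng.normal(0, 1, size=50)
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

# With 60 tests at alpha = 0.05, roughly 3 false positives are expected by chance alone.
print(false_positives)
```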

54
Q

Considerations when using p-value

A
• Where you have p-values between 0.01 and 0.10, interpret them with caution. Where you have consistent findings over multiple tests at p < 0.05, or a single test at p < 0.01, this gives some reassurance that chance alone is less likely to be playing a part.
• Always consider clinical significance separately. As a soon-to-be doctor, you will be qualified to make a judgement on whether the difference between groups is useful or not.
55
Q

Define type II error

A
A false negative finding. Increasing statistical power from 80% to 90% will halve the risk of a false negative.
56
Q

Why may a low budget trial lead to a type II error

A

Increasing the power increases the required sample size, which increases the cost and complexity of the trial. A low-budget trial may therefore be underpowered, making it more likely to miss a true effect (a false negative).

57
Q

Define Type I error

A

False positives

58
Q

Define Type II error

A

False negatives

59
Q

Which type of error should you be more concerned about in the following scenario? In this open-label, single-centre, randomised placebo-controlled trial, with a power of 95%, the investigator reports that although there was no difference in mortality (the primary endpoint) between groups receiving placebo and active drug, patients in the intervention group recovered faster than those receiving placebo, leaving the intensive care unit on day 15 (as opposed to day 17 among those receiving the placebo, p=0.02). The investigator's conclusion is that the drug may confer a benefit and therefore should be used.

A

Type I error, due to the p-value.

60
Q

Which type of error should you be more concerned about in the following scenario? In a double-blinded multicentre randomised controlled trial, which experienced considerable loss to follow-up, with post hoc power calculated at 65%, no difference was observed between the control group (receiving the standard pharmacotherapy) and the intervention group (standard pharmacotherapy plus the experimental drug). Both survival and adverse event profiles were similar between groups. The investigator proposes that the drug should not be used in the future.

A

Type II error, due to the low power.

61
Q

Which type of error should you be more concerned about in the following scenario? In a three-arm multicentre randomised controlled trial which used single blinding, patients were randomly allocated to standard therapy, standard therapy plus intermediate physical activity, and standard therapy plus intensive physical activity. Over 60 statistical tests were performed, which suggested standard therapy plus intermediate physical activity among women aged 40-49 conferred greater cardiovascular survival benefit than in other sub-groups of the trial (p=0.009).

A

Type I error, due to the p-value (over 60 statistical tests were performed).

62
Q

What is a literature review

A

A literature review is a search and evaluation of the available literature in your given subject or chosen topic area; it allows you to inform and practise evidence-based medicine.

63
Q

2 main types of literature review

A
1. Narrative review
2. Systematic review
64
Q

What is a narrative review

A

Also known as a literature review, scoping review, non-systematic review or critical review.
• Brings published literature together into a single article, which enables the reader to rapidly understand the issues

65
Q

Systematic review

A

Often presented as the underpinning basis for meta-analysis, but it also exists separately. Similar in purpose to a narrative review, but it sets out a highly structured approach to searching, sifting, including and summarising the literature.

66
Q

Pros and cons of narrative review

A

Pros
• Easier and faster to write
• Good when beginning to look at a topic which is totally new (limited research, or when there is a lot of variation in the research)
• Useful when work from different disciplines is being brought together (a broad research question)
Cons
• Bias: authors are free to select works, so the review can be unbalanced (the selected work may support their opinion)
• No search strategy is specified, so evidence could be missed by chance

67
Q

Systematic reviews

A
• One of the strongest forms of evidence
• Highly structured approach to searching, including and summarising the literature
• Basis for meta-analysis (though it exists separately)
Process: research question → structured search → indices and registries → screening (PRISMA flow diagram) → reporting → writing → submission for peer review and publication
68
Q

Pros and cons of systematic review

A

Pros
• Aims to collate all available evidence
• A specific protocol and inclusion criteria enable reproducibility
Cons
• Only as good as the method employed, the indices used and the evidence incorporated
• Lengthy and time-consuming, so very quickly out of date: look at the search date, not the publication date!

69
Q

What are the indices and registries used in a systematic review

A

• Indices are databases of published research, e.g. Medline, Embase, PsycINFO; you need to be explicit about which indices were searched and over what time period
• Registries capture research at an early stage of the process, i.e. studies yet to be completed or published

70
Q

Why are registries important in research

A
• Studies are registered beforehand, to avoid duplication or omission
• This reduces the risk of publication bias (not reporting negative findings and over-reporting positive findings)
71
Q

What is a PRISMA diagram in the screening stage of a systematic review (applying inclusion criteria)

A
A PRISMA flow diagram shows the number of articles ('n') at each stage of screening:
• How many articles were found through the initial search
• How many were removed as duplicates (because the search covers multiple indices)
• Screening of titles and abstracts before full-text review
• Eligibility assessment against the inclusion and exclusion criteria
• How many studies were finally included in the systematic review / meta-analysis
e.g. 2,500 articles initially identified may be filtered down to around 25 included articles.
72
Q

Limitations of systematic review

A

Limitations: a systematic review can miss certain literature.
• Only as good as the method employed: a less-than-comprehensive search structure may not return all the evidence
• Only as good as the indices searched
• Only as good as the evidence that it incorporates
• Very quickly out of date, often exacerbated by the delay in publishing: look at the search date, not the publication date!

73
Q

Define Grey literature or grey information

A

Information that is not in the formal published literature, i.e. material that has not been peer reviewed and published within academic journals searchable via the online indices, e.g. government reports. Not everything that is known is necessarily in the published literature.

74
Q

2 approaches to grey literature

A
• Google Scholar
• OpenGrey
Grey literature is typically not peer reviewed, so be cautious about the strength of the evidence.
75
Q

Define Meta analyses

A

Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess previous research studies to derive conclusions about that body of research. It follows the same principles as a systematic review but with numbers: results are extracted from many studies and combined, giving collectively more precise results.

76
Q

What is pooling (pooled estimate of association) in meta analyses

A

Meta-analysis combines the quantitative findings from separate studies into a pooled estimate
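A minimal sketch of fixed-effect inverse-variance pooling on the log scale (the relative risks and standard errors below are made up purely for illustration):

```python
import numpy as np

# Hypothetical study results: relative risks and the standard errors of log(RR)
log_rr = np.log([0.80, 0.70, 0.95, 0.85])
se = np.array([0.20, 0.35, 0.15, 0.25])

# Fixed-effect weights: inverse of each study's variance (larger studies weigh more)
w = 1 / se**2
pooled_log_rr = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

# Pooled estimate with a 95% confidence interval, back-transformed to the RR scale
ci = np.exp([pooled_log_rr - 1.96 * pooled_se, pooled_log_rr + 1.96 * pooled_se])
print(np.exp(pooled_log_rr), ci)
```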

77
Q

How do we communicate findings in a meta-analysis

A

Forest plot

78
Q

How do we communicate findings in a meta-analysis

A

A forest plot:
• Square = estimate of relative risk from each study (size proportional to the number of people included in the study)
• Horizontal lines (whiskers) = level of confidence/uncertainty around each estimate; with a larger sample size, the whiskers shrink
• Vertical line = line of no effect; the position of a shape away from the vertical line indicates an effect (left = attenuation, right = positive effect)
• Diamond = pooled estimate
• Horizontal axis = logarithmic scale, which means a narrow range of values

79
Q

How are studies weighted in a forest plot

A

The weight of each study is proportional to its size relative to the overall total.

80
Q

Considerations in reviewing meta-analyses

A

Sources of heterogeneity (differences between studies):
1. Clinical: patients / selection criteria
2. Methodological: study design / blinding / intervention approach
3. Statistical: reporting differences
Weighting:
1. Fixed effects
2. Random effects
Publication bias:
• Studies with more positive findings are more likely to be submitted/published
• A publication funnel plot can be used to assess the balance/likelihood of publication bias arising in the included sample
(A sketch of how heterogeneity is quantified follows below.)
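Statistical heterogeneity is often quantified with Cochran's Q and the I² statistic; a sketch, reusing the hypothetical study values from the pooling example above:

```python
import numpy as np

log_rr = np.log([0.80, 0.70, 0.95, 0.85])  # hypothetical log relative risks
se = np.array([0.20, 0.35, 0.15, 0.25])    # hypothetical standard errors
w = 1 / se**2

pooled = np.sum(w * log_rr) / np.sum(w)

# Cochran's Q: weighted squared deviations of each study from the pooled estimate
Q = np.sum(w * (log_rr - pooled) ** 2)
df = len(log_rr) - 1

# I^2: proportion of total variability attributable to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100
print(round(Q, 2), f"{I2:.0f}%")
```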

81
Q

Limitations of meta analyses

A

• As with a systematic review, it depends on the method and the quality of the studies included
• High levels of heterogeneity between the included studies can give rise to problems
• Sometimes additional data have to be requested from the original authors
NOTE: it is important to perform a meta-analysis alongside a systematic review, to make sure good studies are included.

82
Q

Define trial endpoint

A
An outcome that usually describes a clinically meaningful result, e.g. a common clinical endpoint in cancer trials is survival, or possibly survival at 12 months or at five years. This would be a measure of efficacy.
83
Q

Define efficacy in clinical trial outcomes

A

Efficacy – how well a therapy works in achieving a desired outcome.

84
Q

Define safety in clinical trial outcomes

A

Safety – how well a therapy works in not causing adverse events.

85
Q

2 types of efficacy endpoint

A

primary and secondary endpoint

86
Q

Define primary endpoint

A

Primary endpoint – this is the endpoint for which the study has been powered; that is to say that the number of trial participants (sample size) will have been recruited on the basis of the pre-specified power and difference.

87
Q

Define secondary endpoint

A

Secondary endpoint: an additional, slightly different endpoint examined alongside the primary endpoint, e.g. while a study seeks to examine survival (i.e. alive or dead), another, often 'softer', measure such as recurrence of disease or hospital admission might also be measured. If the secondary endpoint is proven but the primary endpoint is not, the findings of the study may still contribute to the understanding of the disease.

88
Q

Define safety endpoints

A

Safety: how well a therapy works in not causing adverse effects.
• You have to judge whether the safety profile (or lack thereof) of a therapy is offset by its efficacy
• e.g. anaphylaxis or direct mortality associated with the therapy; such major issues should usually be detected early in the trial process (before it is rolled out to large numbers of patients)
• More commonly the safety endpoints will be more nuanced: potentially measuring commonly observed adverse events (AEs) and grading them into a hierarchy of significance; a large proportion of patients reporting AEs will require investigation

89
Q

Composite endpoint

A
An endpoint in which multiple potential endpoints are added together; particularly common when an individual outcome is uncommon, e.g. myocardial infarction and ischaemic stroke might be combined to give a new composite endpoint of 'cardiovascular event'.
90
Q

Define survival analysis

A

Survival analysis:
• An alternative to using arbitrary timepoints to see whether trial participants are alive (reported as a hazard ratio); fixing arbitrary timepoints loses statistical precision
• Instead, whenever a patient dies, their death is recorded at that time and a survival time is calculated
• You end up with a range of survival times

91
Q

What is a kaplan meier plot

A

Kaplan Meier plot – displays survival analysis
• Shows proportion of participants alive at any particular time
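A minimal Kaplan-Meier sketch, assuming the third-party lifelines package is available (the survival times and event indicators below are made up for illustration):

```python
from lifelines import KaplanMeierFitter

# Hypothetical survival times in months; event = 1 means death observed, 0 means censored
durations = [5, 8, 12, 12, 15, 20, 24, 24, 30, 36]
events    = [1, 1, 0,  1,  1,  0,  1,  0,  0,  1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)

# Proportion of participants still alive at each observed time point
print(kmf.survival_function_)
kmf.plot_survival_function()  # draws the Kaplan-Meier step curve
```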

92
Q

Key points

A

a. Randomised interventional (or experimental) trials are necessary for us to control for bias that is unavoidable in observational studies; however, interventional study designs commonly involve greater risk and ethical challenges.

b. The process of randomisation involves two steps: sequence generation then implementation (which necessitates allocation concealment). Subsequently, and where possible, investigators, clinicians and trial participants should be blinded from knowing which arm of the study participants have been allocated to.

c. Sample size, power, difference of interest, alpha and endpoints need to be considered when planning, implementing and reporting an interventional study.

d. Systematic and non-systematic reviews are commonly used approaches to collating related evidence, and can be used for observational or interventional studies. They are only as good as the studies that are considered.

e. Meta-analysis is a quantitative epidemiological study design (that can be built upon a systematic review) that pools statistical findings from similar studies and can report a pooled estimate of association / effect. However, it is only as good as the studies included and the implementation of the meta-analysis itself.