SUMMARISING THE EVIDENCE - LEARNING OUTCOMES Flashcards

1
Q

What is meant by the ‘cause’ of disease?

A

A cause is an exposure that produces the disease.

2
Q

What studies aim to determine the cause of disease?

A

Epidemiological studies compare diseased with healthy people with respect to the exposure to attempt to determine the cause of disease. To detect an effect we need heterogeneity of exposure in the population - variation in our exposure variables.

3
Q

Why might identifying the cause of disease be difficult?

A
  • Few diseases arise from a single ‘cause’
  • Some exposures ‘cause’ multiple diseases
  • Misclassification of disease and exposure status in epidemiological studies
  • Latency (the time lag between exposure and disease) makes causal relationships hard to pull apart

Hence epidemiological studies tend to identify risk factors rather than causes - especially when talking about complex human diseases.

4
Q

Define what is meant by the term ‘risk factor’.

A

A risk factor is an exposure that increases risk of disease. Individuals exposed to risk factors have a higher risk of disease.

5
Q

Why is a risk factor not necessarily a cause of disease?

A

Link may actually be due to:

  1. Chance - with a significance threshold of p = 0.05, 5% of tests of true null hypotheses will give false positive results
  2. Indirect causation - risk factor A may cause B, and B is what actually causes the disease; A is associated with the disease but is not the direct causal factor
  3. Reverse causation - the disease may actually be causing the association with the risk factor - for example cancer causing weight loss
  4. Confounding - a confounder C may be independently related to both risk factor A and the disease
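
A minimal sketch of confounding, using invented numbers: a confounder C drives both the ‘risk factor’ A and the disease D, so A and D appear associated even though A has no direct effect on D.

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

n = 100_000
pairs = []  # (exposed, diseased) for each simulated person
for _ in range(n):
    c = random.random() < 0.3                   # confounder present in 30%
    a = random.random() < (0.7 if c else 0.1)   # C makes exposure A likely
    d = random.random() < (0.4 if c else 0.05)  # C alone drives disease D
    pairs.append((a, d))

# Crude risk ratio: disease risk in exposed vs unexposed
exposed = [d for a, d in pairs if a]
unexposed = [d for a, d in pairs if not a]
risk_ratio = (sum(exposed) / len(exposed)) / (sum(unexposed) / len(unexposed))
print(f"crude risk ratio = {risk_ratio:.2f}")   # well above 1, purely from C
```

Stratifying by C (or adjusting for it) would remove this spurious association, which is why confounders must be measured and controlled for.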
6
Q

How do we establish cause when there are so many other factors that could explain the associations between exposure and disease?

A

Bradford-Hill developed 9 criteria in 1965 to help us assess causation. Beware though, even the most consistent evidence doesn’t exclude indirect causation, reverse causation or confounding. We need to think of ways we can minimise these in the study design.

7
Q

Describe the 1st Bradford-Hill criterion: strength of association.

A

Strong associations are more likely to be causal than weak ones, because weak associations are more easily explained by error or bias.

8
Q

What is a strong association?

A

Traditionally, an odds ratio or a risk ratio of 2 or more. This is not a magic number, though - don't discount small effects, as well-designed and well-implemented studies can detect small effect sizes.
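
As an illustration (all counts invented), the traditional ‘strong association’ cutoff can be checked by computing the odds ratio and risk ratio from a 2x2 table:

```python
# Hypothetical 2x2 table:
#              diseased   healthy
# exposed          60        940
# unexposed        20        980
a, b = 60, 940   # exposed: cases, non-cases
c, d = 20, 980   # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)                    # cross-product ratio
risk_ratio = (a / (a + b)) / (c / (c + d))        # risk in exposed / unexposed
print(f"OR = {odds_ratio:.2f}, RR = {risk_ratio:.2f}")  # OR ≈ 3.13, RR = 3.00
```

Both exceed 2, so this invented association would count as ‘strong’ by the traditional cutoff.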

9
Q

Describe the 2nd Bradford-Hill criterion: consistency.

A

Multiple studies with different designs consistently show the same effect. Consistency and replication are vital before a relationship can be trusted.

10
Q

Describe the 3rd Bradford-Hill criterion: is the effect specific?

A

A particular exposure should lead to a single disease. Therapies that claim to cure everything probably cure nothing. Specificity can be exploited in the design of research studies - we can include questions about factors specifically related to our outcome and factors that are not specifically related to our outcome. This can act as an internal control in our study.

Specificity is not, however, a perfect indicator of causality. For example, smoking causes more diseases than just lung cancer, and aspirin is effective for a wide range of conditions. Don't discount non-specific effects, but be cautious.

11
Q

Describe the 4th Bradford-Hill criterion: Temporality.

A

The cause must precede the onset of disease.

Early, undetected stages of a disease may themselves have produced the apparent exposure - e.g. early-stage cancer causing loss of appetite and weight loss.

For example, smoking must have started prior to the development of cancer.

12
Q

Describe the 5th Bradford-Hill criterion: Biological gradient.

A

Is there a biological gradient? Is there a dose-response relationship between the exposure and outcome?

If there really is a causal effect between exposure and outcome then the outcome will increase with the level of exposure. Just because we see a dose-response relationship doesn’t mean that there isn’t another confounding factor in there that we haven’t taken into account.

Not all treatments may show a dose-response pattern. For drugs in particular there may be a threshold level for effectiveness.

13
Q

Describe the 6th Bradford-Hill criterion: Plausibility.

A

Is the association biologically plausible?

May or may not be helpful as the biological pathways for many exposures may not be known at the time.

14
Q

Describe the 7th Bradford-Hill criterion: Coherence.

A

Does the association fit with what is already known? Very similar to plausibility.

15
Q

Describe the 8th Bradford-Hill criterion: Experimental evidence.

A

Does intervention in exposure influence the disease outcome? We need an intervention study, such as an RCT, to see this - but deliberately increasing exposure to a suspected cause of disease is unethical. Hence we intervene to reduce exposure instead - does this reduce disease?

Remember, experimental evidence and p-values aren't necessarily the same thing. You don't need a significant p-value to provide experimental evidence - a non-significant p-value may simply reflect a study that wasn't well enough designed to detect the effect.

16
Q

Describe the 9th Bradford-Hill criterion: Analogy.

A

Quite similar to plausibility and coherence. Does a similar cause-effect relation exist already?

Not especially helpful - can be quite difficult to find existing examples.

17
Q

Summarise causation.

A

It is difficult to assess causation in epidemiological studies. What we actually do is identify risk factors. We can use the Bradford-Hill criteria to assess if causation is likely.

What we would need is a prospective study to ‘prove’ causation - really what we are talking about is replicating the scientific results enough times, and in enough ways, to establish causation.

18
Q

What is a systematic review?

A

A systematic review is an overview of all primary studies that contains explicit statements of objectives, materials, and methods, and that has been conducted according to an explicit and reproducible methodology.

A systematic review is essentially a big literature review.

19
Q

What is a meta-analysis?

A

A meta-analysis is a study involving the quantitative pooling of data from two or more studies in order to generate a new summary statistic. Meta-analyses are subsets of systematic reviews.

20
Q

Describe the steps taken when conducting a systematic review.

A
  1. Frame question
  2. Identify relevant literature
  3. Assess quality of literature
  4. Summarise the evidence
  5. Interpret the findings
21
Q

What is involved in step 2 of the systematic review process; identifying relevant literature?

A

You need to perform a comprehensive search strategy to identify literature:

  • Publication databases (e.g. MEDLINE)
  • Internet
  • Trial Registers
  • References from published studies and reviews
  • Unpublished literature (contact authors)
  • Conference proceedings (abstracts of studies)

Ideally, no language restrictions should be imposed.

22
Q

What is involved in step 3 of the systematic review process; assessing quality of literature?

A

Assessing the quality of the literature usually covers three areas:

  1. Selection of the groups
  2. Comparability of the groups
  3. Ascertainment of exposure for case-control studies and outcome for cohort studies

When assessing quality, remember that if we include poor studies, we will get poor results out.

There are formal scales for assessing quality. For example, the Jadad scale gives a score from 0 to 5 depending on certain criteria. We could use a formal scale like this to set a cut-off for inclusion in a meta-analysis or systematic review. A systematic review should be like any other piece of research in that you should be clear and specific about the methodology used and the inclusion / exclusion criteria.

23
Q

Describe the Jadad scale that can be used for assessing the quality of randomised control trials for a systematic review.

A
  1. Did they use randomisation? Was the method described and appropriate?
  2. Was the study double-blind? Was the method described and appropriate?
  3. Did they analyse everyone recruited? Description of withdrawals / dropouts.
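
The three questions above can be sketched as a simple scoring function. This is a simplified, hypothetical version (the published Jadad instrument also deducts points for inappropriate methods, which is omitted here):

```python
def jadad_score(randomised: bool, rand_method_ok: bool,
                double_blind: bool, blind_method_ok: bool,
                withdrawals_described: bool) -> int:
    """Simplified Jadad-style score, 0-5 (hypothetical sketch)."""
    score = 0
    if randomised:
        score += 1
        if rand_method_ok:        # method described and appropriate
            score += 1
    if double_blind:
        score += 1
        if blind_method_ok:       # method described and appropriate
            score += 1
    if withdrawals_described:     # withdrawals / dropouts accounted for
        score += 1
    return score

print(jadad_score(True, True, True, False, True))  # prints 4
```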
24
Q

What is involved in step 4 of the systematic review; summarising the evidence?

A
  1. Qualitative summary
  2. Quantitative summary / meta-analysis - statistical method for combining the results from the identified studies which gives a pooled measure of risk
25
Q

What are the stages involved in a meta-analysis?

A
  1. Calculate a measure of effect/risk and confidence interval for each study
  2. Display the results graphically - Forest Plot
  3. Assess heterogeneity
  4. Pool the data
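
Steps 1 and 4 can be sketched with inverse-variance (fixed-effect) pooling on the log risk ratio scale. All study estimates and standard errors below are invented for illustration:

```python
import math

# Invented per-study risk ratios and standard errors (log scale)
log_rr = [math.log(1.8), math.log(2.2), math.log(1.5)]
se = [0.20, 0.25, 0.30]

weights = [1 / s ** 2 for s in se]                 # inverse-variance weights
pooled_log = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(f"pooled RR = {pooled_rr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

Larger (more precise) studies get bigger weights, which is also why they are drawn with bigger boxes on a forest plot.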
26
Q

What does the central square box in a forest plot represent? What do the horizontal lines in a forest plot represent?

A

The central square marks the study's point estimate (e.g. its risk ratio) on the x-axis; the larger the study, and hence its weight, the larger the square.

The horizontal lines extending out either side of the square represent the 95% confidence interval for that study.

A diamond on a forest plot shows the pooled summary statistic that has been calculated.

27
Q

What is heterogeneity as assessed in step 3 of meta-analysis? How do we examine heterogeneity?

A

Heterogeneity is the variation between the results of the included studies. To look for heterogeneity we first examine the forest plot - do the confidence intervals of the individual studies overlap with the pooled confidence interval? We can also use a more formal measure, I^2, to quantify the variation we are seeing between studies. It is important to consider whether the study results have been adjusted for possible confounding factors.
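
As a sketch of the formal approach (invented numbers), I^2 can be computed from Cochran's Q, the weighted sum of squared deviations of each study from the fixed-effect pooled estimate:

```python
import math

# Invented log risk ratios and standard errors for three studies
y = [math.log(1.8), math.log(2.2), math.log(1.5)]
se = [0.20, 0.25, 0.30]
w = [1 / s ** 2 for s in se]                       # inverse-variance weights

pooled = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"Q = {Q:.2f}, I^2 = {I2:.0f}%")
```

Here Q is below its degrees of freedom, so I^2 is 0%: these invented studies vary no more than chance alone would predict.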

28
Q

If we are using I^2 how much heterogeneity can we tolerate in a meta analysis?

A
I^2 = 0% means none of the variation is beyond what chance alone would produce
I^2 = 50% means that 50% of the variation in the meta-analysis is due to heterogeneity between studies rather than chance
I^2 = 100% means that essentially all of the variation is due to heterogeneity - the studies are all telling us completely different things.

Conventionally it is agreed that we shouldn't do a meta-analysis if I^2 = 85-100%

29
Q

What should we do if there is some heterogeneity in our systematic review data?

A

If we have an I^2 of 0%-40% we can use a ‘fixed effect’ method of meta-analysis.

If the I^2 is 40%-85% we can use a ‘random effect’ method of meta-analysis.

It is important to get the method of analysis correct, as the two methods may give different results - one method may show a highly significant change in risk where the other shows no significant change.
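
A common random-effects approach is the DerSimonian-Laird method, which estimates the between-study variance tau^2 from Q and adds it to each study's variance before re-weighting. A minimal sketch on invented, deliberately heterogeneous data:

```python
import math

# Invented, deliberately heterogeneous study results (log risk ratios)
y = [math.log(1.2), math.log(3.0), math.log(0.8)]
se = [0.20, 0.25, 0.30]
w = [1 / s ** 2 for s in se]                       # fixed-effect weights

pooled_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, y))
df = len(y) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)                      # between-study variance

# Random-effects weights spread more evenly across studies
w_re = [1 / (s ** 2 + tau2) for s in se]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
print(f"tau^2 = {tau2:.3f}, random-effects RR = {math.exp(pooled_re):.2f}")
```

Because tau^2 > 0 here, small studies gain relative weight, and the random-effects confidence interval would be wider than the fixed-effect one.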

30
Q

For what values of I^2 should we use a ‘fixed effect’ method of meta analysis?

A

If we have an I^2 of 0%-40% we can use a ‘fixed effect’ method of meta-analysis.

31
Q

For what values of I^2 should we use a ‘random effect’ method of meta analysis?

A

If the I^2 is 40%-85% we can use a ‘random effect’ method of meta-analysis.

32
Q

What is publication bias?

A

‘Positive’ studies are more likely to be published. Always try to assess publication bias in your systematic review. Publication bias can be related to time or study size. If there is no publication bias at all, we would expect wide random scatter among the small studies and less scatter among the larger studies, giving a symmetrical funnel shape - this can be shown on a funnel plot.

33
Q

Summarise systematic reviews and meta analyses.

A
  • Powerful tools for summarising evidence
  • Beware of potential limitations - what studies are included - potential for publication bias
  • There are some occasions where you can’t combine studies because their methodology or questions are too different. Are you combining apples and oranges?