Summarising the Evidence Flashcards

1
Q

What is a cause of disease?

A

The causation of many diseases comprises a combination of genetic susceptibility and environmental exposure which leads, over a period of time, to the emergence of clinical disease.

A cause of disease is an exposure (which may be genetic or environmental) that produces disease. Epidemiological studies identify such causes by comparing diseased individuals with healthy individuals. One obvious prerequisite for this approach to succeed is that there is heterogeneity in exposure in the population studied; if everyone is exposed then the effects of that exposure cannot be assessed. If everyone in the UK smoked 20 cigarettes a day, then all cases and all controls would be smokers of 20 cigarettes per day and there would be no obvious association between cigarette smoking and lung cancer.

2
Q

If the association between causes and disease were really a simple one-to-one, all-or-nothing relationship, what would epidemiological studies be expected to find?

A

If the association between causes and disease were really a simple one-to-one, all-or-nothing relationship, then epidemiological studies would be expected, for example, to demonstrate that smoking causes lung cancer by showing that all exposed individuals have disease, and all unexposed individuals do not. In reality, of course, this is never the case, for reasons that include the following:

  • Few if any diseases arise from a single cause, and some exposures cause more than one disease. Aside from single gene defects, the relationship between exposure and disease is usually far from absolute.
  • Epidemiological studies will inevitably misclassify both disease and exposure status, resulting in a weakening of the apparent relation between them.
  • It takes time (latency) for exposures to lead to disease, so some exposed individuals who will in time develop disease will appear to be healthy when a study is carried out.
3
Q

What do epidemiological studies usually do?

A

What epidemiological studies usually do is identify risk factors, or exposures that increase the risk of disease. There may be many risk factors for a given disease, e.g. for asthma the list of identified risk factors includes allergy, birth order, maternal allergy, dust mite exposure, vaccinations, antibiotic use, maternal age, smoking, obesity, dietary antioxidant intake etc. Not all individuals are necessarily exposed to any or all of these factors, but in general, those who are exposed carry a higher risk of disease than those who are not.

4
Q

For what reasons might risk factors for disease not necessarily cause the disease?

A

Risk factors for disease are not necessarily causes of disease because of the following:

  1. Chance: An apparent association between an exposure and a disease can arise by chance (a false positive association). Conventional statistical methods are designed to limit the likelihood of this to the 5% level, but false positive associations will still arise, particularly in studies in which multiple comparisons or hypothesis tests have been carried out. This is easy to detect, and a judgment about its potential relevance can be made, when authors declare all the analyses they have carried out in their published report. It is much harder when multiple testing has gone on covertly in the process of producing a paper but is not acknowledged in the published report.
  2. Indirect causation: The risk factor identified may act as a correlate of a true cause, there being in fact no independent relation between the risk factor and the disease. For example, carrying a box of matches is probably a risk factor for lung cancer, but only because most people who carry matches are cigarette smokers. The effects of poverty or deprivation on disease are also an example of indirect causation – having no money is not a health hazard in itself, but the consequences of having no money, such as homelessness or starvation, clearly are.
  3. Reverse causation: An exposure may be associated with disease because the disease causes the exposure, rather than vice versa. For example, the association between asthma mortality and use of the asthma drug fenoterol reported in New Zealand could have arisen because asthmatic patients at high risk of death were more likely to be prescribed fenoterol, rather than because of any harmful effect of the drug in patients with asthma.
  4. Confounding: The association between risk factor and disease may arise or be otherwise distorted because of a relation between the risk factor and a confounding variable, which is itself related to disease occurrence. We have looked at confounding in section 5 – you will recall that confounding can create, accentuate, reduce or reverse a relationship between an exposure and disease.

It is often difficult to distinguish these various influences in observational studies, and it is often the case that even after the most critical evaluation of the available data, it is not possible to tell whether an identified risk factor is actually a cause of disease. If the results of independent studies all come to a similar answer, that consistency supports the idea that a causal association exists (see the Bradford Hill criteria later). However, what often happens in practice is that different studies produce broadly similar but individually different results, not all of which are statistically significant - particularly for relationships that are relatively weak. In these circumstances it is appropriate to combine the results of different studies and sum their statistical power (meta-analysis), drawing evidence from multiple sources into a single combined summary statistic.

However, even the most consistent evidence of association between an exposure and a disease does not exclude indirect causation, reverse causation, confounding or other bias. To resolve this, it is necessary to carry out intervention studies to determine whether changing exposure changes the risk of disease. Since it is unethical to deliberately expose people to something that is thought to be harmful (for example, a randomised trial to see if smoking causes lung cancer), this usually involves reducing or abating the exposure, to see whether this reduces disease risk. To decide whether a particular exposure causes a disease, some inference of likely causation is therefore needed from the available evidence.

5
Q

In 1965 Bradford-Hill outlined nine basic criteria for determining causation that are still widely used - list these criteria.

A
  1. Strength: Strong associations are more likely to represent true cause than weak ones. It is also probably true that weak associations are more likely to have arisen from error or bias than strong ones.
  2. Consistency: A risk factor for a disease that is consistently identifiable as such in different populations is more likely to be a true cause than one that is not. Although the presence of other necessary component causes is a prerequisite of consistency, it is nevertheless true that in terms of inference, one repetition of a finding in a different study is probably worth a thousand subgroup reassessments from a single study.
  3. Specificity: This criterion requires that a particular exposure should lead to a single disease, not to multiple diseases. It would be interesting to know what Bradford Hill would have made of the evidence now available on the health effects of smoking.
  4. Temporality: The cause must precede the onset of disease.
  5. Biological gradient: An exposure-response relationship makes it more likely that a particular exposure causes disease. In logical terms this seems likely, but there are many reasons why an exposure-response relationship may be difficult to demonstrate.
  6. Plausibility: The cause-effect relation should be plausible on biological grounds. This was not the case for many years for smoking and lung cancer. Plausibility helps, therefore, but is not necessary and may even be misleading.
  7. Coherence: The observed association should fit in with what is already known about the disease. This is a very similar criterion to plausibility.
  8. Experimental evidence: Demonstration that intervention in the exposure influences disease outcome is clearly strong evidence of causation. This requires an intervention study, ideally a randomised controlled trial, but such a trial is unethical to conduct if the exposure is a suspected cause of disease; hence the intervention is usually to reduce the exposure and see whether this reduces disease.
  9. Analogy: If similar cause effect relationships exist, a new example is perhaps more credible. Finding analogies is perhaps more of a challenge than a help to those involved in trying to identify causes.

These criteria are now perhaps subject to rather more qualification, by both experience and principle, than when they were first proposed, but they are still widely quoted and used (particularly in medico-legal work) and provide some support and insight into the likely validity of associations.

6
Q

What do we mean by a ‘cause’ of a disease?

A

A cause of disease is an exposure (which may be genetic or environmental) that produces disease.

7
Q

How do epidemiological studies try to identify causes of disease?

A

Epidemiological studies try to identify causal factors by comparing diseased with healthy people.

8
Q

In order for an epidemiological study to be able to identify the cause of disease, what key factor is required in the population?

A

In order for an epidemiological study to have any chance of determining the cause of disease, we need heterogeneity of exposure in the population studied.

9
Q

True or False - few diseases arise from a single cause.

A

True.

10
Q

True or False - some exposures ‘cause’ multiple diseases.

A

True.

11
Q

What is meant by the term ‘risk factor’?

A

Risk factor = exposure that increases the risk of disease.

Individuals exposed have a higher risk of disease.

12
Q

For what reasons may a risk factor not necessarily be a cause of disease?

A
  1. Chance
  2. Indirect Causation
  3. Reverse Causation
  4. Confounding
13
Q

Describe how chance may mean that a risk factor is not necessarily the cause of disease.

A

With a significance threshold of P = 0.05, around 5% of tests of truly null associations will produce false positive results. This becomes a particular problem with multiple testing or 'data dredging'. It is fine if the authors are honest about all the comparisons that have been done, but this is not always obvious in published papers.
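To make the arithmetic concrete, here is a minimal Python sketch (not from the source) showing how quickly the chance of at least one false positive grows when several independent tests are each run at the 5% level:

```python
# Sketch: probability of at least one false positive across n independent
# hypothesis tests, each at alpha = 0.05, when every null is actually true.
alpha = 0.05

for n_tests in [1, 5, 10, 20, 50]:
    # P(at least one false positive) = 1 - P(no false positives)
    p_any = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:3d} tests -> P(at least one false positive) = {p_any:.2f}")
```

With 20 unacknowledged comparisons, the chance of at least one spurious 'significant' finding is already about 64%.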

14
Q

Describe how indirect causation may mean that a risk factor is not necessarily the cause of disease.

A

For example:

Risk factor A may be related to risk factor B, and it is actually risk factor B that is causing disease.

e.g. poverty may appear to be linked to malnutrition, but having no money is not harmful in itself; the actual causative factor is starvation, which is a consequence of poverty.

15
Q

Describe how confounding factors may mean that a risk factor is not necessarily the cause of disease.

A

Confounding factors are independently associated with both the exposure you are looking at and the disease.

e.g. we may see an association between coffee drinking and lung cancer, but this is actually because coffee drinking is independently related to smoking.

You can test for confounding by using stratified analysis - e.g. looking at the association between coffee drinking and lung cancer in smokers and non-smokers separately and seeing whether that relationship is maintained, as in the sketch below.
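A minimal sketch of such a stratified analysis, using invented counts purely for illustration (none of these numbers come from the source): the crude odds ratio suggests an association, but within each smoking stratum the odds ratio is 1, pointing to confounding by smoking.

```python
# Stratified analysis sketch with made-up counts.
# Each 2x2 table: (exposed cases, exposed controls,
#                  unexposed cases, unexposed controls)

def odds_ratio(a, b, c, d):
    # OR = (a * d) / (b * c)
    return (a * d) / (b * c)

strata = {
    "smokers":     (80, 40, 20, 10),   # coffee drinking is common here
    "non-smokers": (10, 90, 10, 90),
}

# Crude table: collapse the strata together
crude = tuple(sum(s[i] for s in strata.values()) for i in range(4))
print(f"Crude OR: {odds_ratio(*crude):.2f}")        # ~2.31: spurious link

for name, (a, b, c, d) in strata.items():
    print(f"OR among {name}: {odds_ratio(a, b, c, d):.2f}")   # 1.00 each
```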

16
Q

Describe how reverse causation may mean that a risk factor is not necessarily the cause of disease.

A

The disease may cause risk factor A rather than the risk factor causing the disease.

17
Q

How do we determine cause from epidemiological studies when there are so many factors to take into consideration?

A
  • Bradford-Hill - developed 9 criteria in 1965

- Beware though - even the most consistent evidence doesn’t exclude indirect causation, reverse causation or confounding

18
Q

What is usually accepted as a strong association in a study?

A

An odds ratio or a risk ratio of 2 or more. However, this is not a magic number and should not be trusted blindly. It is also important not to discount small effects: well designed and implemented studies can detect small effect sizes.
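For illustration only (counts invented, not from the source), a short Python sketch computing an odds ratio and its approximate 95% confidence interval from a 2x2 table, using the standard error of the log odds ratio:

```python
import math

# Hypothetical 2x2 table
a, b = 60, 40   # cases:    exposed, unexposed
c, d = 30, 70   # controls: exposed, unexposed

or_ = (a * d) / (b * c)
# Woolf's approximate standard error of log(OR)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)

print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")   # OR = 3.50
```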

19
Q

Is specificity a perfect indicator of causality?

A

No. For example, smoking causes more than just lung cancer, and aspirin is effective for a wide range of diseases. Don't discount non-specific effects, but be cautious.

20
Q

Summarise causation.

A

It is difficult to assess causation in epidemiological studies - we can identify risk factors and, to a certain extent, use the Bradford-Hill criteria to assess whether causation is likely. Really what we need is an intervention study to 'prove' causation.

21
Q

What is the purpose of systematic reviews?

A

One of the difficulties of determining cause and effect, and indeed other associations in epidemiology and clinical medicine, is that of synthesising evidence from diverse sources and from studies of different design and sample size.

For example, during the 1980s reports began to appear in the literature suggesting that passive exposure to cigarette smoke may increase the risk of lung cancer in non-smokers. Most of the early studies were carried out in small samples of people, and although they found increased odds ratios, few were statistically significant.

Some of the early studies carried out in European populations seemed to indicate that there was some real increase in the risk of lung cancer amongst non-smokers married to a smoker, but also that the estimated magnitude of the effect varied greatly, and that almost all of the studies were not statistically significant. How can information from such diverse sources be combined?

The aim of systematic reviews and meta-analyses is to give a complete and balanced overview of the available evidence concerning one particular question. The question may relate to causation but could also relate to the efficacy of a treatment in different clinical trials, or to many other topics. In fact, the statistical methods used in meta-analysis are relatively simple; the hard part of the process is making sure that all of the relevant data have been collected. This is the function of a systematic review.

22
Q

What is a systematic review?

A

The definition of a systematic review given by Cook et al is “the application of scientific strategies that limit bias to the systematic assembly, critical appraisal, and synthesis of all relevant studies on a specific topic.” This paper also presents guidelines for performing systematic reviews and meta-analysis.

The underlying principle is that rigorous methodology is used to ensure that the question being asked and the studies to be reviewed are defined carefully, and that a thorough review of all the literature is carried out. The Cochrane Collaboration is a body that aims to produce systematic reviews of a number of clinical questions and to update these reviews regularly. The Cochrane Library already contains hundreds of systematic reviews.

23
Q

Describe the steps that must be carried out in a systematic review.

A

How each step of the systematic review is to be carried out needs to be documented in a protocol. These steps are:

  1. Framing the question - for example, does exposure to cigarette smoke increase the risk of lung cancer in non-smokers?
  2. Identifying the relevant literature - unlike the literature review where we just look at a selection of papers, for a systematic review we have to look at all of the available evidence that has tried to answer the specific question. This requires a very detailed search strategy and it is very time consuming. It usually generates thousands of pieces of literature which then have to be sorted to see if they are relevant to our question of whether cigarette smoke exposure increases the risk of lung cancer in non-smokers.
  3. Assessing the methodological quality of the included studies - This is an important aspect of any systematic review. Some studies are poorly conducted and/or reported, and therefore it is questionable as to whether these studies should inform practice. The method of assessment you can use to judge the quality of an individual study depends on the type of study you are assessing. What you are trying to assess is whether the results are credible or valid. You can do this by identifying the strengths and weaknesses of the study, and determining whether the study is at a high risk of bias.
  4. Summarising the evidence and interpreting findings - Once we have identified and obtained all of the relevant literature on our research question, we can then summarise all of the information. This can be done either qualitatively using a descriptive format, for example, ‘when looking at the included studies, it does not appear that exposure to cigarette smoke consistently increases the risk of lung cancer in non-smokers’; or quantitatively, using a statistical method called meta-analysis.
24
Q

What is meta-analysis?

A

A meta-analysis can be used to calculate an ‘average’ measure of an effect by assembling the quantitative results from several studies together. Each study in the meta-analysis has a measure of effect, such as an odds ratio, which tells the researcher whether there was, on average, an increase or decrease in the risk of getting lung cancer after being exposed to cigarette smoke. The main aim of a meta-analysis is to improve the precision of these measures of effect by statistically combining the data from multiple studies and creating a new, single measure of effect, called the pooled result or summary statistic. Meta-analyses cannot be carried out using SPSS. Instead, STATA carries out fixed and random effects analyses, tests for heterogeneity and produces graphs of the individual and pooled estimates.

A meta-analysis is a formal way of synthesising the results of multiple studies and uses statistical techniques to pool these results. One definition of a meta-analysis is "a systematic review that employs statistical methods to combine and summarise the results of several studies".

25
Q

Give one definition of a meta-analysis.

A

"A systematic review that employs statistical methods to combine and summarise the results of several studies."

26
Q

How would one go about carrying out a meta-analysis?

A

Having decided which studies to include and assessed publication bias, the following steps are carried out:

  1. Draw a forest plot of the summary results (odds ratios, rate ratios, difference in means etc) for each study together with their confidence intervals.
  2. Assess between-study heterogeneity. Once we have plotted the data visually using a forest plot, we need to determine whether the results from each of the studies are similar. The degree to which the studies vary is called heterogeneity. We can only perform a meta-analysis when heterogeneity is not a problem. The simplest way to assess the degree of heterogeneity is to look at the forest plot and see whether the results from each study are similar. Heterogeneity is not usually a problem if the confidence intervals overlap with each other.

We can also use a statistical measure to assess the level of heterogeneity between the results of the studies, called I². The value of I² can range from 0% to 100%. A value of 0% would indicate that there is no heterogeneity; a value of 50% would indicate that 50% of the total variation in the meta-analysis is due to heterogeneity; and a value of 100% would indicate that all of the variation in the meta-analysis is due to heterogeneity. As a basic rule of thumb, one should not conduct a meta-analysis if the I² value is between 85% and 100%, as this means the studies are too different to be comparable.
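As a hedged sketch (study values invented), I² can be computed from Cochran's Q, the weighted sum of squared deviations of each study's result from the pooled estimate:

```python
import math

# Invented per-study log odds ratios and their standard errors
log_ors = [0.40, 0.55, 0.10, 0.70]
ses     = [0.20, 0.25, 0.15, 0.30]

weights = [1 / se**2 for se in ses]                  # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# Cochran's Q and its degrees of freedom
Q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(log_ors) - 1

# I^2: percentage of total variation due to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%")
```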

  3. Pool the data in the meta-analysis - We are at the point now where we have identified all the studies, extracted all the data, plotted the results, and assessed the heterogeneity. Now we come to pooling the studies together to get the new single measure of effect (summary statistic). There are two main methods used to calculate the summary statistic; for the purposes of explanation, let's assume the measure of effect is an odds ratio.

The first method is called the fixed effect method. This method calculates a weighted average of the odds ratios from all of the different studies, the weight being proportional to the size of the study. Therefore the larger the study, the more influence it has on the pooled odds ratio. The calculations are actually carried out by computing a weighted average of the log odds ratios, using the inverse of the variance of each log odds ratio as the weight. The important point is that the fixed effect method assumes that all of the available studies were trying to estimate a true value that is the same for all of them - i.e. one that does not vary according to where, when or in whom the study is done. The fixed effect model is probably appropriate when the estimates vary visually but their confidence intervals all more or less overlap, so it is likely that they are all estimating the same thing. This may be assessed using the I² statistic; a fixed effect model is appropriate when I² is between 0% and roughly 30-40%.
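A minimal sketch of this inverse-variance calculation, with invented odds ratios and standard errors (none of these numbers come from the source):

```python
import math

odds_ratios = [1.4, 1.8, 1.2, 1.6]       # invented study results
se_log_or   = [0.20, 0.25, 0.15, 0.30]   # SE of each log odds ratio

log_ors = [math.log(o) for o in odds_ratios]
weights = [1 / se**2 for se in se_log_or]   # weight = 1 / variance(log OR)

# Weighted average of the log odds ratios
pooled_log = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled OR = {math.exp(pooled_log):.2f}, "
      f"95% CI ({math.exp(pooled_log - 1.96 * pooled_se):.2f}, "
      f"{math.exp(pooled_log + 1.96 * pooled_se):.2f})")
```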

However, if moderate levels of heterogeneity are detected by I², i.e. a value between 40% and 85%, then the more appropriate method is the random effects method. This method accounts for some of the heterogeneity between studies, and is the more appropriate method when there are a priori grounds to suspect that the true pooled effect would differ between studies. Reasons would include different types of drug therapy, or different levels of exposure, being considered. Briefly, this is done by estimating the between-study variance and using this to modify the weighting of each study.
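The text does not name a specific estimator for the between-study variance; one common choice is the DerSimonian-Laird method, sketched here with the same invented numbers as above:

```python
import math

log_ors = [0.40, 0.55, 0.10, 0.70]   # invented log odds ratios
ses     = [0.20, 0.25, 0.15, 0.30]   # invented standard errors

w = [1 / se**2 for se in ses]                         # fixed effect weights
fixed = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_ors))
df = len(log_ors) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random effects weights: adding tau^2 makes the weights more equal,
# so large studies dominate less than under the fixed effect model
w_re = [1 / (se**2 + tau2) for se in ses]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_ors)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"tau^2 = {tau2:.3f}, pooled OR = {math.exp(pooled):.2f}, "
      f"95% CI ({math.exp(pooled - 1.96 * se_re):.2f}, "
      f"{math.exp(pooled + 1.96 * se_re):.2f})")
```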

27
Q

What is publication bias?

A

Perhaps the most important bias which may be found with systematic reviews arises from the fact that 'interesting' (usually positive) studies are more likely to be published than 'less interesting' (usually negative) ones. This is called publication bias. One way of looking for this is to inspect the magnitude of published effects in relation to the order of their publication - if publication bias is present, then the earlier reports will tend to find larger effects than later studies. The best way to try to minimise publication bias is to undertake a careful review of all the published literature (in all languages), and this may include additional hand searching through the references quoted in each paper. In some cases researchers may also consider asking for unpublished data from other researchers, although it may be unclear whether to include these in the systematic review - doing so may reduce publication bias but may decrease validity, as the work has not been peer reviewed. One proposed solution has been a registry of all ongoing clinical trials, including an amnesty for unpublished trials; this has been greeted enthusiastically by the editors of the BMJ and Lancet, and major observational studies may soon follow suit.
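As a simple hedged sketch of the first check described above, one can plot reported effect sizes against publication year (all values invented for illustration); a downward drift toward the null would be consistent with publication bias:

```python
import matplotlib.pyplot as plt

# Invented studies: earlier reports show larger effects
years = [1982, 1984, 1986, 1989, 1993, 1997]
odds_ratios = [2.4, 2.1, 1.8, 1.5, 1.3, 1.2]

plt.scatter(years, odds_ratios)
plt.axhline(1.0, linestyle="--")          # OR = 1: no effect
plt.xlabel("Year of publication")
plt.ylabel("Reported odds ratio")
plt.title("Effect size by publication order (illustrative)")
plt.show()
```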

28
Q

Describe evaluation of the quality of studies.

A

The design and evaluation of research studies can be extremely difficult, and no research study is ever likely to be perfect. However, anyone involved in carrying out research, or implementing its results, needs to be able to identify major shortcomings in studies and to reach pragmatic conclusions that take proper account of these problems. In assessing published work it is also worth bearing in mind that publication, even in a good journal, does not guarantee that a paper is free from major problems; it tends to be the journals that are struggling for copy that publish papers with major faults.

The following outline is intended to provide a framework for the evaluation of research, but is also one that will apply in the design stages of research studies.

  1. Is the research question clear? - In any study there should be a clear reason for doing the work, which is outlined in the introduction of the paper, and a clear question formulated to address this reason. If a study is not clear about its objectives, then the relevance of the results may also be unclear.

For practical reasons, the question formulated in a study is often slightly different to the question that originally drove the research, and it is important to be sure that the question actually asked in the study is sufficiently close that it is still relevant. It is also appropriate to ensure that the study then addressed this question, and not some subsidiary issue. In general:

  • studies should generally have one clearly defined, relevant primary question.
  • secondary outcomes, subgroup analyses etc can follow from this but the first focus of the analysis and presentation should always be the primary hypothesis test contained in the main question asked by the study.
  2. What study design and methods have been used? - It is important to be clear what design and methods are being used, and where that design lies in the general ranking of levels of evidence in epidemiology:
  • Clinical observation
  • Vital/routine statistics
  • Cross-sectional surveys/Case-control studies
  • Cohort studies
  • Randomised controlled trials

In general, randomised controlled trials provide a very high level of evidence but often in highly selected (and therefore potentially unrepresentative) individuals; observational studies are all susceptible to confounding.

  3. Are the case definitions and/or outcomes clearly defined? - It has to be clear that disease and exposure status, and all other relevant measures used in the study, are clearly defined and free from bias. Make sure that you can understand what the study population is - either in absolute terms (as in a cross-sectional or cohort study) or in conceptual terms (case-control). Check that the working definition of disease used in the study is clear, and is reasonably valid for the disease in question. Make sure that the methods used to measure exposure are clearly described.
  4. Control definition and selection - Particularly in case-control studies, look at the control population to ensure that it is an appropriate reference group.
  • if a case had arisen from the control group, would that case have been included as a case in the study?
  • make sure that the same disease definition has been applied in all controls.
  5. Sampling - All studies involve some kind of sampling. Make sure that you know what the target population of the study was, and, if findings are to be extrapolated to a broader population, that the sampling method can reasonably be expected to have generated a representative sample of that population. If not, then check that the results of the study are not used to make unjustified inferences about the target population.
  6. Bias and confounding - Check that outcomes and exposure are measured in the same way for all study subjects, and look for potential sources of bias in these measurements. If this is not possible, think about the likely consequences in terms of bias. Check that potential confounders have been identified and measured, and appropriately dealt with. Look for evidence on the validity and repeatability of the methods used. Look at the source of funding for the study, and at any declared conflicts of interest by the authors.
  7. Sample size and power - Make sure that the paper justifies the sample size used, and ask whether it is likely to be adequate.
  8. Data processing and analysis - Look at the statistical methods to see if they are appropriate. Check that sampling, matching and other design characteristics are appropriately dealt with in the analysis. Check that the analysis addressed the primary hypothesis before exploring secondary outcomes or subgroups. Look for evidence of overt or covert multiple hypothesis testing.