1A Epidemiology Flashcards

1
Q

Right censoring

A

When people leave an at-risk population before an event of interest has occurred, ie in a cohort study where someone dies or is lost to follow-up before the end of the study period, but had they remained they MAY have developed the disease

2
Q

Period prevalence

A

Proportion of population with an illness during a specified period of time

As a proportion it is a number between 0 and 1. If you multiply by 100 you get a percentage

Frequently quoted in epidemiology as a number per population for the time period

(remember Incidence is a measure of the number of NEW cases of a characteristic that develop in a population in a specified time period; whereas prevalence is the proportion of a population who have a specific characteristic in a given time period, regardless of when they first developed the characteristic)

3
Q

Cumulative incidence

A

Number of NEW CASES of the event of interest over a time period (ie a year), divided by the population at risk

Note the population is disease free at the start!

ie 4 cases of malaria in 1000 people over 1 year = 4/1000 = 0.4% over 1 year
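
A minimal Python sketch of the calculation above (the function name is just for illustration; the figures are the card's own malaria example):

```python
def cumulative_incidence(new_cases, population_at_risk):
    """New cases over the period divided by the disease-free population at the start."""
    return new_cases / population_at_risk

# The card's example: 4 cases of malaria in 1000 people over 1 year
ci = cumulative_incidence(4, 1000)
print(ci)            # 0.004
print(f"{ci:.1%}")   # 0.4% over 1 year
```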

4
Q

Incidence rate (AKA incidence density)

A

Kind of more detailed than cumulative incidence. Takes into account when a person became ill, died or was lost to follow-up, and therefore each person's time at risk of developing the illness. Once they develop the illness, die or are lost to follow-up they are no longer 'at risk'. Reported as a rate per X person-years

Note: as it is expressed per person-years it IS NOT A PROPORTION, so the formula for confidence intervals is different
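
A small person-time sketch in Python, with made-up follow-up times (not from the card), to show why the denominator is person-years rather than people:

```python
# Each tuple: (years at risk before illness, death, loss to follow-up or study end,
#              whether the person developed the illness)
follow_up = [(2.0, True), (5.0, False), (1.5, True), (4.0, False), (3.5, False)]

cases = sum(1 for _, ill in follow_up if ill)
person_years = sum(years for years, _ in follow_up)

rate = cases / person_years
print(f"{cases} cases / {person_years} person-years "
      f"= {rate * 1000:.0f} per 1000 person-years")   # 2 / 16.0 -> 125 per 1000 person-years
```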

5
Q

Ratio data

A

Quantitative data with a true 0, where numbers can be compared by ratios, ie height and weight; 0cm is truly an absence of height and 1m is double 50cm

6
Q

Ordinal data

A

Data with an order but where the interval between options is not always consistent, ie a Likert scale or social class I, II, III, IV, V

7
Q

Interval data

A

Quantitative data with a consistent interval between categories but no true 0, and where ratios between the data are meaningless (ie degrees Celsius: 0 degrees does not mean there is no temperature (you can have minus numbers!) and 20 degrees is not twice as hot as 10 degrees)

8
Q

Nominal data

A

Categorical data with no order ie blood group

9
Q

Dimensions of descriptive epidemiology

A

Time, person, place

10
Q

Total period fertility rate

A

Sum of the age-specific fertility rates. Indicates the average number of babies that would be born to a woman during her lifetime if she had average fertility and survived to the end of her reproductive life.

11
Q

Age specific fertility rate

A

(Number of births to women aged x/1000 women aged x) per year

12
Q

General fertility rate

A

(Number of live births / 1000 women aged 15-44 years) per year

13
Q

Crude birth rate

A

(Number of live births/1000 population) per year

14
Q

Perinatal mortality rate

A

(Number of stillbirths or deaths <7 days / 1000 births) per year

15
Q

Post-neonatal mortality rate

A

(Number of deaths 4-52 weeks/1000 live births) per year

16
Q

Neonatal mortality rate

A

(Number of deaths <28 days / 1000 live births) per year

17
Q

Infant mortality rate

A

(Number of deaths <1 year / 1000 live births) per year

18
Q

Child mortality rate

A

(Number of deaths of children <5 years / 1000 children <5 years) per year

19
Q

Age specific mortality rate

A

(Number of deaths for age x/1000 population aged x) per year

20
Q

Crude mortality rate

A

(Number of deaths/ 1000 population) per year

21
Q

Standardisation

A

Process by which data is transformed to allow comparison between populations with different demographics (ie different age structures)

22
Q

Direct standardisation

A

Requires stratum-specific data (ie the age-specific mortality rate). These rates are then applied to a standard population (ie the European standard population) (ie how many people in town A would have died if there were 60,000 40-45 year olds rather than 40,000). The 'expected deaths' for each age group are totalled and divided by the total number of people in the standard population. This is the age-standardised mortality rate. This has little meaning on its own but can be compared with another population standardised using the same reference population.

Needs data from large numbers to have accurate information for all strata
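
A hedged Python sketch of the steps described above, using made-up stratum-specific rates and a made-up standard population (not real data):

```python
# Hypothetical age-specific mortality rates for town A (deaths per person per year)
town_a_rates = {"0-39": 0.001, "40-64": 0.005, "65+": 0.03}
# Hypothetical standard (reference) population
standard_pop = {"0-39": 50_000, "40-64": 30_000, "65+": 20_000}

expected_deaths = sum(town_a_rates[age] * standard_pop[age] for age in standard_pop)
age_standardised_rate = expected_deaths / sum(standard_pop.values())

print(f"Expected deaths in the standard population: {expected_deaths:.0f}")           # 800
print(f"Age-standardised rate: {age_standardised_rate * 1000:.1f} per 1000 per year")  # 8.0
```

Repeating the same calculation for town B against the same standard population gives a rate that can be compared directly (for example as the comparative mortality ratio in the next card).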

23
Q

Comparative mortality ratio

A

age-standardised mortality rate town A/ age standardised mortality rate town B.

if ratio is for example 1.114 then after standardisation mortality is 11.4% higher in town A.

calculated for direct standardisation

24
Q

weighted average

A

A weighted average is a method of computing an average where some data points contribute more than others. If all the weights of the data point are equal then the weighted average is the same as the mean.

ie when combining module assessment marks, coursework is worth 60% and the exam 40%. Your overall mark is a weighted average.
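
A minimal sketch of the idea, using made-up module marks with the card's 60/40 weighting:

```python
def weighted_average(values, weights):
    """Sum of value * weight divided by the sum of the weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# e.g. coursework mark 70 (weight 60%), exam mark 55 (weight 40%)
print(weighted_average([70, 55], [0.6, 0.4]))   # 64.0
# With equal weights the weighted average is just the ordinary mean
print(weighted_average([70, 55], [1, 1]))       # 62.5
```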

25
Q

indirect standardisation
(when to use (3), what is the output and how to do)

A

Used when:
- strata specific rates are unknown
- study population is of a small size
- stratum specific rates are 0

Gives the STANDARDISED MORTALITY RATIO

Involves taking the rates for a standard population and calculating what would occur in the population of interest if the rates were the same (ie how many deaths you would expect in the population you are looking at). The expected number of deaths can then be compared with the observed number, using the standardised mortality ratio.
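
A hedged sketch of indirect standardisation with invented reference rates, a small invented study population and an invented observed death count (the SMR formula itself is on the next card):

```python
# Illustrative only -- not real data
reference_rates = {"40-59": 0.004, "60-79": 0.02}   # deaths per person per year in the standard population
study_pop       = {"40-59": 2_000, "60-79": 500}    # age structure of the (small) population of interest
observed_deaths = 22                                # deaths actually seen in the population of interest

expected_deaths = sum(reference_rates[age] * study_pop[age] for age in study_pop)
smr = observed_deaths / expected_deaths * 100

print(f"Expected deaths: {expected_deaths:.0f}")   # 18
print(f"SMR: {smr:.0f}")                           # 122 -> ~22% more deaths than expected
```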

26
Q

standardised mortality ratio (SMR) AKA standardised Incidence ratio

A

Calculated when using indirect standardisation.

ratio of 2 counts

(observed deaths / expected deaths) x100

SMRs should be compared with caution as social class, ethnicity/sex composition will all have an impact. Furthermore the different age distributions of the populations make comparisons likely not valid.

If two SMRs are considered they are not directly comparable even if they use the same mortality rates from the reference population:
* the reason is that the two expected death counts are constructed using the different age distributions of the two study populations
* hence they are not referring to the same denominator

27
Q

Standardised mortality ratio and occupation studies

A

In occupational exposure studies the SMR often underestimates the strength of an association, as the general population contains both exposed and unexposed people.

When doing occupational studies comparisons are made against 2 groups:
- an unexposed population from the same occupation
- the general population

28
Q

years of life lost

A

measure of premature mortality
death of young people contributes more than death of older people. Can be calculated in 2 ways:
Simple: an upper age is chosen (ie 75 years) and any deaths before that are counted and the number of years lost summed.
Complex: life expectancy for each individual is calculated using life tables and age-specific mortality rates.

Underestimates the burden of chronic disease as young people live with this for a long time.

29
Q

HALE

A

health adjusted life expectancy.

Can be used instead of years of life lost; better for estimating the impact of chronic diseases, where years of life lost may underestimate the disease burden (as it only counts death, not years spent feeling unwell)

HALE is a measure of population health that takes into account mortality and morbidity. It adjusts overall life expectancy by the amount of time lived in less than perfect health. This is calculated by subtracting from the life expectancy a figure which is the number of years lived with disability multiplied by a weighting to represent the effect of the disability.

If:

A = years lived healthily
B = years lived with disability

A+B = life expectancy

A+fB = healthy life expectancy, where f is a weighting to reflect disability level.

N.B. This raises many moral questions about who defines and measures disability level and how they do it.
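
A worked example of the A + fB formula above, with made-up figures and a weighting chosen purely for illustration:

```python
A = 70    # years lived healthily
B = 10    # years lived with disability
f = 0.5   # illustrative disability weighting (choosing f is the moral question noted above)

print(A + B)       # life expectancy: 80
print(A + f * B)   # healthy life expectancy: 75.0
```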

30
Q

event based measures of disease burden

A

Rely on routinely collected data on INCIDENCE

ie death certs, hospital admission data, disease registers, statutory notifications.

Any health service data will likely underestimate disease burden due to large proportion of self care.

31
Q

Time based measures of disease burden

A

Where no routinely collected incidence data are available, a cross-sectional PREVALENCE survey may be used.

32
Q

causes of variation in epidemiological studies

A

Chance (random error)
Bias
Confounding
True causal association
Reverse causation

Only after all other causes of variation have been considered should the possibility of a causal relationship be considered.

33
Q

Bias

A

A systematic error that leads to a difference between the comparison groups with regard to how they are selected, treated, measured or interpreted.

Unlike confounding the role of bias cannot be measured.

34
Q

Confounding

A

Where an apparent association between exposure and outcome is in fact due to a third factor

35
Q

Reverse causation

A

when the outcome of interest leads to variation in the exposure

36
Q

Sampling error

A

Sampling error is chance variation (as long as the study is unbiased) between the values obtained for the study sample and the values which would be obtained if measuring the whole population.

It is reflected in the standard error.

37
Q

Standard error

A

Reveals how accurately a sample represents the whole population.

38
Q

random measurement error

A

Affects both the exposed and non-exposed groups

findings tend towards the null hypothesis

39
Q

systematic error

A

leads to bias
bias can occur in either direction

40
Q

ways to deal with measurement bias

A

Measure reliability using correlation coefficients (cont. data) or Cohen's kappa (categorical data)

blind accurately

Use validated measuring tools and protocols

use a range of measures (direct measurements, questionnaires etc)

Conduct a sensitivity analysis

report potential errors both random and systematic

41
Q

sensitivity analysis

A

Sensitivity analyses are used to determine the extent to which the results of a trial are affected by changes in method, models, values of unmeasured variables or assumptions

ie could analyse results with/without outliers, intention to treat or as treated

if changing things does not impact the results the results are likely more robust

42
Q

risk

A

Same as cumulative incidence

Measures can be absolute or relative

43
Q

Attributable risk

A

On formula sheet.

The difference between the rate of disease in the exposed and unexposed

AKA risk difference or excess risk

44
Q

Attributable fraction

A

AKA the aetiological fraction

This is the proportion of the disease in the exposed which can be considered to be due to the exposure, after accounting for risk of disease that would have occurred anyway.

it is a measure that combines the risk difference and the prevalence

45
Q

Population attributable fraction

A

The PROPORTION of the incidence of a disease in the population (exposed and nonexposed) that is due to exposure.

It is the proportion of a disease in the population that would be eliminated if exposure were eliminated

46
Q

Population attributable risk

A

The excess RATE of disease in the whole population which is attributable to the exposure.

ie smoking and lung cancer mortality
mort. in whole pop = 55 per 100 000
mort. in non-smokers = 16 per 100 000
PAR = rate in whole pop. - rate in unexposed (non-smokers) = 39 deaths/100 000 per year

AKA preventable fraction
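
The card's smoking figures, worked through in Python (the PAF line simply reuses the previous card's definition):

```python
rate_population = 55 / 100_000   # lung cancer mortality in the whole population
rate_unexposed  = 16 / 100_000   # mortality in non-smokers (the unexposed)

par = rate_population - rate_unexposed
print(f"PAR = {par * 100_000:.0f} per 100 000 per year")   # 39

# Population attributable fraction (previous card): PAR as a proportion of the whole-population rate
paf = par / rate_population
print(f"PAF = {paf:.0%}")   # 71%
```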

47
Q

Effect measures

A

Effects measures are RELATIVE RISKS.

They do not give any idea about the absolute risk of an event but rather are a ratio of the probability of disease between the exposed and unexposed.

Give an indication of ‘strength of association’

They include the risk ratio (same as relative risk), the odds ratio and the rate ratio

48
Q

risk ratio aka relative risk

A

risk of the disease in the exposed / risk of the disease in the unexposed

(a/(a+b)) / (c/(c+d))
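
A sketch with a hypothetical 2x2 table (the numbers are invented) to show where a, b, c and d come from:

```python
#             disease   no disease
# exposed        a=20         b=80
# unexposed      c=10         d=90
a, b, c, d = 20, 80, 10, 90

risk_exposed   = a / (a + b)    # 0.20
risk_unexposed = c / (c + d)    # 0.10
print(risk_exposed / risk_unexposed)   # 2.0 -> the exposed have twice the risk
```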

49
Q

rate ratio

A

rate of the disease in the exposed/ rate of the disease in the unexposed

50
Q

Odds ratio

A

used in case control studies where you have selected based on presence of disease so you cannot calculate risks.

You can calculate odds of diseased/not diseased having the exposure.

If the disease is rare the OR approximates to the RR.

odds of the disease in the exposed/odds of disease in the unexposed

(a/b)/(c/d)
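
Using the same hypothetical 2x2 table as in the risk ratio card above (invented numbers):

```python
a, b, c, d = 20, 80, 10, 90   # disease/no-disease counts in the exposed (a, b) and unexposed (c, d)

odds_ratio = (a * d) / (b * c)   # algebraically the same as (a/b)/(c/d)
print(odds_ratio)                # 2.25

# With a 10-20% risk the disease is not rare, so the OR (2.25) overstates the RR (2.0);
# for a rarer disease the two converge.
```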

51
Q

Bradford Hill Criteria (9)

A
  1. Strength of association
  2. Temporal relationship
  3. Reversibility
  4. Biological plausibility
  5. Coherence (like biologic plausibility, the relationship should not conflict with the natural history of the disease)
  6. Specificity (exposure only causes one disease)
  7. Analogy (analogies to other cause and effect relationships)
  8. Consistency of findings
  9. Dose-gradient
52
Q

Studies Try Revealing Breakthroughs, check scientists are credible dorks

A

Strength of association
Temporal relationship
Reversibility
Biological plausibility
Coherence
Specificity
Analogy
Consistency of findings
Dose response

53
Q

2 broad types of bias

A

selection bias
Measurement bias

54
Q

What is selection bias?

A

When there is a systematic difference between:
-study participants and non-participants
-those in one study group (ie intervention) and those in another group (ie control)

55
Q

4 types of selection bias

A

-Healthy worker bias
-volunteer bias
- follow up bias
- control bias

56
Q

Healthy worker bias

A

Problem in occupational cohort studies. People working tend to be healthier than the general population. For this reason cohort studies may use workers from the same workplace but different job role as controls

57
Q

volunteer bias

A

people who volunteer for studies tend to be healthier and more compliant than the general population

58
Q

Control bias

A

Particularly a problem in case-control studies. In these, convenience sampling is often used, ie cases are obtained from a hospital clinic list and controls from a different hospital clinic list. This may mean neither cases nor controls are truly representative of the gen. population. This is improved by using nested case-control studies. The case-control study is nested in a cohort study. Exposure data is collected at baseline; if a case develops then controls are selected from the cohort. Data is only analysed from the cases and controls rather than the whole cohort.

59
Q

Follow up bias

A

When those lost to follow up differ systematically from those who remain in the study

60
Q

Measurement bias

A

When there are errors in the way outcomes or exposures are measured

61
Q

Non differential (random) measurement bias

A

The error in assessing exposure/outcome occurs equally in both study and control groups. The misclassification is not related to outcome or exposure.

Serves to make the groups appear more similar than they are in reality.

62
Q

3 main types of measurement bias

A

-Instrument bias
-Respondent bias
-Observer bias

63
Q

Differential (systematic) measurement bias

A

classification error occurs differently depending on a person's outcome or exposure status.

Can serve to reduce or exaggerate an association between exposure and outcome.

64
Q

Instrument bias

A

inaccuracies in equipment or test used to measure outcome/exposure

65
Q

Responder bias (3 examples and ways to minimise)

A

Occurs when:
- exposure information given by respondents differs depending on their outcome
- outcome information given by respondents differs depending on their exposure

ie RECALL BIAS- a particular problem in case-control studies

PLACEBO EFFECT- if an intervention has been received participants will report outcomes more favourably

can be minimised by- blinding, giving placebos, collecting exposure info from historical health records, using objective outcome/exposure measures

66
Q

Observer bias (1 example and how to minimise)

A

systematic differences in the way exposure/outcome data is recorded between study groups

ie INTERVIEWER BIAS- the interviewer may ask different questions if they know the participant has had an intervention.

Minimise through- blinding, standardised data collection protocol

67
Q

General measures to reduce bias (10) mnemonic

A

BIRTH CREW D

68
Q

BIRTH CREW D

A

general measures to reduce bias

B- Blinding
I- Irrelevant factors- collect irrelevant factors to measure bias and blind the hypothesis under investigation
R- Repeated measurements to reduce instrument bias
T- Training
H- High risk cohort- select cohorts at high risk of disease to reduce follow up time and therefore follow up losses

C- Choice of controls, use hospitalised controls to increase comparability (they will have a similar level of recall of events prior to admission)
R- Randomisation
E- Ease of follow up- choose cohorts who are easy to follow up
W- Written protocol

D- Duplicate measures- get information on exposure/outcome from multiple sources

69
Q

3 measures to reduce bias in questionnaires

A
  1. check for known associations
  2. seek information in different ways
  3. check characteristics of data collection (ie time to complete survey)
70
Q

4 ways to measure bias in intervention studies

A
  1. self report
  2. pill counts
  3. measuring biochemical parameters
  4. incorporating safe biochemical marker in placebo that can be measured in urine
71
Q

Mediating factors

A

confounding factors have to be independently associated with both exposure and outcome.

Mediating factors are a step along the causal chain.

ie poor diet and CHD are associated. High cholesterol is a mediating factor

Poor diet –> high chol –> CHD

72
Q

positive confounding

A

makes an association more pronounced

73
Q

negative confounding

A

makes an association less pronounced

74
Q

Residual confounding

A

When unknown confounding factors have not been accounted for or when confounders have been inaccurately measured.

Essentially eliminated by randomisation, as unknown confounders are then randomly distributed between the 2 groups.

75
Q

effect modification

A

An effect modifier is a third variable which affects the strength of the association between the exposure and the outcome.

ie smoking and asbestos exposure –> lung cancer. Smokers with chronic asbestos exposure have a far greater risk of lung cancer than the 2 risks added together; asbestos exposure is an effect modifier.

76
Q

Assessing effect modification

A

Stratified analysis provides a way to identify effect modification (ie look at strength of association for each level of the effect modifier)

A chi squared test for interaction can be used to assess whether the difference between strata specific estimates are likely due to effect modification or chance, however the test has low power so estimates should also be checked visually.

77
Q

2 stages of a study that confounding can be addressed

A

Design and analysis

78
Q

3 strategies for dealing with confounding at the design stage

A

Randomisation- deals with both known and unknown confounders if the sample is large enough, but not always possible
Restriction- ie if sex and race are known to confound, just use black women. Cheap but restricts the pool of participants; residual confounding remains if restriction is insufficiently narrow
Matching- really only used in case-control studies as it is difficult and expensive. Cannot assess the impact of factors that have been matched. No control over factors which have not been matched.

79
Q

3 strategies for dealing with confounding at analysis stage

A
  1. Stratification- divide the study population into groups according to the confounder so that within groups the confounder cannot confound as it does not vary. After stratification the Mantel-Haenszel estimator can be employed to provide an adjusted result for each stratum and a combined weighted average. If this differs from the crude estimate of effect strength, confounding is at play. Can only deal with a small number of confounders as the number of strata increases exponentially and therefore the number in each group decreases.

  2. Standardisation - ie direct and indirect

  3. Multivariate analysis - ie multiple regression or logistic regression. Can deal with multiple confounders

80
Q

2 examples of descriptive studies

A

case reports and case series

81
Q

strengths (3) and weaknesses (4) of descriptive studies

A

Strengths: Cheap, rapid, can support hypothesis generation
Weaknesses: No control group, cannot test for valid statistical association, may not be generalisable, cannot assess for disease burden

82
Q

Ecological studies what are they and the 2 main types

A

characterised by the unit of observation being a group.

Describe a pattern of disease for an entire population with regards to another parameter.

Measures correlation coefficient

2 main types:
geographical studies
time series study

83
Q

Strengths (4) and weaknesses of ecological studies (6)

A

Strengths: Rapid, cheap, can use routine collected data, support hypothesis generation

Weaknesses: Ecological fallacy, no individual level data, spatial autocorrelation (2 places close together are likely to be more similar than 2 places far apart- analysis assumes places are independent but they may not be), leakage of exposures through migration, assesses average exposure (would not be able to detect a J shaped curve), unable to control for unknown confounders

84
Q

Ecological fallacy

A

When inferences about individuals are drawn from population level data from ecological studies.

ie a study showed that USA states with high levels of immigrants had higher literacy levels. People deduced that migrants had high levels of literacy. Actually migrants were more likely to move to states with high literacy levels but their literacy levels were low.

85
Q

Cross sectional studies (design, sampling, application, analysis, strengths, weaknesses)

A

Information on exposure and outcome is collected at a single timepoint.

Can be descriptive (ie data collected on exposure or outcome), analytical (assesses the association between exposure and outcome) or ecological (no individual level data)

DESIGN- data collected on exposure and outcome at a single time point
SAMPLING- needs to be representative of the population under study, should be random and sufficiently large
APPLICATION- hypothesis formulation, or hypothesis testing if analytic
ANALYSIS- disease frequency: prevalence or odds; measure of effect: OR, prevalence ratio or prevalence difference
STRENGTHS- rapid and cheap, useful for rare diseases, can study multiple exposures and outcomes, useful for assessing disease burden
WEAKNESSES- as it only assesses prevalence not incidence it cannot distinguish between determinants of aetiology and survival, hard to establish temporality (risk of reverse causation), risk of recall bias

86
Q

case-control (design, sampling, application, analysis, strengths, weaknesses)

A

Design: cases are identified and matched with controls. Ideally cases should be 1:1 with controls but the ratio can be up to 1:4 if cases are limited. More controls than this adds little to study power.
Sampling: can be population based or hospital based but as hospital population is not always representative of the general population, population based is better
Application: can be used to test hypothesis. Can be retrospective (all cases are identified before study starts) or prospective (new cases are identified during the study period).
Analysis: calculate OR. Cannot calculate disease prevalence
Strengths: cheap, can be rapid, good for rare diseases, useful for diseases with long latent periods, can examine many exposures simultaneously
Weaknesses: selection bias (control bias) since exposure and disease have already occurred, recall bias, temporal relationships may be difficult to establish, poor for rare exposures.

87
Q

nested case control study

A

Nested within a cohort study. Cases and controls are selected from the cohort and the data collected utilised.

Limits selection (control) bias as cases and controls are drawn from the cohort. Cost effective and can avoid recall bias by using previously obtained information.

88
Q

Cohort study (Design, sampling, application, analysis, strengths, weaknesses)

A

Design: participants identified based on exposure. Can be retrospective (exposure and outcome assessed from case notes) or prospective (normal)

Sampling: Population based sampling is generally better, especially if the exposure is common. If the exposure is rare the cohort may be chosen from a specific group (ie builders for asbestos exposure), but note if using a workplace the risk of the healthy worker effect

Application: able to measure incidence in both groups

Analysis: Relative- risk/rate/odds ratio
Absolute: risk/rate/odds difference. Must assess group similarity to assess for confounding. Can assess losses to follow up by considering 2 extreme scenarios- all those lost develop disease or all those lost do not develop disease.

Strengths: Can establish temporal relationship, good for rare exposures, can look for multiple outcomes from one exposure, minimises selection bias, retrospective cohort studies are useful for diseases with long latent periods

weaknesses: Expensive, time consuming, healthy worker effect, bad for rare diseases, risk of loss to follow up, records may be incomplete for retrospective cohort studies

89
Q

intervention studies (design, sampling, application, analysis, strengths, weaknesses)

A

Design: the investigator determines which participants receive an exposure. Can be affected by non-compliance (this can be improved by having a run-in period pre-randomisation to assess and improve acceptability of the treatment/placebo); if non-compliance is an issue results will tend towards the null hypothesis.

SAMPLING: Sample needs to represent the reference population

APPLICATION: can investigate therapeutic or preventative interventions at individual or group level

STRENGTHS: Can provide high quality evidence, if sample large enough validity largely guaranteed, blinding can minimise observation bias, randomisation can eliminate residual confounding

WEAKNESSES: expensive, ethics (need to have clinical equipoise), does not test treatments in real world scenario, can be difficult to generalise to general population

90
Q

Crossover RCT

A

each participant acts as their own control, they receive 2 or more treatments during the study period

91
Q

Factorial RCT

A

Compares 2 or more interventions alone and in combination (ie drug A, drug B, drug A+B or placebo). Needs a lot of participants

92
Q

Cluster RCT

A

when groups are randomised, not individuals.

93
Q

Challenges with small area analysis

A
  1. there may be little variation in exposure between areas making analytical studies difficult
  2. chance/incorrect data may have a greater effect on results
  3. there may be a lack of data
93
Q

why do small area analysis and an example

A

Some diseases may be significantly high in some small areas and this will be lost in larger area averages. Having high quality local data can therefore be beneficial.

Dartmouth atlas of healthcare looks at medical supply and utilisation across areas of the US and examines variation

94
Q

definition: Validity

A

How well an instrument measures what it intends to measure

95
Q

Ways to assess/describe validity (4)

A
  1. criterion validity
  2. Face validity
  3. Content validity
  4. Construct validity
96
Q

What is criterion validity

A

there are 2 types of criterion validity
1. Concurrent validity- how well an instrument compares to a gold standard
2. Predictive validity: how well an instrument predicts what it aims to ie risk of developing disease

97
Q

What is face validity

A

How well an instrument compares to expert opinion

98
Q

What is content validity

A

How appropriate an instruments content and composition is given what it is trying to measure (ie does a questionnaire on depression cover all the important symptoms)

99
Q

What is construct validity

A

How well an instrument measures what it intends to measure (ie does a questionnaire on leadership measure whether someone is a good leader or does it actually measure whether they are a good manager?)

100
Q

Ways to improve validity (3)

A
  1. Assess instruments against gold standard/ expert opinion
  2. Triangulation using multiple research methods
  3. Address measurement bias (ie standard operating procedures and blinding)
101
Q

Define reliability

A

AKA repeatability or reproducibility

An instrument is 100% reliable if it gives the same result every time it measures the same thing

102
Q

4 types/ methods of assessment of reliability

A
  1. Intra-observer reliability
  2. Inter-observer reliability
  3. Equivalence
  4. Internal consistency
103
Q

Intra-observer reliability (and how to measure)

A

Consistency of results when a single observer makes multiple measurements on the same subject

Can be measured using correlation coefficients (for continuous measures, >0.7 generally considered good) or kappa statistics (binary or nominal measures), which assess the degree of agreement above that which would be expected by chance.
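
A minimal sketch of the kappa idea (chance-corrected agreement); the agreement figures are invented:

```python
def cohens_kappa(p_observed, p_expected):
    """Agreement above chance, as a fraction of the maximum possible improvement over chance."""
    return (p_observed - p_expected) / (1 - p_expected)

# e.g. the two sets of readings agree on 85% of subjects,
# while 60% agreement would be expected by chance alone
print(cohens_kappa(0.85, 0.60))   # 0.625
```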

104
Q

Inter-observer reliability

A

Degree of consistency between measures done by different people on the same subject.

Can be measured using the correlation coefficient (continuous measures, >0.7 considered good) or kappa statistic (assesses the degree of agreement above that which would be expected by chance, binary or nominal measures)

105
Q

Limitations of the kappa statistic

A
  • assumes observers are independent
  • gives no info on why there is lack of concordance
106
Q

Internal consistency (and how to measure)

A

This considers how well all elements within an instrument measure what they intend to. For example, in a questionnaire on self-esteem you would expect those with low self-esteem to score consistently low; however, if one question actually measured loneliness, people with low self-esteem may score highly on that question.

Measured using Cronbach's alpha

107
Q

Equivalence reliability and how to measure

A

Degree of agreement between two instruments which measure the same thing (ie a questionnaire in 2 different languages)

measured using the equivalence reliability coefficient

108
Q

external validity/ generalisability

A

How applicable a study’s findings are to other populations
This is a judgement call

109
Q

Internal validity

A

the extent to which the observed results represent the truth in the population we are studying and, thus, are not due to methodological errors

109
Q

Why use intention to treat analysis (3 reasons)

A
  1. gives more real world view on the result of offering people a treatment
  2. People who do not comply with treatment may differ systematically from those who do (ie be more or less unwell)
  3. Allowing switching of groups negates the benefit of randomisation to balancing unknown confounders
110
Q

How to deal with lost to follow up

A

Ideally in an intention to treat analysis those lost to follow up should be included in analysis.

This can be done by seeking their outcome or imputing outcomes.

Alternatively you can exclude them from the main analysis and then impute their outcomes for use in a sensitivity analysis (you would consider the worst case scenario for the intervention (all those lost to follow up in the control group are free of disease and all those lost to follow up in the intervention group have disease) and the best case scenario for the intervention (all those lost to follow up in the control group have the disease and all those lost to follow up in the intervention group are free of disease))

111
Q

Strategies for imputing outcomes (4)

A
  1. Assume the best (all those lost to follow up are disease free)
  2. Assume the worst (all those lost to follow up have disease)
  3. Worst case scenario for intervention (all those lost to follow up in control group are free of disease and all those lost to follow up in intervention group have disease)
    4. Best case scenario for the intervention (all those lost to follow up in the control group have the disease and all those lost to follow up in the intervention group are free of disease)
112
Q

What is clustered data

A

Clustered data is data that are not fully independent of each other.

Ie if you take blood pressure on the same person twice, those 2 data points are likely to be more similar than if you took blood pressure twice on 2 separate people.

Equally 2 people from the same GP surgery are likely to be more similar (background, SE status, ethnicity) than 2 people from different GP surgeries

The similarity between subjects within clusters reduces the variability of responses and thus reduces the statistical power to detect a difference between control and intervention groups.

113
Q

Types of clustered data

A
  1. multiple measurements on the same individual
  2. cluster RCTS
  3. cluster sampling surveys
114
Q

What is the design effect

A

this is the degree to which you need to increase a sample size to account for clustered data

115
Q

How is the design effect calculated

A

The design effect depends on the number of participants within each cluster and the INTRACLASS CORRELATION COEFFICIENT

116
Q

What is the intraclass correlation coefficient

A

This measures the similarity of clustered data. It compares similarity within clusters and between clusters. Determining it often requires a pilot study. This information will help determine the design effect.

The intraclass correlation coefficient gives values of rho

rho=1: all the values within a cluster are the same; your sample size is effectively the same as the number of clusters

rho=0: there is no correlation of values within clusters; they are independent
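
A sketch linking rho to the design effect from the previous cards, using the commonly quoted form for equal cluster sizes, 1 + (m - 1) x rho (that formula is an assumption of this sketch, not stated on the card); the cluster size and rho values are invented:

```python
def design_effect(cluster_size, rho):
    """Design effect for equal-sized clusters: 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * rho

print(design_effect(20, 0.05))   # 1.95 -> need ~1.95x the sample size calculated for independent data
print(design_effect(20, 1.0))    # 20.0 -> effective sample size shrinks to the number of clusters
print(design_effect(20, 0.0))    # 1.0  -> values independent, no inflation needed
```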

117
Q

Different methods to analyse clustered data (3)

A
  1. calculate summary statistics for each cluster and then compare using standard statistical tests (however, you lose a lot of individual level data)
  2. calculate CLUSTER ROBUST STANDARD ERRORS (these are a form of standard error that accounts for clustering and gives wider confidence intervals and more conservative P values)
  3. Use fixed or random effects models within an ANOVA or multilevel regression analysis of clustered data
118
Q

Number needed to treat/ number needed to be exposed

A

The reciprocal of absolute risk reduction ( ie 1/ARR)

ARR is the proportion of events in the control group minus the proportion of events in the treatment group (ensure the proportion is not expressed as a percentage, ie express 3.3% as 0.033)

NNT is an intuitive way of expressing ARR. It should always be expressed with a time frame ie need to treat 30 people for 5 years to prevent 1 death.

NNT is dependent on the baseline risk of disease in the population so cannot be generalised to populations with a different baseline risk.

NNT is subjective, is a NNT of 30 good or bad?
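
A quick worked example (the risks are invented, chosen to land on the card's 3.3% / "treat 30 people" figures):

```python
risk_control   = 0.10    # proportion with the event in the control group over 5 years
risk_treatment = 0.067   # proportion with the event in the treatment group over 5 years

arr = risk_control - risk_treatment   # absolute risk reduction
nnt = 1 / arr
print(f"ARR = {arr:.3f}, NNT = {nnt:.0f} over 5 years")   # ARR = 0.033, NNT = 30
```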

119
Q

what is Time series design

A

A subtype of longitudinal research design. Usually ecological in nature (population level observation)

Observations are taken at several time points on the same variable

Generally generate hypothesis for further testing rather than demonstrate causality.

When observations are taken before and after an event they are called an interrupted time series design

120
Q

Time trend analysis and its uses

A

Uses comparisons between groups to draw conclusions on the effect of an exposure on different populations.

Measurements used are often proportion or rate

Trends are often used by public health professionals to plan policy, conduct needs assessments and plan services

Examining data over time can give information on potential future scenarios

Moving averages are a helpful way of presenting time series data as it can smooth fluctuations whilst trends can still be visualised. However, if averaged over too long a period then data can be over-smoothed and trends lost.

121
Q

Time series analysis

A

Refers to a set of specialised regression methods that illustrate trends in the data.

3 main types exist:
- autoregression models
-moving averages model
- integrative models

All three of these models can be combined.

These models incorporate information from past observations and errors from previous modelling in order to estimate future predicted values

Time series analysis can account for the fact that data points taken over time may have an internal structure (seasonal variation or autocorrelation)

122
Q

Time series analysis: autoregressive models

A

Autoregressive models are based on the presumption that past values will have an effect on current values. The number of past values which contribute to the current value estimation can be adapted

123
Q

time series analysis: moving averages model

A

A moving averages model estimates future values based on errors in past forecasts

124
Q

Time series analysis: integrative model

A

Adding an integrative term enables a model to account for underlying trends. It does not assume that the mean and variance of values remains the same over time but will account for seasonality etc.

125
Q

Interpretation of data from time trend design studies (6 reasons for caution)

A

- population level data which does not give information on individual risk
- data often collected routinely and not for the study purpose
- different populations may collect data on exposure and outcome differently
- migration of people between populations throughout the study may dilute any differences
- changes in a single population's structure (ie age structure) can lead to change over time not related to the exposure
- seasonal variation leads to fluctuations which can hide trends, but this can be accounted for in time series analysis (using an integrative model)

126
Q

sampling error

A

Typically when doing a study we cannot study an entire population.

We therefore study a sample of the population.

Any deviation between the study value and the true population value is termed the ‘sampling error’.

Obviously we do not know the true population value but we can estimate the sampling error using stats.

The formula includes a z score (based on the confidence intervals so normally 1.96), the standard deviation of the population and the size of the sample.

A larger sample therefore decreases sampling error.
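
A sketch of the margin-of-error form described above (z x SD / sqrt(n)); the standard deviation is invented:

```python
import math

def margin_of_error(sd, n, z=1.96):
    """z * standard deviation / sqrt(sample size)."""
    return z * sd / math.sqrt(n)

print(margin_of_error(sd=10, n=100))   # 1.96
print(margin_of_error(sd=10, n=400))   # 0.98 -> quadrupling the sample halves the error
```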

127
Q

2 main types of sampling and advantages and disadvantages

A

1. Probability sample:
- you start with a complete sampling frame of all eligible individuals from whom you will sample
- can calculate the sampling error
-more generalisable results
- more time consuming and expensive

  2. Non-probability sample
    - does not start with a complete sampling frame so some individuals who fit the study criteria have no chance of being selected.
    -cannot measure sampling error
    - risk of non-representative sample
128
Q

Sampling frame

A

This is your list of eligible participants from whom you will select your sample; it is a database of everyone in your population. No one should be excluded.
ie if studying energy drink consumption in Portsmouth school aged children you should have a list of all Portsmouth school aged children

But there are challenges, and choosing the correct sampling frame is essential. If the population is large it can be hard to find a sampling frame that includes everyone, ie in the above example you could use school lists, but what about home schooled children?

129
Q

4 types of probability sampling

A
  1. Random
  2. Systematic
  3. Stratified
  4. Cluster
130
Q

Random sample

A

Participants are chosen at random from the sampling frame.

+ Can calculate the sampling error
+ reduces selection bias
X relatively inconvenient
X If variable of interest is rare you may not have sufficient numbers
X can be difficult to contact people who may be geographically scattered

131
Q

Systematic sampling

A

From your sampling frame you select every eg 10th individual

+ more convenient than random sampling
+ allow sampling error to be calculated
x can be prone to selection bias ie if the sampling frame is a school roll and the students listed alternate male/female, if you chose every 10th they'd all be female

132
Q

stratified sampling

A

Type of probability sampling.

The population are first divided into subgroups and a certain number of people are then chosen from each subgroup. It is used when you might reasonably expect the variable of interest to vary between subgroups and want to ensure proper representation across subgroups.

+ Improves accuracy and representativeness of sample by reducing sampling bias
+ efficient
+ allows sampling error to be calculated

X requires data on the sampling frame (ie gender) which might not be available
X choice of relevant stratification variables can be difficult

133
Q

Cluster sampling

A

Type of probability sampling. In this type of sampling the unit of sampling is a group rather than an individual. The population is divided into clusters which are then randomly chosen to be in the study.

+ Allows sampling error to be calculated
+ convenient and efficient
X Increased risk of sampling error due to increased risk of selection bias and therefore sample non-representativeness

134
Q

4 methods of non-probability sampling

A
  1. Convenience
  2. Purposive
  3. Quota
  4. Snowball
135
Q

Convenience sampling

A

Type of non-probability sample.

Study participants are chosen based on their ability and willingness to participate.

+ quick and cheap
+ very efficient so good for preliminary research
X volunteer bias
X cannot calculate sampling error

136
Q

Purposive sampling

A

Type of non probability sampling.

Subjects are chosen purposely because they have a certain characteristic

+ Useful for rare characteristics
+ time and cost effective
X sampling error cannot be calculated
X volunteer bias
X may not be representative of the population

137
Q

Quota sampling

A

Type of non-probability sampling.

Begin by determining the characteristics of interest (ie sex) and then select subjects to represent the proportional distribution of these characteristics in the population.

+ Representative with regards to the considered characteristics
X cannot calculate sampling error
X volunteer bias
X may be important characteristics which were not considered

138
Q

Snowball sampling

A

Type of non-probability sampling.

Subjects are asked to recommend people they know who may take part in study and have appropriate characteristics.

+ Cheap and efficient
+ useful for hard to reach groups
+ useful where no sampling frame exists
X cannot calculate sample error
X volunteer bias

139
Q

Choice of control within case control studies

A

Case control studies are very vulnerable to selection bias.

The choice of controls will depend on the situation. Controls are people who ‘would have been identified had they had/developed the disease’, it is not just the entire non-diseased population.

2 common ways of obtaining controls are:
1. healthcare controls
2. population controls

140
Q

Healthcare controls for case control studies (advantages and disadvantages)

A

Purposive sampling strategy (through referral data etc)

+easily identified
+ as unwell they tend to have a fairly good memory of recent events
+ same hospital as cases so same influences in choosing hospital
+ more co-operative than healthy people
X As people are unwell they are likely different from general population
X different hospitals have secondary and tertiary activity for different specialities
X different specialities have different catchment areas

141
Q

Population controls for case control study (advantages and disadvantages)

A

random or purposive sampling (cases friends/neighbours etc)

+ if using case friends/neighbours etc then people may be more motivated to participate
+ Healthy
X Less good memory of recent events
X less co-operative
X often out during the day- work etc
X tend to be more costly and time consuming

142
Q

Bias in sampling (5 potential source of bias in sampling irrespective of method used)

A
  1. If selected participants are replaced with others
  2. If any pre agreed sampling rules are deviated from
  3. hard to reach groups are omitted
  4. Low response rates
  5. An out of date list is used for a sampling frame (ie omits people who have moved to the area recently)
143
Q

Random allocation (and 6 types of random allocation techniques)

A

Random allocation of study participants to intervention/control is the gold standard.

If the groups are large enough both known and unknown confounders will be equally distributed.

Eliminates selection bias.

There are several different ways to randomise, 6 are:
- simple (aka unrestricted randomisation)
-Blocked
- Stratified
- Cluster
- Matched pair
-stepped wedge

144
Q

Random allocation: Simple

A

Simple random allocation is when participants are allocated to intervention/control based purely on chance.

AKA unrestricted randomisation

Risk of unequal groups but this is not a problem in larger studies.

145
Q

Random allocation: blocked randomisation

A

Way of achieving equal groups.

First the ratio of intervention to control is chosen (ie 1:1), then a block size (ie 4). Every permutation matching the block size and ratio is listed (ie AABB, ABBA, ABAB… etc). These blocks are randomly strung together to give the order of participant allocation.
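
A small sketch of the procedure described above (1:1 ratio, block size 4); the function is illustrative only:

```python
import itertools
import random

def blocked_allocation(n_participants, block=("A", "A", "B", "B")):
    """List every distinct ordering of the block, then string randomly chosen blocks together."""
    orderings = sorted(set(itertools.permutations(block)))   # AABB, ABAB, ABBA, ...
    sequence = []
    while len(sequence) < n_participants:
        sequence.extend(random.choice(orderings))
    return sequence[:n_participants]

print(blocked_allocation(12))   # e.g. ['A', 'B', 'B', 'A', 'B', 'A', 'A', 'B', ...]
```

Because every block contains equal numbers of A and B, the two groups stay balanced after every complete block.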

146
Q

Random allocation: stratified

A

Participants are first stratified based on a characteristic. Participants are then randomised within strata.

Used in smaller trials to ensure confounders are equally distributed

147
Q

Random allocation: clusters

A

Groups of people, rather than individuals, are randomised

148
Q

random allocation; Matched pairs

A

Participants are first matched based on baseline data (as many variables as possible are matched). One of each pair is then randomly allocated to treatment and the other control.

149
Q

Random allocation: stepped wedge

A

Participants are divided into groups. Intervention is then progressively introduced to groups at random until all groups are receiving the intervention.

Used when there is widespread belief the intervention will be beneficial so cannot use normal randomisation.

150
Q

Systematic allocation

A

-Allocation is determined in advance. Ie alternate recruits or recruits on certain days.

Prone to selection bias as the allocation is predictable and the recruiter may be tempted to interfere with recruitment to influence who gets which treatment.

There may also be an underlying pattern to participant recruitment (ie sicker patients present on weekends) which may lead to systematic differences in the participants allocated to each group

151
Q

Volunteer allocation

A

Highly unsatisfactory
Based on what treatment recruits volunteer to have
Highly influenced by selection bias

152
Q

Allocation concealment

A

Different to blinding, although blinding is not always possible, allocation concealment always should be.

Means the recruiter does not know which study arm the participant will go into

153
Q

Allocation in intervention studies (3 methods)

A

In intervention studies (unlike observational), the epidemiologist decides who gets treatment and who gets control.

High risk for bias.

In order to reduce allocation bias participants should only be allocated once eligibility has been checked, they have been consented and enrolled.

3 methods of allocation:
1. random
2. systematic
3. volunteer

154
Q

Constructing valid questionnaires (5 considerations)

A
  1. Sample (ensure large enough, complete sampling frame, random sample chosen from the sampling frame with stratification if necessary)
  2. Response rate (use incentives, send advance warning letters, use F2F or telephone rather than post)
  3. Content (does it have content and construct validity? Ensure this by using already existing tools, literature search, expert opinion, qualitative research)
  4. Quality of the questions asked (use a pilot to ensure questions are appropriate, train interviewers and provide supporting documents in postal questionnaires)
  5. Triangulate with other sources of information (ie observational studies)
155
Q

Assessing validity of observational studies (2 broad methods)

A

observational studies involve measuring a phenomenon in its natural setting.

Validity is the extent to which something measures what it intends to measure.

There are 2 broad methods of assessing validity:

  1. concurrent validity: compare to the current gold standard instrument
  2. Predictive validity: how well an instrument predicts what it aims to predict
156
Q

Ways to improve validity in observational studies (4 methods)

A
  1. Sample: ensure appropriate size, technique, representation etc
  2. Reflexivity: consider the impact of the observer on what is being observed
  3. Recording: ensure records are comprehensive and systematic
  4. Cross checking: cross check observational data by means of triangulation, repeat observations, recorded observations (ie video)
157
Q

Prognostic studies

A

These are usually cohort studies. Survival analytical techniques are used to analyse time to an event in a group of people who already have a disease. This might be death or remission.

Disease registers can also be used to assess prognosis

158
Q

4 principles of ethics (whose are they and what year)

A

1979
Beauchamp and Childress

Justice
Autonomy
Non-maleficence
Beneficence

159
Q

Declaration of Helsinki (year, who, what, 4 principles)

A

1964
World Medical Association
Regarded as the foundation of research ethics

HAIRS:
Helsinki
Adhere to approved protocols
Informed consent
Reduce risk
Safeguard research subjects

160
Q

Informed consent in research ( 4 criteria that must be met to be informed)

A
  1. Competence
  2. Understand (risks, benefits etc)
  3. voluntary
    4. Written documentation (for research purposes)
161
Q

Ethics committees England

A

Any research in the NHS in England needs regional ethics committee approval.

In England the National Research Ethics Service provides training and guidance on research ethics

162
Q

What is required to use identifiable information without consent in research? ( 4 criteria)

A
  1. demonstrate research importance
  2. benefit to society
  3. minimal risk to participants
  4. maintain confidentiality
163
Q

One-sided or two-sided P values

A

As a rule P values should always be 2 sided, the dependent variable could go in either direction.

One sided P values are only used when there is strong prior opinion that the change could only occur in one direction. Ie if comparing lumpectomy and radical mastectomy there would be a strong prior belief that radical mastectomy would be at least as good as lumpectomy at cancer removal.

164
Q

Choice of outcome measure: correlation study

A

Correlation coefficient

165
Q

Choice of outcome measure: Case control study

A

Odds ratio

odds of exposure in cases (a/c) / odds of exposure in non-cases (b/d)

note the confidence interval will not be symmetrical around odds ratios

166
Q

Choice of outcome measure: Cohort study ( 3 options)

A

1.Risk ratio (relative risk)
(a/(a+b)) / (c/(c+d))

2.Rate ratio
-Incidence of disease in exposed/ incidence of disease in unexposed
-where incidence= number of cases/ number of person-years

3.Standardised mortality ratio = observed deaths/ expected deaths
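A minimal Python sketch of the three cohort measures; all counts, person-years and expected deaths are assumptions for illustration only (a = exposed cases, b = exposed non-cases, c = unexposed cases, d = unexposed non-cases):

# Hypothetical cohort data (illustrative numbers only)
a, b = 30, 970      # exposed:   cases, non-cases
c, d = 10, 990      # unexposed: cases, non-cases

risk_ratio = (a / (a + b)) / (c / (c + d))      # relative risk

# Rate ratio uses person-years at risk rather than people
py_exposed, py_unexposed = 4800.0, 4950.0       # assumed person-years
rate_ratio = (a / py_exposed) / (c / py_unexposed)

# Standardised mortality ratio (observed vs expected deaths)
observed_deaths, expected_deaths = 30, 22.5     # assumed values
smr = observed_deaths / expected_deaths

print(f"risk ratio = {risk_ratio:.2f}, rate ratio = {rate_ratio:.2f}, SMR = {smr:.2f}")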

167
Q

Choice of outcome measure: Intervention study

A
  1. Risk ratio (relative risk)
    - (a/(a+b)) / (c/(c+d))
  2. Rate ratio
    - incidence rate in the exposed / incidence rate in the unexposed
    - where incidence rate = number of cases / number of person-years
  3. Attributable risk
    - the difference between the risk of the disease in the exposed and the unexposed
    - it is the excess risk in the exposed that is attributable to the exposure
    - risk in exposed - risk in unexposed
  4. Population attributable risk
    - the excess risk in the whole population that is attributable to the exposure
    - risk in the population - risk in the unexposed (see the worked sketch below)
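A minimal Python sketch of attributable risk and population attributable risk; the 2x2 counts are assumptions for illustration only:

# Hypothetical intervention/cohort data (illustrative numbers only)
a, b = 30, 970      # exposed:   cases, non-cases
c, d = 10, 990      # unexposed: cases, non-cases

risk_exposed    = a / (a + b)
risk_unexposed  = c / (c + d)
risk_population = (a + c) / (a + b + c + d)     # risk in the whole population

attributable_risk = risk_exposed - risk_unexposed            # excess risk in the exposed
population_attributable_risk = risk_population - risk_unexposed

print(f"AR = {attributable_risk:.3f}, PAR = {population_attributable_risk:.3f}")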
168
Q

Choice of outcome measure: life course analysis

A
  1. Survival probability
    - the cumulative probability of surviving to a given time point
  2. Proportional hazards
    - a statistical method for comparing survival in different groups
    - assumes the ratio of hazards between the groups is constant over time (even though the underlying hazards will change), so the logged cumulative hazard curves are parallel (see the sketch below)
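A minimal Python sketch of the proportional hazards idea, assuming constant (exponential) hazards and an assumed hazard ratio of 2; the gap between the logged cumulative hazard curves stays fixed at log(2) at every time point, ie the curves are parallel:

import math

# Assumed illustration: two groups with constant hazards h0 and h1 = HR * h0
h0 = 0.10           # baseline hazard per year (assumed value)
hazard_ratio = 2.0  # assumed constant hazard ratio between groups
h1 = hazard_ratio * h0

for t in [1, 2, 5, 10]:
    H0 = h0 * t                        # cumulative hazard, group 0
    H1 = h1 * t                        # cumulative hazard, group 1
    S0 = math.exp(-H0)                 # survival probability, group 0
    S1 = math.exp(-H1)                 # survival probability, group 1
    gap = math.log(H1) - math.log(H0)  # constant = log(hazard_ratio)
    print(f"t={t:>2}  S0={S0:.3f}  S1={S1:.3f}  log cumulative hazard gap={gap:.3f}")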
169
Q

Epidemic definition

A

Cases of a disease exceed the number of cases normally expected for that disease, at that time, in that place

170
Q

Reproduction numbers (what are the 2 types)

A
  1. Basic reproduction number
  2. Effective reproduction number
171
Q

Basic reproduction number (R0)

A

The average number of secondary cases produced when one case of an infection is introduced into a population where everyone is susceptible.

This number gives an idea of the infectiousness of an organism independent of how many immune people there are in the population.

An infection will only take hold if R0 is >1.

172
Q

Effective reproduction number (R)

A

This is the average number of secondary cases per primary case observed in a population.

R = 1 (disease is endemic)
R > 1 (at the beginning of an epidemic)
R < 1 (needed in order to control an infectious disease; see the sketch below for how R relates to R0)
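Under the common simplifying assumption of homogeneous mixing (an assumption not stated on the card), R equals R0 multiplied by the fraction of the population that is still susceptible. A minimal Python sketch with illustrative values:

r0 = 3.0                         # assumed basic reproduction number

for immune_fraction in [0.0, 0.5, 0.7]:
    susceptible_fraction = 1.0 - immune_fraction
    r_effective = r0 * susceptible_fraction
    status = "epidemic can grow" if r_effective > 1 else "infection declines"
    print(f"immune={immune_fraction:.0%}  R={r_effective:.2f}  ({status})")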

173
Q

Secondary attack rate

A

The risk of secondary cases among all those people exposed to a primary case.

It is often difficult to determine everyone exposed, so in practice household secondary attack rates are calculated:

number of secondary infections in the household / number of household contacts (excluding the primary case) (see the worked sketch below)

The secondary attack rate is affected by control and hygiene measures as well as by the infectiousness of the agent
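A minimal Python sketch of a household secondary attack rate; the household data are made up for illustration (each pair is the number of secondary cases and the number of household contacts, excluding the primary case):

# Hypothetical households: (secondary cases, household contacts excluding the primary case)
households = [(2, 4), (0, 3), (1, 2), (3, 5)]

secondary_cases = sum(cases for cases, _ in households)
contacts = sum(n for _, n in households)

secondary_attack_rate = secondary_cases / contacts
print(f"household secondary attack rate = {secondary_attack_rate:.1%}")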

174
Q

Critical population size

A

This is the minimum number of people required to sustain an infectious agent indefinitely so that it becomes endemic, ie a population large enough that there is a sufficient probability of the infectious agent coming into contact with a susceptible host.

The value will differ depending on population structure and control measures.

175
Q

Epidemic threshold

A

The fraction of a population who must be susceptible for an epidemic to occur. Below this value an epidemic outbreak will not occur.

176
Q

Herd immunity threshold

A

The proportion of a population who would need to be immune for the incidence of an infectious agent to decrease.

Herd immunity threshold = (R0 - 1) / R0 (worked example below)
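A minimal worked example of the formula above in Python; the R0 values are illustrative assumptions, not measured figures:

def herd_immunity_threshold(r0: float) -> float:
    """Proportion of the population that must be immune: (R0 - 1) / R0."""
    return (r0 - 1) / r0

for r0 in [1.5, 3.0, 12.0]:          # assumed R0 values for illustration
    print(f"R0 = {r0:>4}: herd immunity threshold = {herd_immunity_threshold(r0):.0%}")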

177
Q

generation numbers: index case

A

The first case recognised in an outbreak (often not the same as the primary case)

178
Q

Generation numbers: primary case

A

The original case in an outbreak (often may only be recognised in retrospect)

179
Q

Generation numbers: secondary case

A

Acquired infection from the primary case

180
Q

Generation numbers: Tertiary case

A

Acquired infection from a secondary case

181
Q

Serial Interval/ Generation interval

A

The period of time between the onset of signs and symptoms in successive cases (ie primary and secondary cases)

Is affected by incubation period, latent period and duration of infectiousness

182
Q

Incubation period

A

Time period between exposure and onset of symptoms

183
Q

Latent period

A

Time period between exposure and onset of infectivity

184
Q

Epidemic curves (uses x4)

A
  • plotting where you are in an epidemic and predicting its future course
  • the pattern of the curve gives clues as to the possible source
  • identifying outliers
  • estimating the probable time of exposure
185
Q

Epidemic curve: steep rise and fall in case numbers

A

Single source (or point source epidemic)

186
Q

Epidemic curve: sharp rise in number of cases

A

Common source

187
Q

Epidemic curve patterns: plateau

A

Continuous common source epidemic (prolonged exposure period)

188
Q

Epidemic curve: series of progressively taller peaks one incubation period apart

A

Person to person spread (propagated epidemic)

189
Q

Epidemic curve: early outlier

A

Could be:
- source of epidemic
- Unrelated case
- Person exposed earlier than most other infected people

190
Q

Epidemic curve: late outlier

A

Could be:
- unrelated to the epidemic
- a person with a long incubation period
- a person exposed later than most others
- a secondary case

191
Q

surveillance and Exception reporting (what is it)

A

Surveillance involves the routine systematic collection and analysis of data and communication of the results.

Exception reporting/early warning systems exist to alert people to deviations from the norm as quickly as possible.

192
Q

Significant cluster

A

A greater number of cases in space and/or time than would be expected by chance.

Remember, if comparing places of different population density you would need to calculate an attack rate.

193
Q

Advantages of combining studies in meta-analysis

A
  • more power and precision
  • cheaper than conducting new studies
  • greater generalisability (results from several studies might be relevant to a wider population)
194
Q

Meta-analysis: Fixed effects model

A
  • Assumes that there is one true population effect size and any difference in effect size seen between studies is due to sampling error
  • can only be used if there is no evidence of heterogeneity (observed effects are no more different from each other than would be expected by chance)
  • the pooled estimate is calculated using a weighted average (see the sketch below)
  • larger studies have much greater weighting than smaller ones in fixed effects models
  • fixed effects models give narrower confidence intervals and smaller P values than random effects models
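The card only says 'weighted average'; one common concrete choice (an assumption here, not stated on the card) is inverse-variance weighting. A minimal Python sketch with made-up study effect sizes and standard errors (eg log odds ratios):

import math

# Hypothetical study results: (effect size, standard error), eg log odds ratios
studies = [(0.30, 0.10), (0.25, 0.20), (0.40, 0.15)]

# Inverse-variance weights: larger (more precise) studies get much more weight
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)

se_pooled = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(f"fixed-effect pooled estimate = {pooled:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")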
195
Q

Meta-analysis: random effects models

A
  • Assumes the effect size could vary from study to study due to heterogeneity between studies
    - ie the effect size might be smaller or larger in a study where the sample population is older, due to different population demographics
    - if it were possible to perform an infinite number of studies, the effect sizes of all of them would follow a normal distribution
  • the effect sizes in the performed studies are assumed to be a random sample of all those possible effect sizes, hence the name random effects model
  • there are 2 sources of variance (within studies and between studies)
  • whilst larger studies still contribute a larger weighting to the summary effect size, smaller studies have a larger impact than in fixed effects models
  • used if there is heterogeneity between studies (but if there is too much heterogeneity they shouldn't really be combined!) (see the sketch below)
  • they give wider confidence intervals and larger P values than fixed effects models
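One common way to implement this (an assumption here; other estimators exist) is the DerSimonian-Laird method, which estimates a between-study variance tau^2 and adds it to each study's variance before re-weighting. A minimal Python sketch with made-up, deliberately heterogeneous study results:

import math

# Hypothetical study results: (effect size, standard error)
studies = [(0.10, 0.10), (0.60, 0.15), (0.35, 0.20)]

w = [1 / se**2 for _, se in studies]                        # fixed-effect weights
y_fixed = sum(wi * y for (y, _), wi in zip(studies, w)) / sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = sum(wi * (y - y_fixed) ** 2 for (y, _), wi in zip(studies, w))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights: between-study variance is added to each study's variance
w_star = [1 / (se**2 + tau2) for _, se in studies]
y_random = sum(wi * y for (y, _), wi in zip(studies, w_star)) / sum(w_star)
se_random = math.sqrt(1 / sum(w_star))

print(f"tau^2 = {tau2:.4f}, random-effects estimate = {y_random:.3f} (SE {se_random:.3f})")

With these illustrative numbers tau^2 is positive, so the pooled estimate gets a wider confidence interval and the smaller studies carry relatively more weight than they would under the fixed effects model, matching the points on the card.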
196
Q

Bias in meta-analysis: where does it arise from

A
  • poor quality studies
  • publication bias (detected using funnel plot)
  • inadequate confounding adjustment (observational studies)
197
Q

Thesaurus terms (what are they and advantages and disadvantages)

A
  • MeSH terms are a type of thesaurus term
  • A list of standardised headings used by database indexers to describe what an article is about
    -used to make finding citations easier
  • each source within a database is assigned MeSH terms
  • searching MeSH terms allows them to be searched as a major concept (the search will only return records for which subject heading is a major point of the article) or to ‘explode’ the search (expands the search so that search also retrieves any narrower connected terms)

Positives: automatically covers all synonyms, American/British spellings, and plural/singular forms

Disadvantages: time delay between publication and indexing so thesaurus may not keep pace with new areas of research

198
Q

2 main search term types when searching databases

A
  • keywords
  • thesaurus terms

(combination can be used)

199
Q

Limitations of electronic bibliographical databases

A
  • no database contains all publications
  • bias to English language papers
  • time delay between publication and presence in database
  • covers a limited number of years
200
Q

What is grey literature?

A

Grey literature is written material published by a body whose primary activity is not publishing

201
Q

7 examples of grey literature

A
  • scientific/ technical reports
  • theses
  • internal NGO reports
  • governmental publications
  • fact sheets
  • conference papers
  • unpublished reports
202
Q

Advantages and disadvantages of grey literature

A

Advantages:
- easier to find with internet
- provides less orthodox views
- provides perspective to published material

Disadvantages:
-can be hard to access, especially older paper only reports
- No quality control so reader has to assess quality and credibility

203
Q

methods for detecting/reducing publication bias (4)

A
  • funnel plots at meta-analysis
  • register of trials prior to beginning with primary outcome outlined
  • discouragement of trials with insufficient power to detect an effect size
  • publication of study protocols
204
Q

what is evidence based medicine and who first used the term

A

The explicit use of current best evidence in order to inform the treatment plan for an individual patient

First used by Gordon Guyatt in 1990 at McMaster medical school in Canada

205
Q

Advantages (5) and disadvantages (8) of evidence based medicine

A

Advantages:
- explicit use of best evidence
- limits patients receiving harmful, ineffective or suboptimal tests/treatments
- reduces the importance of clinical opinion
- can support standardised healthcare
-can enable cost effective healthcare

Disadvantages:
- limited by the available research
- limited by publication bias
- limited by retrieval bias (limitations of databases)
- can lead to loss of patient voice/ important clinical nuance in decision making
- reduces importance of clinical experience
- lack of evidence does not equal lack of benefit
- often a lack of evidence for non-drug treatments
- evidence may not be generalisable to the individual patient in front of you

206
Q

The enlightenment model

A

A model of the impact of research on policy. In the enlightenment model, research impact involves subtle, incremental and diffuse adjustments over a long period of time.

207
Q

hierarchy of evidence (old model and new systems)

A

The traditional hierarchy was:
1. systematic reviews/meta-analyses
2. RCTs
3. cohort studies
4. case-control studies
5. cross-sectional surveys
6. case reports/series

It is now recognised that this is too simple, so different systems exist for ranking studies:

  1. levels of evidence schemes
  2. Grading of Recommendations, Assessment, Development and Evaluation (GRADE)
208
Q

GRADE reasons for upgrading study (4)

A
  1. large effect size
  2. dose response evident
  3. If all plausible residual confounding would reduce a demonstrated effect
  4. If all plausible residual confounding would suggest a spurious effect when no effect was observed.
209
Q

What is GRADE and how is it used

A
  • A system for ranking the quality of evidence in systematic reviews and other evidence syntheses.

Assesses the quality of evidence in 4 categories:
1. High
2. Moderate
3. Low
4. Very low

Evidence from RCTs starts as high quality whereas evidence from observational data starts as low quality.

Depending on a range of factors studies can be downgraded or upgraded

210
Q

GRADE reasons for downgrading study (5)

A
  1. Publication bias
  2. Indirectness (population studied is different to those for whom the recommendation applies)
  3. Inconsistency (several studies showing inconsistent effects)
  4. Imprecision (if the clinical decision would differ if the true effect size was at the bottom of the CI rather than the top)
  5. Risk of bias
211
Q

Cochrane collaboration

A

- Established in 1993
- Named after Archie Cochrane (a notable contributor to the development of epidemiology as a science)
- International, non-profit, independent organisation
- Produces and disseminates systematic reviews and promotes the search for evidence
- Systematic reviews of RCTs are published as part of the Cochrane Database of Systematic Reviews
- the Cochrane Library also includes the Health Technology Assessment database

212
Q

Family studies

A
  • Used to consider whether a given disease has a heritable component
  • Uses the familial relative risk (FRR): are family members of an affected individual at greater risk than the general population?

ie sibling FRR = risk of disease in siblings / risk of disease in gen. pop.

A high FRR is necessary but not sufficient to demonstrate a heritable component to a disorder

213
Q

twin studies (what are they, what is assessed, what are their weaknesses)

A
  • used to consider the relative contribution of genes and environment to the development of a disease
    - uses monozygotic (identical) and dizygotic (non-identical) twins
    - concordance is measured in identical twins and compared to non-identical twins

There are 2 types of concordance:
- pairwise: the percentage of twin pairs in which both twins are affected, among pairs where at least one twin is affected
- probandwise: the proportion of affected twins (probands) whose co-twin also becomes affected with the disease during the course of a study

Limitations:
- identical genes does not mean identical gene expression
- identical twins do not necessarily have the same intrauterine environment (ie twin-to-twin transfusion)
- twins may differ from the gen. pop. so results may not be externally valid and generalisable

214
Q

Linkage studies (what are they and what are they used for)

A
  • Used to identify broad genomic regions that might contain a disease-causing gene
  • based on the premise that if a disease 'runs in the family' then genetic markers that 'run in the family' in the same pattern are likely to be close to the disease gene in the genome
  • genes that are close together are more likely to be inherited together, as recombination events between them are less likely
215
Q

genetic epidemiology: define linked

A

2 genetic loci are linked if they are transmitted together from parent to offspring more often than would be expected under independent inheritance

216
Q

Genetic epidemiology: define linkage disequilibrium

A

Two genetic loci are in linkage disequilibrium, if across the whole population, they are found together on the same haplotype (group of genes inherited from a single parent) more often than expected

217
Q

Genetic epidemiology: How do you measure linkage

A

LOD score (Logarithm of the odds of linkage)

A high positive score is evidence for linkage

218
Q

genetic epidemiology: limitations of linkage studies

A
  • only identify broad genomic regions where a gene may be located
  • generally only rare, highly penetrant, recessive illnesses show strong linkage patterns
219
Q

Genetic epidemiology: Association studies (what are they and what are they used for)

A
  • measure the relative frequency with which a particular polymorphism occurs together with the disease of interest in the population (ie the extent to which the polymorphism is associated with the disease)
    - normally conducted using a case-control study
  • if the odds of having a particular polymorphism are higher in cases than in controls then the allele may either have a causal role or be correlated with the causal allele
220
Q

genetic epidemiology: the difference between linkage and association studies

A
  • association studies typically follow linkage studies
  1. linkage study identifies broad genomic region for disease causing gene
  2. association study allows specific genes in this region to be investigated
221
Q

genetic epidemiology: limitations of association studies

A

Many different mutations in a gene can lead to disease. Therefore the effect of any single mutation may be attenuated by the presence of other genes leading to no association being found.

222
Q

what is Epidemiology

A

Epidemiology is the study of the determinants, distribution and frequency of disease (who gets the disease and why).

223
Q

what is descriptive epidemiology

A

Examines the distribution of disease in a population and observes
the basic features of its distribution

224
Q

what is analytic epidemiology

A

Investigating a hypothesis about the cause of disease by studying
how exposures relate to disease.

Requires information from descriptive epidemiology

uses case-control and cohort studies (RCTs are experimental epidemiology)

225
Q

Dimensions of descriptive epidemiology: person

A

Age, Sex, Ethnic group, Genetic predisposition, Concurrent disease, diet, physical
activity, smoking status, risk taking behaviour, SES, education, occupation

226
Q

dimensions of descriptive epidemiology: place

A

Presence of agents or vectors, Climate, Geology, Population density, Economic
development, Nutritional practices, Medical Practices.

227
Q

dimensions of descriptive epidemiology: time

A

Calendar time, Time since an event, Physiologic cycles, Age (time since birth),
Seasonality, Temporal trends

228
Q

three characteristics studied in analytic epidemiology

A

epidemiologic triangle:
- host
- agent
- environment

Epidemics occur when host, agent and environment are not in balance, ie due to a new agent, a change in an existing agent, a change in the number of susceptibles, or environmental changes. Think COVID-19.

229
Q

Experimental epidemiology

A
  • RCTs
  • Non randomized experimental studies
230
Q

single blinded RCT

A

participant doesn’t know what they are getting

231
Q

double blind RCT

A

participant and administrator do not know what they are getting

232
Q

triple blind rct

A

participant, administrator and analyser do not know who received what

233
Q

Point prevalence

A

Proportion of population with an illness at a point in time

As a proportion it is a number between 0–>1. If you multiply by 100 you get a percentage

Frequently quoted in epidemiology as a number per population

234
Q

data requirements for indirect standardisation

A
  • size of the study population in each age group
  • observed total number of events in the study population
  • age-specific event rates in a reference (standard) population (see the worked sketch below)
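A minimal Python sketch of indirect standardisation using exactly those three ingredients; all population sizes, rates and counts below are made-up illustrative values:

# Hypothetical inputs (illustrative values only)
study_population = {"0-44": 50_000, "45-64": 30_000, "65+": 20_000}   # size per age group
reference_rates  = {"0-44": 0.001,  "45-64": 0.005,  "65+": 0.030}    # events per person per year
observed_events  = 900                                                # total observed in study population

# Expected events = sum over age groups of (study population x reference rate)
expected_events = sum(study_population[g] * reference_rates[g] for g in study_population)

smr = observed_events / expected_events     # standardised mortality (or morbidity) ratio
print(f"expected = {expected_events:.0f}, SMR = {smr:.2f}")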
235
Q

what does relative risk mean? interpret a relative risk of 2

A

ALWAYS TALK ABOUT ‘TIMES MORE LIKELY’ (don’t use percentage change)

RR=2 means that disease occurrence is 2 times more likely
in the exposed group than in the unexposed group.

or it is half as likely in the unexposed group as in the exposed group

236
Q

what does relative risk mean? interpret a relative risk of 1

A

ALWAYS TALK ABOUT ‘TIMES MORE LIKELY’ (don’t use percentage change)

means no effect of exposure

237
Q

Interpret a RR risk of 4

A

ALWAYS TALK ABOUT ‘TIMES MORE LIKELY’ (don’t use percentage change)

4 times more likely

238
Q

interpret a RR of 0.4

A

ALWAYS TALK ABOUT ‘TIMES MORE LIKELY’ (don’t use percentage change)

if the RR is <1 the exposure has reduced the incidence

in this instance you are 0.4 times more likely to get the disease (but this sounds ridiculous)

do 1/RR and change words to LESS likely

1/0.4 = 2.5

You are 2.5 times LESS likely to get the disease

239
Q

define odds

A

the odds of an outcome is the ratio of the number of times the outcome occurs to the
number of times it does not.

240
Q

interpreting odds ratio

A

always say the ODDS are (x) times greater/less

DO NOT SAY MORE LIKELY (this is only for relative risk)

241
Q

interpret an OR of 3.2

A

The odds of the event are 3.2 times greater

242
Q

interpret an OR of 0.32

A

1/0.32= 3.125

the odds are 3.13 times less

243
Q

when does the odds ratio approximate to the relative risk

A

when the event is rare

244
Q

When is the odds ratio for a disease the same as an odds ratio for an exposure?

A

For a case control study the odds ratio for a disease is ALWAYS the same as the odds ratio for an exposure