Module 5: Critical Thinking Flashcards

(126 cards)

1
Q

What is internal validity

A

The extent to which our study estimate is an accurate estimate of the actual value in the source population. I.e. are there other explanations for the study findings, other than their being correct?

2
Q

What are the three factors to consider in internal validity

A

Chance, bias, confounding

3
Q

What is external validity

A

The extent to which the study findings are applicable to a broader or different population (also known as generalisability). A judgement depending on what is being studied and who it is being applied to.

4
Q

What is sampling error

A

If you continuously sampled from the same source population, most of the time you would get a sample with a similar composition to the population you sampled from, but some samples would be quite different just due to chance.

5
Q

How can sampling error be mitigated (can’t eliminate but can reduce)

A

Increase sample size: less sampling variability, increases likelihood of getting a representative sample and precision of parameter estimate

6
Q

What is the statistical definition of a 95% confidence interval

A

If you repeated a study 100 times with a random sample each time and calculated 100 confidence intervals, then on average in 95 of the 100 studies the true population parameter would lie within that study's 95% confidence interval.

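The repeated-sampling definition above can be checked with a quick simulation. This is a sketch with made-up numbers (the population mean, SD, sample size, and repeat count are all arbitrary choices):

```python
# Hypothetical simulation: draw many samples from a known population,
# build a 95% CI from each, and count how often the CI captures the
# true mean. Coverage should come out close to 95%.
import random
import statistics

random.seed(1)
TRUE_MEAN, SD, N, REPEATS = 50.0, 10.0, 100, 2000

covered = 0
for _ in range(REPEATS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5  # standard error of the mean
    lower, upper = mean - 1.96 * se, mean + 1.96 * se
    if lower <= TRUE_MEAN <= upper:
        covered += 1

coverage = covered / REPEATS
print(f"coverage: {coverage:.3f}")  # close to 0.95
```

Any single interval either contains the parameter or it doesn't; the 95% refers to the long-run behaviour of the procedure, which is exactly what the loop demonstrates.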
7
Q

What is the interpretation we use of the 95% confidence interval (CI can be applied to any numerical measure)

A

We are 95% confident that the true population value lies between the limits of the confidence interval

8
Q

What effect does increasing the sample size have on the confidence interval

A

Makes it narrower

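The narrowing is predictable: for a mean, the 95% CI half-width is 1.96 × SD / √n, so quadrupling the sample size halves the interval. A small sketch (the SD of 10 is an arbitrary assumption):

```python
# CI half-width for a mean shrinks in proportion to 1/sqrt(n):
# each 4x increase in sample size halves the interval width.
import math

SD = 10.0  # assumed population standard deviation
widths = {n: 1.96 * SD / math.sqrt(n) for n in (25, 100, 400)}
for n, w in widths.items():
    print(n, round(w, 2))  # 25 -> 3.92, 100 -> 1.96, 400 -> 0.98
```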
9
Q

When is a study clinically important

A

When the confidence interval lies entirely beyond the threshold of clinical importance (a different value to the null), e.g. entirely below it for a measure where lower values are beneficial.

10
Q

What are p values

A

The probability of getting the study estimate (or one further from the null) when there is really no association, just because of sampling error. If this probability is very low, it is unlikely that the estimate is due to sampling error alone. Note: the p value is the probability of the data given no true association, not the probability that no association exists.

11
Q

What is the null hypothesis

A

That there really is no association in the population (parameter = null)

12
Q

What is the alternative hypothesis

A

That there really is an association in the population (parameter does not equal null value)

13
Q

What is the threshold for determining how unlikely is acceptable for a p value

A

<0.05

14
Q

How is a p value of <0.05 interpreted

A

Reject null hypothesis, accept alternative hypothesis, association is statistically significant

15
Q

How is a p value of >0.05 interpreted

A

Fail to reject null hypothesis, reject alternative hypothesis, association is not statistically significant

16
Q

What is a type 1 error

A

Finding an association when there truly is no association

17
Q

What is a type 2 error

A

Finding no association when there truly is an association, i.e. incorrectly failing to reject the null hypothesis when it should have been rejected.

18
Q

Why do type 2 errors occur

A

Typically due to having too few people in the study (bigger sample size = more likely to get small p)

19
Q

How can statisticians work out how to minimise type 2 errors

A

Calculate power to work out how many study participants are needed to minimise chance of a type 2 error

20
Q

If the confidence interval includes the null value what is the p value

A

p>0.05, not statistically significant

21
Q

If the confidence interval does not include the null value what is the p value

A

p<0.05, statistically significant

22
Q

Why are p values problematic

A

Arbitrary threshold, only about the null hypothesis, nothing about importance

23
Q

At the 5% threshold when will a statistically significant association be found when there really isn’t one

A

About one time in twenty (wrong 5% of the time)

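The "wrong one time in twenty" behaviour can itself be simulated. This is an illustrative sketch (group size, repeat count, and the simple two-sample z-test are all arbitrary choices): both groups are drawn from the same population, so every rejection is a type 1 error.

```python
# Hypothetical simulation: when the null hypothesis is true (both groups
# drawn from the same population), a test at the 5% threshold still
# declares "significance" about 5% of the time.
import math
import random
import statistics

random.seed(2)
N, REPEATS = 50, 2000

false_positives = 0
for _ in range(REPEATS):
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]  # same population: no real difference
    se = math.sqrt(statistics.variance(a) / N + statistics.variance(b) / N)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    if abs(z) > 1.96:  # two-sided p < 0.05
        false_positives += 1

print(false_positives / REPEATS)  # close to 0.05
```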
24
Q

What is the problem with p values regarding importance

A

If you include enough people in your study you will find a statistically significant difference, even if people were randomly assigned. Statistical significance is not clinical significance: p values say nothing about whether the results are useful, valid, or correct. Absence of a statistically significant association is not evidence of absence of a real association.

25
What is bias
Any systematic error in a study that results in an incorrect estimate of the association between exposure and risk of disease
26
What is systematic error
Error due to things other than sampling (opposite of random error)
27
When can selection and information bias be controlled
Only during the design and data collection phases of a study. So investigators must identify potential sources of bias and identify ways to minimise these
28
What is selection bias
When there is a systematic difference between the people included in a study and those who are not, or when study and comparison groups are selected inappropriately or using different criteria
29
What can affect who is part of a study (selection bias)
How people are recruited, whether people agree to participate, whether everyone remains in the study
30
How can loss to follow up be reduced
Alternative contact details obtained at start of study, maintaining regular contact throughout study, making several attempts to contact people
31
What must be considered regarding selection bias in cross sectional studies
Who entered the study, is the sample representative of the source population, what is the response rate
32
What must be considered regarding selection bias in case control studies
Participants are selected based on their outcome status. If this is in some way dependent on their exposure status, then bias can occur. Must ensure high participation, clearly define population of interest, reliable way of ascertaining all cases or a representative sample of cases
33
How can bias in the selection of controls be minimised
Ensure controls are from the same defined population as the cases over the same time period, same inclusion and exclusion criteria for cases and controls, ensure high participation
34
What would happen to the odds ratio if cases who are exposed are more likely to be identified or to participate
Overestimation of harmful effect of exposure, OR is biased away from the null, numerically upward
35
What happens to the MoA for a harmful factor when its effect is underestimated
Biased numerically downward, toward the null
36
What happens to the MoA for a protective factor when its effect is underestimated
Biased numerically upward, toward the null
37
What would happen to the odds ratio if cases who are exposed are less likely to be identified or to participate
Underestimation of harmful effect of exposure, OR is biased toward the null, numerically downward
38
What are the common types of selection bias in cohort studies
Loss to follow up (if related to exposure and outcome this can lead to bias), comparison group selected separately from exposed group can lead to bias
39
What does a loss to follow up in the exposed group result in
Underestimate incidence proportion in exposed group, resulting in underestimated relative risk, RR biased numerically downward toward the null
40
What is a source of selection bias in RCTs
Loss to follow up
41
When is it important to consider systematic error
When critically appraising scientific literature, in evidence based practice, considering studies reported in the media, undertaking research
42
What is information bias
Observation or information bias results from systematic differences in the way data on exposure or outcome are obtained from the various study groups
43
How is data collected in a study
By participants or collected/measured by someone else
44
How can measurement error occur
Participants provide inaccurate responses, data is collected incorrectly/inaccurately
45
What is measurement error
Can be random (lack of precision) or systematic (lack of accuracy).
46
What effect can measurement error have in descriptive and analytic studies
Descriptive: inaccurate measurement of prevalence. Analytic: misclassification of exposure or outcome.
47
What is non-differential misclassification
Misclassification that is not different between the study groups: measurement error and misclassification occur equally in all groups being compared. Normally biases the RR toward the null.
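The bias toward the null can be seen in a toy 2x2 example (all numbers are made up for illustration):

```python
# Toy 2x2 example: non-differentially misclassify 20% of exposed people
# as unexposed, at the same rate in those with and without the outcome,
# and watch the RR move from its true value toward the null (1.0).
def rr(a, b, c, d):
    """Relative risk from a 2x2 table: a,b = exposed with/without
    outcome; c,d = unexposed with/without outcome."""
    return (a / (a + b)) / (c / (c + d))

true_rr = rr(40, 160, 20, 180)  # true RR = 2.0

# shift 20% of each exposed cell into the corresponding unexposed cell
a, b = 40 * 0.8, 160 * 0.8
c, d = 20 + 40 * 0.2, 180 + 160 * 0.2
observed_rr = rr(a, b, c, d)

print(true_rr, round(observed_rr, 2))  # observed RR sits between 1.0 and 2.0
```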
48
What is differential misclassification
Different between study groups. Estimate can move toward or away from the null
49
What are some examples of differential misclassification
Cross sectional: people with outcome might report exposure differently to those without outcome. Case control: cases might more accurately recall past exposures compared to controls, interviewers may probe cases more (or exposed in cohort studies)
50
What is a source of information bias in case control studies
Recall bias
51
What is recall bias
Systematic error due to differences in accuracy or completeness of recall to memory of past events or experiences
52
How can recall bias be minimised
Objective measures (instead of self reported subjective measures), validate self reported measures with other information, memory aids
53
What is interviewer/observer bias
E.g. the interviewer/observer knows exposure status and examines the outcome differently for those in the exposed group compared to the comparison group
54
How can interviewer/observer bias be minimised
Clearly defined study protocol and measures, standardised structured questionnaire and prompts, training of interviewers, blinding
55
How could bias occur in RCTs
If knowledge of the treatment/exposure category influences the assessment of the outcome (i.e ensure blinding), if measurements are undertaken differently for different treatment groups
56
How can information bias be minimised
Validated survey instruments, objective measures. Use standardised equipment, calibrated equipment, ensure blinding, structured interviews, trained interviewers, well defined exposures/outcomes, etc
57
What is publication bias
Positive, new findings more likely to be published and available. "The result of the tendency of authors to submit, organisations to encourage, reviewers to approve and editors to publish articles containing positive findings"
58
What is confounding
A mixing or muddling of effects when the relationship we are interested in is confused by the effect of something else - the confounder
59
What does the saying risk factors party together mean
People with one risk factor tend to have multiple others
60
What are the three properties of a potential confounder
Independently associated with the outcome, independently associated with the exposure, not on the causal pathway
61
What does independently associated with the outcome mean
A risk/protective factor for the outcome by itself
62
What does independently associated with the exposure mean
Different proportions of people with potential confounder across exposure groups
63
What does not on the causal pathway mean
Not the mechanism by which the exposure affects the risk of the outcome
64
What can confounding do?
Over/under estimation of true association, give appearance of association when there isn't one, change direction of true association (e.g risk factor becomes protective)
65
How can potential confounders be identified?
Collect information on all potential confounders: use the literature to identify known and suspected risk factors for the outcome, and collect information on factors strongly associated with the exposure regardless of whether they are known risk factors. If you don't measure it, it is difficult to do anything about it later. Look for imbalance in potential confounders between groups.
66
How can confounding be controlled in the study design
Randomisation, restriction, matching
67
What is randomisation and when can it be done
Design study to minimise confounding by selection and allocation of participants. Can only be done in RCTs
68
What is restriction
An attempt to make the groups being compared alike with regard to potential confounders. Restrict the sample to one stratum of a potential confounder (e.g. one age group)
69
What is necessary for randomisation
Large sample size, equipoise, intention to treat analysis
70
What are the cons of restriction
Can reduce generalisability, number of potential participants. Potential for residual confounding with imprecisely measured (or broadly defined) confounders, usually only one potential confounder
71
What is matching and when does it occur
Choose people to make the control/comparison group have the same composition as the case/exposed group regarding the potential confounder. Usually in case control studies
72
What are the two types of matching
Individual (each case matched with one or more controls) and frequency (matching at an aggregated level: same proportions between groups)
73
What are the positives of matching
Useful for difficult to measure/complex potential confounders, can improve efficiency of case control studies with small numbers
74
What are the cons of matching
Individual matching can be difficult and limit number of participants. Need special matched analysis for individual matching, otherwise will underestimate measure of association
75
What is the main problem with randomisation, restriction and matching
Can't assess whether truly a confounder.
76
How can confounding be controlled in the study analyses
Stratification, multivariable analysis, standardisation
77
What is stratification
Calculating measure of association for each stratum of potential confounder and comparing them.
78
What is the crude MoA
Overall, not stratum specific
79
If crude MoA does not equal stratum specific MoA,
Confounding is present
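A worked toy example (made-up numbers) of crude vs stratum-specific MoA: within each age stratum the RR is 1.0, but because exposure is concentrated in the older, higher-risk stratum, the crude RR appears raised.

```python
# Toy example: age confounds the exposure-outcome association. Within
# each age stratum the RR is 1.0 (no real effect), but exposure is
# concentrated in the older, higher-risk stratum, so the crude
# (unstratified) RR is misleadingly raised.
def rr(cases_exp, n_exp, cases_unexp, n_unexp):
    """Relative risk: risk in exposed divided by risk in unexposed."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

# young stratum: risk 0.05 in both exposure groups
rr_young = rr(5, 100, 15, 300)
# old stratum: risk 0.20 in both exposure groups
rr_old = rr(60, 300, 20, 100)
# crude: collapse the strata into one table
rr_crude = rr(5 + 60, 100 + 300, 15 + 20, 300 + 100)

print(rr_young, rr_old, round(rr_crude, 2))  # 1.0 1.0 1.86
```

Here the crude MoA (1.86) does not equal the stratum-specific MoAs (both 1.0), which is exactly the comparison the card describes for detecting confounding.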
80
What are the pros of stratification
Easy for small number of potential confounders with limited strata. Can evaluate impact of confounding. Can identify effect modification
81
What are the cons of stratification
Can leave residual confounding, not feasible when dealing with lots of potential confounders with many strata
82
What is multivariable analysis
Statistical method for estimating MoA whilst controlling for multiple potential confounders. Can work in situations where stratification won't. Variety of different techniques recognisable by the term "regression"
83
What is standardisation
E.g. age standardisation: used when the age structures of the populations being compared differ and disease risk varies by age
84
What are the cons of standardisation
Similar issues as stratification with multiple potential confounders/number of strata. Multivariable analysis often more efficient in analytic studies
85
What are the main issues in controlling for confounding in study analyses
Residual confounding, can only control what you've measured (still need to think about potential confounders and what you could do about them)
86
How much change (between crude MoA and stratum specific/adjusted MoA) indicates confounding
If controlling for confounding changes the measure of association by 10% or more
87
What is effect modification
The association between exposure and outcome differs across strata of the effect modifier: an important finding (i.e after adjusting for confounding)
88
What is the difference between confounding and effect modification
Confounding is a third factor distorting the association, effect modification is difference between the exposure and outcome across strata of the effect modifier (an important finding)
89
What is association
Does the exposure increase or decrease occurrence of the outcome. Need to consider internal validity of apparent associations
90
What is a cause
An event, condition or characteristic that plays an essential role in producing an occurrence of the disease
91
What is the causal pie model
Whole pie = sufficient cause for an outcome. Each exposure is a component of the sufficient cause, so each exposure is a component cause. Exposures can be part of more than one sufficient cause
92
What is a component cause
An exposure required to have an outcome (size of component cause in causal pie is not relevant)
93
What is a necessary cause
A component cause which is necessary for the disease to occur (must be part of every sufficient cause). If you eliminate a necessary cause you entirely eliminate the outcome
94
How do we determine causation
1. Is the association internally valid 2. Consider each of the guidelines and then make a judgement based on the totality of evidence (biological plausibility, experimental evidence, specificity, temporal sequence, consistency, dose response relationship, strength of association)
95
What are the guidelines to determining causation (BEST CDS)
Biological plausibility, experimental evidence, specificity, temporal sequence, consistency, dose response relationship, strength of association
96
What is biological plausibility
Is there a plausible mechanism for the association
97
What is experimental evidence
Is there evidence from human RCTs or animal experiments (will only be applicable in RCTs)
98
What is specificity
Is the exposure specifically associated with a particular outcome but not others? If it is this adds weight to causal likelihood
99
What is temporal sequencing
Is there evidence of temporal sequence between exposure and outcome
100
What is consistency
Are the findings consistent with findings from other studies (there can be a number of reasons why they may not be)
101
What is the dose response relationship
Does the risk of the outcome change with increasing or decreasing amounts of the exposure (not all relationships are linear)
102
What is strength of association
The stronger the association the less likely it is to be due to confounding or bias
103
How are the guidelines used to make a judgement determining causality
Consider them all, then make a judgement based on the totality of evidence (is there another explanation for the findings more likely than cause and effect?)
104
What are the two main types of review
Narrative and systematic
105
What is a narrative review
Often broad in scope; sources and search strategy not usually specified; selection potentially biased; appraisal variable; often a qualitative summary; sometimes evidence based. May be heavily influenced by opinion
106
What is a systematic review
Often a focused clinical question, comprehensive sources and explicit search strategy, criterion-based selection, uniformly applied, rigorous critical appraisal, quantitative summary, usually evidence based. Replicable, transparent and systematic
107
Why are systematic reviews done
Collate evidence, synthesise results. Done well, reduce bias that may otherwise be encountered with narrative reviews
108
How are systematic reviews conducted
1. Formulation of a clear question 2. Write protocol for review 3. Search for relevant studies 4. Collect data from studies 5. Assessment of included studies 6. Synthesis of findings 7. Interpretation of results
109
What are the protocol methods for conducting a systematic review
1. Question (PECOT) 2. Relevance (importance) 3. Objectives 4. Search strategy (specific and thorough) 5. Selection criteria 6. Eligibility screen 7. Risk of bias 8. Data extraction 9. Data synthesis (may involve meta-analysis)
110
What is meta-analysis
The results of individual studies are combined to produce an overall statistic. Aims to provide a more precise estimate of the effects of an intervention and reduce uncertainty
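As a sketch of how the combining works, here is fixed-effect inverse-variance pooling on the log scale. The study estimates and standard errors are invented, and real meta-analyses also assess heterogeneity and may use random-effects models instead:

```python
# Fixed-effect inverse-variance pooling: each study's log relative risk
# is weighted by 1/variance, so larger, more precise studies count for
# more, and the pooled estimate has a narrower CI than any single study.
import math

# (log RR, standard error) for three hypothetical studies
studies = [(math.log(1.5), 0.30), (math.log(1.8), 0.25), (math.log(1.4), 0.20)]

weights = [1 / se ** 2 for _, se in studies]
pooled_log = sum(w * log_rr for (log_rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)

print(round(pooled_rr, 2), (round(ci_low, 2), round(ci_high, 2)))
```

Note that the pooled standard error is smaller than any individual study's standard error, which is the "more precise estimate" the card refers to.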
111
What are the limitations of meta-analyses
Designs of studies are too different, outcomes measured are not sufficiently similar, concerns about quality of studies
112
What do forest plots show
Each study's estimate with its confidence interval, allowing many studies to be compared; the diamond shows the overall meta-analysis estimate
113
When is it unethical to continue a cumulative meta-analysis
When subsequent trials only continue to narrow the CI around an already-established conclusion, so continuing to randomise participants adds nothing
114
What are the challenges of meta-analyses
Doing all steps well, publication bias, poor quality trials/studies, heterogeneity can lead to conflicting reviews and inconclusive results
115
What's the difference between a systematic review and meta-analysis
A systematic review answers a specific research question by evaluating and summarising all the studies on the topic. A meta-analysis goes on to use statistical methods to combine the data from the studies.
116
What are the pros of meta-analyses
Reproducibility and rigour lead to increased confidence. Comprehensive and transparent; identify limits and gaps in knowledge; provide a basis for decisions
117
What is good about Cochrane
Global, independent, international and interdisciplinary network, not for profit, no commercial sponsorship, freely available to everyone in NZ
118
What is critical appraisal
The process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context
119
What does the abstract include
Summary of paper contents, main findings
120
What does the introduction include
Background to the research, what was already known on the subject, what they wanted to investigate with the study, aims and objectives of the study
121
What does the methods include
Selection of participants, structure of the study, definition of exposures and outcomes measured, how demographics, exposures and outcomes were measured/classified, methods used to control for confounding and statistical analysis
122
What does the results include
Reporting of all results in text, tables and figures. Assessment of chance, bias, confounding
123
What does the discussion include
Strengths and challenges experienced during the study, evidence for causation, researchers' assessment of the implications of the results, importance of this information
124
What does the conclusion include
Outlines what the study adds to current knowledge, where to from here
125
What does critical appraisal require
Epidemiological knowledge, systematic and structured approach, note taking, reasoning and logical thought, use of frameworks to aid you in extracting information from papers
126
What does table 1 show
Participants' baseline characteristics