Midterm - Lectures 1-13 Flashcards

(176 cards)

1
Q

What am I asking when I adjudicate a hypothesis?

A

Do I believe this exposure is a risk factor for disease or not?

Do I believe this intervention will work or not?

2
Q

The purpose of a research study is

A

to resolve a specific, clearly stated question posed by investigators in public health or medicine.

3
Q

What is a research hypothesis?

A

A statement derived from a theory that predicts the relationship among variables representing concepts, constructs, or events.

Hypotheses are specific, testable predictions about what will happen under a given set of circumstances or conditions.

4
Q

How do we use research hypotheses in Public Health?

A

To determine what might make people sicker or healthier, to inform public health policy.

5
Q

What is epidemiology?

A

The study of the distribution and determinants of health-related states or events in human populations, and the application of this study to control health problems.

6
Q

What is biostatistics?

A

Learning from data in the public health field.

We use it to develop and apply statistical methods to scientific research for the advancement of health.

7
Q

What is the goal of causal inference?

A

To determine the impact of exposure X on outcome Y in population P.

8
Q

Research Question asks/specifies:

A

What is going to happen?

9
Q

Research Hypothesis asks/specifies:

A

what you think will happen, based on existing evidence.

10
Q

Operationalization

A

Specifying exactly what is covered by each X, Y, and P; needed to interpret findings, analyze the data, and give the study meaning.

11
Q

Vaughn Method

A

1) Identify the causal hypothesis: exposure to intervention or antecedent X will cause, or be related to, a change in outcome Y among entities in population P.
2) Operationalize your X, Y, and P.

12
Q

Why do we operationalize the variables?

A

So we can identify threats to internal and external validity and make sure the study is replicable, so the results can be generalizable.

13
Q

Representative:

A

the ability of your sample to reflect the original population of interest

14
Q

External validity

A

The ability to generalize sample findings to other persons, places, settings, or times

15
Q

Random

A

each person (or entity) has an equally likely chance of being selected

16
Q

systematic sampling

A

Sampling that follows a set system or pattern (for example, selecting every kth person).

May not be bad, but may have consequences for representativeness.

17
Q

Convenience Sampling

A

Sample from those close by who fit certain parameters

18
Q

Sample on the outcome

A

What we do when the outcome is rare:

choose people based on disease status, make sure groups match in every way except disease

19
Q

Study Design

A

Isolate and hold all else constant except the causative agent…includes:

1) existence of a comparison group
2) # of measurement points
3) group allocation criteria

20
Q

RCT

A

The gold standard:

1) sample from the population
2) randomly assign to groups or study arms
3) measure Y or other important constructs before exposure to the intervention to have a baseline
4) implement the intervention in one arm, standard of care in the other
5) measure again post-intervention

21
Q

As you move away from RCT, what is it harder to do?

A

infer causation; have more threats to internal and external validity

22
Q

Quasi-Experimental Study

A

looks like an RCT, but there is non-random assignment to study arms

23
Q

Two types of observational studies

A

cohort study

case-control study

24
Q

Cohort Study

A

subjects are selected based on exposure and followed forward in time to assess disease onset (longitudinal)

25
Case-control study
subjects are selected based upon disease status and prior exposures are generally assessed retrospectively
26
Cross-sectional study
sample of population is selected and exposure/disease status is assessed simultaneously
27
Name 4 types of variable measurement
Dichotomous, Categorical, Ordinal, Continuous
28
Dichotomous
binary variables (0/1, yes/no)
29
Categorical
Fixed # of 3 or more unordered categories | white/black/latino
30
Ordinal
Fixed # of 3 or more ordered categories | excellent, fair, poor
31
Continuous
can take on any number of unlimited distinct values between a theoretical min and max
32
Histograms
used to describe center and spread via frequency counts or percentages.
33
A right skewed diagram
Has a tail to the right
34
A left skewed diagram
Has a tail to the left
35
IQR
interquartile range: Q3 − Q1
36
If we add "c" to each observation in a sample, what happens to each measure of central tendency/spread?
New mean = old mean + c; new median = old median + c; new standard deviation = old standard deviation (unchanged)
37
If we multiply each observation by "c" in a sample, what happens to each measure of central tendency/spread?
New mean = old mean × c; new median = old median × c; new standard deviation = old standard deviation × |c|
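A quick numeric check of these two rules, sketched in Python (the sample values and the constant c are made up for illustration):

```python
import numpy as np

x = np.array([2.0, 4.0, 5.0, 7.0, 12.0])  # made-up sample
c = 3.0

# Adding a constant shifts the center but leaves the spread unchanged.
print(np.mean(x + c), np.mean(x) + c)      # equal
print(np.median(x + c), np.median(x) + c)  # equal
print(np.std(x + c), np.std(x))            # equal: SD unchanged

# Multiplying by a constant scales both the center and the spread.
print(np.mean(x * c), np.mean(x) * c)      # equal
print(np.median(x * c), np.median(x) * c)  # equal
print(np.std(x * c), np.std(x) * c)        # equal: SD scaled by |c|
```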
38
3 steps to measuring disease occurrence
1) define the population 2) define what a case is 3) specify what info you want and why
39
4 common measures of disease frequency
1) Counts: # of people w/ disease 2) Proportions (fraction of the population affected) 3) Rates (how fast disease is occurring in a population; involves an element of time) 4) Ratios (info about disease in one group relative to disease in another)
40
7 common proportions for disease frequency
Point prevalence, period prevalence, cumulative incidence, attack rate, case fatality rate, mortality rate, infant mortality rate
41
Population at risk
must be free of disease at the beginning of the follow-up period, and must be susceptible (have the right organs)
42
How is cumulative Incidence often denoted?
X cases per 100 people over a specified time period, or as a percent; always remember to note the time period
43
How is prevalence denoted?
as a %
44
What is the difference between prevalence and incidence?
Prevalence counts existing cases (people who already have the disease), while incidence counts new cases.
45
How is incidence rate denoted?
per unit of person-time (person-years, person-days, etc.)
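To make the denominators above concrete, here is a minimal sketch in Python with made-up cohort numbers showing how prevalence, cumulative incidence, and incidence rate are calculated:

```python
# Hypothetical cohort: 1,000 people followed for 2 years (all numbers invented).
population = 1000
existing_cases_at_start = 50   # prevalent cases at baseline
new_cases = 80                 # new cases during follow-up
person_years = 1830.0          # total disease-free time contributed

point_prevalence = existing_cases_at_start / population   # a proportion, reported as a %
at_risk = population - existing_cases_at_start            # disease-free at baseline
cumulative_incidence = new_cases / at_risk                 # new cases per person at risk over the 2 years
incidence_rate = new_cases / person_years                  # new cases per person-year

print(f"Point prevalence: {point_prevalence:.1%}")
print(f"Cumulative incidence: {cumulative_incidence:.1%} over 2 years")
print(f"Incidence rate: {incidence_rate:.3f} per person-year")
```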
46
Factors that influence prevalence
Long disease duration, increased incidence, improved detection, immigration of patients, emigration of healthy people
47
Types of epidemic curves
Propagated, point source, continuous source
48
screening is
the examination of asymptomatic people in order to classify them as likely or unlikely to have a certain disease; we screen to catch the disease early on so we can intervene
49
preclinical phase
period between disease onset and observable symptoms
50
detectable preclinical phase
period between when the disease becomes detectable by screening and the first observable symptoms
51
Sensitivity
Likelihood of testing positive, given that I have the disease
52
specificity
Likelihood of testing negative, given that I don't have the disease
53
Pos Predictive Val
Likelihood that I have the disease, given that I test positive
54
Neg Predictive Val
Likelihood that I don't have the disease, given that I test negative
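All four of these conditional probabilities fall out of a single 2×2 screening table; here is a minimal sketch with hypothetical counts:

```python
# 2x2 screening table with made-up counts.
# Columns: true disease status; rows: test result.
TP, FP = 90, 40     # test positive: diseased / not diseased
FN, TN = 10, 860    # test negative: diseased / not diseased

sensitivity = TP / (TP + FN)   # P(test+ | disease+)
specificity = TN / (TN + FP)   # P(test- | disease-)
ppv = TP / (TP + FP)           # P(disease+ | test+)
npv = TN / (TN + FN)           # P(disease- | test-)

print(sensitivity, specificity, ppv, npv)
# Sensitivity and specificity are properties of the test itself;
# PPV and NPV also depend on how common the disease is in the screened population.
```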
55
Overall probability
a marginal probability
56
conditional probability
P(A|B)
57
Joint probability
P(AB)
58
When are highly sensitive tests preferred?
when the harm from a false positive is low and the harm from a false negative is high
59
When are highly specific tests preferred?
when the harm from a false positive is high and the harm from a false negative is low
60
Sensitivity and Specificity are
Properties of the screening test and independent of prevalence; a change in prevalence will not impact them. The two numbers have an inverse relationship to one another.
61
When is false neg an issue?
If the next step is acceptable and early detection improves prognosis, we want to catch as many diseased people as possible; here we prefer sensitivity over specificity.
62
When are false pos an issue?
If the next step is invasive/risky and/or the improvement from early detection is small, we want to minimize these (here specificity is more important than sensitivity).
63
Lead Time Bias
Bias introduced by detecting a disease earlier, giving the impression of beneficial effect because the patient appears to live longer, even if we do nothing
64
Epidemic
illness occurs in excess of normal expectancy
65
endemic
the normal (expected) level of disease or incidence in a population
66
point source epidemic
Many are infected in a short time, then incidence returns to normal; little to no secondary transmission; whatever caused the disease is no longer present.
67
propagated epidemic
Person-to-person spread; peaks get increasingly larger over time but have similar spacing between them. Multiple peaks occur because of incubation and latency periods.
68
continuous source epidemic
A spread-out distribution of cases over time; may plateau if the source isn't removed.
69
7 steps in an outbreak investigation
1) define epidemic 2) look for variables related to risk of getting disease 3) develop hypotheses 4) test and retest hypotheses as appropriate 5) recommend control measures 6) prepare a written report 7) communicate findings
70
Case Definition
What we call the diseased. Infectious diseases have necessary causes. Case definitions are often based on non-specific clinical symptoms, so they may misclassify some people; there is a trade-off between sensitivity and specificity here.
71
Infectious diseases are special because
they violate the independence assumption: my risk of disease affects your risk of disease
72
3 factors determine if you transmit your infection to someone else
1) beta - probability of passing on infection given one contact with a susceptible 2) contact - how many susceptibles you come into contact with 3) duration - how long you're infectious
73
Ro =
β × c × d; the basic reproductive number = P(transmission per contact) × contacts over time × duration of infectiousness. It is the number of susceptibles that will become infected by one vector before they stop being infectious; it assumes the population is composed entirely of susceptibles.
74
if Ro is <1
no epidemic
75
If ro = 1
endemic
76
if ro >1
epidemic
77
Ret =
the effective reproductive number at time t, = R0 × (St/N), where St = susceptibles left at time t and N = number in the population
78
When do epidemics peter out?
when Ro*(St/N) is less than or equal to 1
79
What do we need to do with a higher Ro?
vaccinate more!
80
what do we need to do with a lower Ro?
vaccinate less!
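Putting the cards' formulas together (R0 = β·c·d and Ret = R0·(St/N)), here is a short sketch with hypothetical parameter values:

```python
# Hypothetical parameters (not from the course).
beta = 0.05       # probability of transmission per contact with a susceptible
contacts = 10     # contacts per unit time
duration = 5      # units of time spent infectious

R0 = beta * contacts * duration      # basic reproductive number
print(R0)                            # 2.5 > 1, so an epidemic can take off

N, S_t = 10_000, 3_500               # population size and susceptibles left at time t
Re_t = R0 * (S_t / N)                # effective reproductive number at time t
print(Re_t)                          # 0.875 <= 1, so the epidemic peters out

# Rearranging Re_t = R0 * (St/N) <= 1 gives St/N <= 1/R0: the epidemic dies out
# once the susceptible fraction falls below 1/R0 (here 0.4), which is why a
# higher R0 requires vaccinating a larger share of the population.
print(1 / R0)
```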
81
If necessary and sufficient:
If you have the disease, you have the exposure; if you have the exposure, you have the disease.
82
If necessary but not sufficient
If you have the exposure, you don't necessarily have the disease; if you have the disease, you must have had the exposure.
83
Sufficient but not necessary
If you have the exposure, you have the disease; if you have the disease, you haven't necessarily been exposed.
84
Neither necessary nor sufficient
Both exposed and unexposed have disease. Both diseased and non-diseased have exposure. Some non-diseased have exposure, some do not.
85
5 steps for causal inference in research
1) develop a theory of causation 2) test hypothesis 3) design and conduct a study 4) analyze data 5) interpret results (how certain am I x causes y)
86
3 things to minimize in research studies
Random error, bias, confounding
87
Random Error
results from measurement errors and sampling variability (by chance)
88
Bias
Systematic error in the design and/or conduct of any aspect of a study that results in an incorrect estimate of x, y, or p or the estimate of an exposure's effect on the risk of disease
89
Confounding
other "third" variables associated with both exposure and disease that distort the estimate of an exposure's effect on the risk/rate of disease
90
JS Mill Method of Difference
A causes B if, all else held constant, a change in A is accompanied by a change in B
91
1960s web of causation
good for chronic illness; complex web of interconnected host and environmental factors
92
A.B. Hills 9 Causal Criteria
Strength of association, consistency, specificity, temporality, biological gradient, plausibility, coherence, experiment, analogy. Useful when examining a body of literature.
93
Strength of Association
How different the risk of disease is in the exposed vs. the unexposed; stronger associations are more likely to be causal than weaker ones. Be careful that the association is not due to confounding. The strength of the association between exposure and outcome is measured by the OR or RR.
94
Consistency
Findings are consistent with other data; replication is possible across other persons, places, settings, and times. (Consistency can be due to similar confounding across studies; a lack of consistency can be due to different operationalizations of X, Y, and P.)
95
Specificity
A specific exposure is associated with one disease. Be careful: a cause can have multiple effects, and an effect can have multiple causes.
96
Temporality
Cause must precede effect in time.
97
Biological Gradient
Dose-response relationship. Be careful of threshold effects (risk of disease only appears above a certain exposure level) and curvilinear effects (think of sun exposure: vitamin D vs. skin cancer).
98
Plausibility and Coherence
Is it biologically plausible? Is it consistent with current biologic/social knowledge? A cause-and-effect relationship can't conflict with the known natural history and biology of the disease (but many cause/effect relationships were established before the biologic mechanism was identified).
99
Experiment
Can we manipulate the cause to show the effect? Strive to hold all else equal with an RCT (may not be feasible, may be unethical).
100
Analogy
Are there similarities between this observation and other known observations?
101
Rothman's Sufficient-Component Cause Model
There may be multiple mechanisms that cause a disease; each case of disease occurs through a sufficient cause. Useful when developing a theory or hypothesis.
102
Necessary Cause
causal component that is a member of every sufficient cause
103
to prevent disease do we need to know every component cause?
no - just blocking one will prevent disease
104
Sufficient Cause
It's enough: a complete mechanism for getting the disease. There may be more than one sufficient cause per disease.
105
component cause
each participating factor in a sufficient cause
106
Limitations of Sufficient-Component Cause Model
Omits origins and focuses on the proximal. Outlines the components, but doesn't explain their linkages.
107
How does randomization help with confounding?
Helps avoid it by distributing measured and unmeasured third variables evenly between the arms
108
What threatens randomization/all else equal?
If the groups change after randomization through attrition, improper follow-through, etc.
109
Intent to Treat
Analysis based upon group allocation, regardless of exposure to the intervention; no post-hoc group manipulation.
110
Per Protocol Analysis
Analysis based upon exposure to the intervention; can lead to confounding, because people usually quit for a reason.
111
Length Time Bias
Less aggressive diseases are more likely to be picked up in a screening program because of their longer detectable pre-clinical phase. As less aggressive forms typically have better survival, it appears that screening leads to longer survival.
112
Table % or probabilities are
Joint probabilities, because they show the % with both X AND Y
113
Row or column probabilities are
Conditional probabilities, because they show X GIVEN Y
114
Marginal Probabilities
Probabilities from the margins (row and column totals) of the 2x2 table
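A small sketch (with invented counts) showing how the same 2×2 table yields joint (cell), conditional (row), and marginal probabilities:

```python
import numpy as np

# Rows = exposure (yes/no), columns = disease (yes/no); counts are invented.
counts = np.array([[30,  70],
                   [20, 180]])
n = counts.sum()

joint = counts / n                                             # cell %: P(exposure AND disease)
row_conditional = counts / counts.sum(axis=1, keepdims=True)   # row %: P(disease | exposure)
marginal_exposure = counts.sum(axis=1) / n                     # margin: P(exposure)
marginal_disease = counts.sum(axis=0) / n                      # margin: P(disease)

print(joint)
print(row_conditional)
print(marginal_exposure, marginal_disease)
```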
115
Measures of Association
Relationship between two factors, not just the incidence or prevalence of one
116
Two types of measures of association
Relative and absolute
117
Name 3 relative measures of association
Risk ratio, rate ratio, odds ratio
118
What are two other names for the risk ratio?
Relative risk; cumulative incidence ratio. It compares cumulative incidences (risks).
119
What is a written intepretation of a risk ratio number?
Over the specified time period, individuals in the exposed group experienced, on average, X times the risk of the disease compared to the unexposed group.
120
What do you need to use the risk ratio?
The whole population should be at risk at the start of the study; follow-up should be the same for all participants; at the end of the study, everyone should be classified as either D+ or D−.
121
Incidence Rate Ratio - other name? and when do you use?
Rate ratio. It shows how much greater the incidence rate is in the exposed group than in the unexposed group. Use it when follow-up is unequal.
122
Interpretation of Incidence Rate Ratio
Individuals in the exposed group have X times the incidence rate of disease during the follow-up period compared to those in the unexposed group.
123
Odds ratio
A comparison of the odds of disease in the exposed group vs. the unexposed group
124
When are the IR and OR similar?
When the outcome is rare (disease prevalence <10%)
125
What is the attributable proportion?
How much disease would be removed from a population if we removed the exposure of interest; the % of disease incidence attributable to the exposure.
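Using a 2×2 layout (exposed vs. unexposed by diseased vs. not diseased, with hypothetical counts), the relative measures and the attributable proportion can be computed as below; note that the attributable proportion can be defined among the exposed or for the whole population, so both versions are shown:

```python
# Hypothetical counts.
a, b = 40, 160   # exposed:   diseased / not diseased
c, d = 10, 190   # unexposed: diseased / not diseased

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
risk_total = (a + c) / (a + b + c + d)

risk_ratio = risk_exposed / risk_unexposed     # relative risk / cumulative incidence ratio
odds_ratio = (a / b) / (c / d)                 # odds of disease, exposed vs. unexposed
ap_exposed = (risk_exposed - risk_unexposed) / risk_exposed   # attributable proportion among the exposed
ap_population = (risk_total - risk_unexposed) / risk_total    # population attributable proportion

print(risk_ratio, odds_ratio, ap_exposed, ap_population)
# With a rarer outcome, the odds ratio tracks the risk ratio more closely.
```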
126
What can you do to a null hypothesis?
reject or do not reject
127
Alpha is
the probability of a type 1 error
128
what is more grave? an alpha error or a beta error?
alpha error
129
When do we reject?
when the null is not a good description of what is going on… when it's not normal to get a result like the one you got.
130
When do we DNR?
when the null is a good description of what is going on…when it IS normal to get a result like the one you got.
131
variability is another word for
dispersion, spread, uncertainty
132
Two-sample t-test
Given population P and binary exposure X, are the mean values of outcome Y the same for the exposed and the unexposed?
133
Just because data are statistically significant, doesn't mean they are
clinically significant.
134
standard error:
The variability of sample means; if we reran the same study over and over again, the variability among those sample means is the standard error.
135
standard deviation
the variability of the original measurements
136
paired t test is asking
Given a single population P and paired responses on outcome Y, is the mean change score equal to 0 or not?
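A minimal sketch of both tests using scipy; the data are simulated for illustration, not course data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two-sample t-test: is the mean of outcome Y the same in exposed vs. unexposed?
exposed = rng.normal(loc=5.5, scale=1.0, size=40)
unexposed = rng.normal(loc=5.0, scale=1.0, size=40)
t_stat, p_value = stats.ttest_ind(exposed, unexposed)
print(t_stat, p_value)

# Paired t-test: within a single population, is the mean change score zero?
before = rng.normal(loc=120, scale=10, size=30)
after = before - rng.normal(loc=3, scale=5, size=30)
t_stat, p_value = stats.ttest_rel(before, after)
print(t_stat, p_value)
```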
137
What does science prefer - evidence or facts?
evidence
138
Facts - how are they complicated?
They need to be acquired with human senses and involve many people; they are made complicated by different definitions, values, and lenses, as well as imperfections in recording or collecting.
139
What is evidence, what does it imply
Grounds for a belief; proof; something that makes evident. It implies an ongoing evaluation process and can be manipulated.
140
What is evidence comprised of?
Tangible/quantitative and intangible/qualitative
141
Quantitative Data
Info that is systematically collected and that employs a set of rules or use of instruments to minimize human judgement
142
Qualitative Data
Info that is systematically collected, and that requires human judgement
143
When is evidence considered stronger (4 things):
1) When the source/creator is known 2) When the same thing is said repeatedly 3) When the same definitions are used 4) When clear rules of observation and measurement are used
144
What do quantitative and qualitative data exist on?
A continuum: data can be quantitative but can also be qualitative. The type determines collection and analysis, BUT the criteria for strong vs. weak evidence should be the same.
145
What is Measurement?
The act of quantifying or categorizing something of interest
146
What are the 2 skills of public health?
1) figuring out what is? | 2) skill of listening
147
What is triangulation?
The process by which a finding is explored by comparing conclusions from different sources; consistency across a large body of evidence.
148
What is validity ?
Am I measuring what I think I'm measuring?
149
What does validity require?
consideration of definitions, method of measurement, the quality of records
150
What is reliability?
Reproducibility, repeatability, or consistency; a measure is stronger the more reproducible it is.
151
What factors negatively impact reliability?
The test may not be reproducible; conditions or methods may have changed; different people (or inept people) may run the test; mistakes are possible.
152
What type of scale is reliability on?
A sliding scale - hard to be 100% reliable
153
What are 2 ways to assess reliability?
1) Test-retest reliability | 2) Internal Consistency
154
Test-Retest Reliability
Use a tool to measure X at T1, wait, measure the same thing in the same way again at T2, and compare.
155
Internal Consistency
Ask the same thing more than once in different ways to see if we get the same story over and over again. The "overall reliability correlation coefficient" (Cronbach's alpha), if high, means we have good internal consistency.
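Cronbach's alpha can be computed directly from a respondents-by-items matrix; a minimal sketch with invented 1-5 ratings:

```python
import numpy as np

# Hypothetical ratings: 6 respondents x 4 items that ask about the same construct.
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)        # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores

cronbach_alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(cronbach_alpha)   # values near 1 suggest good internal consistency
```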
156
What is accuracy?
Accuracy combines both ideals of validity and reliability
157
Can Data be reliable but not valid?
yes
158
Can data be valid but not reliable?
No. Reliability is necessary but not sufficient for validity, because you can't be measuring what you want if you aren't hitting the same thing twice.
159
How do you assess validity?
Compare it to a gold standard
160
Problem with comparison to gold standards for validity?
not all things have a gold standard (think qualitative data)
161
What did Tukey say?
1) When the right thing can only be measured poorly, people often measure the wrong thing, because it can be measured well. 2) It's often worse to have a good measure of the wrong thing -- especially if it will be used as an indicator of the right thing -- than to have a poor measure of the right thing. 3) We must measure what is needed for policy guidance, even if it can only be measured poorly.
162
Tradeoffs when dealing with no gold standard for intervention (3)
1) the regret of waiting for quality information 2) the frequency of symptoms/problems arising in the meantime 3) whether there are surrogates for what you want
163
Reproducibility
getting the same result through similar or identical measures
164
Theory
Well-substantiated explanation of some aspect of the natural world; an organized system of accepted knowledge that applies in a variety of circumstances to explain a specific set of phenomena
165
Hypothesis
a statement derived from a theory that predicts a relationship among variables representing concepts, constructs, or events. More narrow in scope than a theory.
166
Science
A systematic attempt to establish theories to explain observed phenomena and the knowledge obtained through those efforts
167
Occam's Razor
All things being equal, we prefer the simplest explanation.
168
Popper
If I can find evidence that disproves a theory, toss the theory away. Theories can never be proven, only disproven. The longer a theory survives, the more likely it is to be true.
169
Inductive Reasoning
Conclusions don't flow necessarily from the premises and may contain information not in the premises (i.e., start narrow and let logic flow to broader conclusions not found in the original information).
170
Deductive Reasoning
The conclusion flows necessarily from the premises; moves from the general to the specific. Its limitation is that it can't derive new insight beyond what is already in the premises.
171
Falsification
Requires a testable and rejectable hypothesis. Usually means measurable and time-bound, with outcomes that have widely accepted definitions and ways to be measured. What is or isn't falsifiable is very contested in the political world.
172
Hume's 3 factors of causes
Temporal order :) Spatial relationship :( Constant conjunction (the cause is always present with the effect) :(
173
JS Mill & Causes
Temporal order; plausibility (is the mechanism between A and B possible?); rule out alternative explanations (confounding); the cause should be manipulated to show its influence on the outcome (control group).
174
What is the dominant model of science?
Empirical Falsification
175
What should hypotheses and theories be?
Falsifiable, in order to contribute to the mainstream body of scientific knowledge.
176
Who has given us a template of causation for health problems?
A.B. Hill