Variables and Significance Flashcards
(16 cards)
Variable
Any data point or characteristic that can be measured or counted
Ex: age, gender, BP, pain
Can be clinical endpoints
-death, stroke, hospitalization or an adverse event
Can be intermediate (or surrogate) endpoints used to assess an outcome
-measuring serum creatinine to assess the degree of renal impairment
Independent vs Dependent Variables
Independent variable is changed (manipulated) by the researcher in order to determine whether it has an effect on the dependent variable (the outcome)
Independent
-Ex: drugs, drug dose(s), placebos, patients included (e.g., age, gender, comorbid conditions)
Dependent
-Ex: HF progression, A1C, BP, cholesterol, mortality
To show significance…
the trial needs to demonstrate that the null hypothesis is not true and should be rejected, and the alternative hypothesis can be accepted
*null hypothesis and alternative hypothesis are always complementary; when one is accepted, the other is rejected
Null Hypothesis
Null means none or no
H0 states that there is NO statistically significant difference between groups
Is what the researcher tries to disprove or reject
Alternative Hypothesis
HA states that there IS a statistically significant difference between groups
Is what the researcher tries to prove or accept
Alpha Level
When investigators design a study, they select a maximum permissible error margin, called alpha (α)
Alpha is the threshold for rejecting the null hypothesis
Alpha is commonly set at 5% (or 0.05)
Alpha correlates with the values in the TAILS of a normal distribution
(A smaller alpha value can be chosen (e.g., 1%, or 0.01), but this requires more data, more subjects (which means more expense) and/or a larger treatment effect)
P-Value
The p-value is compared to alpha
If alpha is set at 0.05 and the p-value is less than 0.05, the null hypothesis is rejected and result is STATISTICALLY SIGNIFICANT
p-value < alpha (e.g., p < 0.05)
If the p-value is greater than or equal to alpha (p ≥ 0.05), the study has failed to reject the null hypothesis, and the result is NOT statistically significant
p-value ≥ alpha (e.g., p ≥ 0.05)
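A minimal Python sketch of this decision rule; the alpha and p-value numbers below are hypothetical, not from any real trial:

alpha = 0.05      # threshold chosen by the investigators
p_value = 0.03    # hypothetical result reported by the trial

if p_value < alpha:
    print("Reject the null hypothesis: statistically significant")
else:
    print("Fail to reject the null hypothesis: not statistically significant")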
Confidence Interval
A confidence interval (CI) provides the same information about significance as the p-value, plus the precision of the result
-Alpha and the CI in a study will correlate with each other.
CI = 1 - α
If alpha is 0.05, the study reports a 95% CI; an alpha of 0.01 corresponds to a 99% CI
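A small Python sketch of the CI = 1 - alpha correspondence, using the two alpha values mentioned above:

# Confidence level corresponds to 1 - alpha
for alpha in (0.05, 0.01):
    print(f"alpha = {alpha} -> {1 - alpha:.0%} CI")
# alpha = 0.05 -> 95% CI
# alpha = 0.01 -> 99% CI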
Alpha vs P-Value vs Meaning
alpha 0.05, p ≥ 0.05
-not statistically significant
alpha 0.05, p < 0.05
-statistically sig: 95% confidence that the conclusion is correct (less than 5% chance it's not)
alpha 0.01, p < 0.01
-statistically sig: 99% confidence that the conclusion is correct (less than 1% chance it's not)
alpha 0.01, p < 0.001
-statistically sig: 99.9% confidence that the conclusion is correct (less than 0.1% chance it's not)
Statistical Significance Based on CI ONLY
Comparing Difference Data (Means)
-The result is statistically significant if the CI range does not include zero (e.g., zero is not present in the range of values)
Comparing Ratio Data (RR/OR/HR)
-The result is statistically significant if the CI range does not include one (e.g., one is not present in the range of values)
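A Python sketch of this CI-only check; the CI bounds below are made-up examples, and ci_significant is a hypothetical helper name:

def ci_significant(lower, upper, null_value):
    # Significant if the null value (0 for differences, 1 for ratios) falls outside the CI
    return not (lower <= null_value <= upper)

print(ci_significant(1.2, 4.8, null_value=0))    # difference in means: True (significant)
print(ci_significant(0.85, 1.10, null_value=1))  # RR/OR/HR: False (not significant)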
Narrow CI =
High precision
*preferable
Wide CI =
Poor precision
Type 1 Errors
FALSE POSITIVES
Probability of a type 1 error is alpha (α)
CI = 1 - α (where α is the type 1 error probability)
When alpha is 0.05 and result is reported with p < 0.05, it is statistically significant and probability of type 1 error is < 5%
-You are 95% confident (0.95=1-0.05) that result is correct and not due to chance
The alternative hypothesis was accepted and the null hypothesis was rejected in error
Type 2 Errors
FALSE NEGATIVES
A type 2 error occurs when the null hypothesis is accepted when it should have been rejected; its probability is denoted beta (β)
Beta (β) is typically set at 0.1 or 0.2
-meaning the risk of a type 2 error is 10% or 20%
The risk of a type 2 error increases if the sample size is too small
Study Power (+calc)
Power is the probability that a test will reject the null hypothesis correctly
-the power to avoid a type 2 error
Power = 1 - Beta
As the power increases, the chance of a type 2 error decreases
-Larger sample size needed to increase study power
Power is determined by:
-the number of outcome values collected
-the difference in outcome rates between groups
-the significance (alpha) level
If Beta is 0.2, study has 80% power so there is a 20% chance of missing a true difference and making a type 2 error
If Beta is 0.1, study has 90% power
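A short Python sketch of the Power = 1 - Beta calculation, using the two beta values given on this card:

# Power = 1 - beta (probability of correctly rejecting the null hypothesis)
for beta in (0.2, 0.1):
    print(f"beta = {beta} -> power = {1 - beta:.0%}, type 2 error risk = {beta:.0%}")
# beta = 0.2 -> power = 80%, type 2 error risk = 20%
# beta = 0.1 -> power = 90%, type 2 error risk = 10%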