Assumptions of screening

The disease must be important; it must be detectable before symptoms appear; preclinical disease must progress to the outcome; treatment must be available to avert the outcome; and treatment before symptoms must be better than waiting for symptoms to appear.

Characteristics of a good screening test

Safe/rapid/cheap, acceptable to screened population, reliable and valid.

Does the test give the right answer (are the results consistent with the gold standard) in diseased patients?

Sensitivity

Does the test give the right answer (are the results consistent with the gold standard) in healthy patients?

Specificity

Validity vs. reliability

Validity: does the test give the correct answer, consistent with the gold-standard diagnosis. Reliability: does it repeatedly yield the same result on the same sample.

Of 500 men with prostate cancer, 200 had PSA > 5. Of the 1000 healthy controls, 50 had PSA > 5. What is the sensitivity of this test? What is the specificity of the test? What is the positive predictive value?

500 have the disease and 200 of them tested positive: 200/500 = 0.4 sensitivity. 950 of the 1000 without the disease tested negative: 950/1000 = 0.95 specificity. Of the 250 who tested positive, 200 truly had the disease: 200/250 = 0.8 positive predictive value.
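The 2x2 arithmetic above can be sketched directly from the counts in the question (a minimal illustration, not a general-purpose calculator):

```python
# PSA example: 500 diseased men (200 with PSA > 5),
# 1000 healthy controls (50 with PSA > 5).
tp = 200          # diseased, test positive
fn = 500 - tp     # diseased, test negative
fp = 50           # healthy, test positive
tn = 1000 - fp    # healthy, test negative

sensitivity = tp / (tp + fn)   # 200/500
specificity = tn / (tn + fp)   # 950/1000
ppv = tp / (tp + fp)           # 200/250

print(sensitivity, specificity, ppv)  # 0.4 0.95 0.8
```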

If a test is 100% sensitive, what is the chance of having disease if test is positive? What is the chance of disease if the test is negative?

1) It depends on the specificity. 2) 0%: a negative result on a 100% sensitive test rules out disease. "SNOUT": SeNsitivity rules OUT disease when a 100% sensitive test is negative.

If a test is 100% specific, what is the chance of having disease if test is positive?

A positive result with a 100% specific test rules in disease. "SPIN": SPecificity rules IN disease when a 100% specific test is positive.

Are patients who test positive truly diseased?

Positive predictive value

Are patients who test negative truly healthy?

Negative predictive value

How can you calculate predictive values from sensitivity of 0.4, specificity of 0.95 and prevalence of 0.1?

Make a fake 2x2 table with a sample population of 1000 and fill in the blanks: 100 diseased (40 true positives, 60 false negatives) and 900 healthy (855 true negatives, 45 false positives). Now you know that 40/85 gives the positive predictive value = 0.47, and 855/915 gives the negative predictive value = 0.93.
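The fake-table method can be sketched as a short calculation, using the numbers from the question (prevalence 0.1, sensitivity 0.4, specificity 0.95, n = 1000):

```python
# Fill in a hypothetical 2x2 table for 1000 people.
n = 1000
prevalence, sensitivity, specificity = 0.1, 0.4, 0.95

diseased = n * prevalence     # 100
healthy = n - diseased        # 900
tp = diseased * sensitivity   # 40 true positives
fn = diseased - tp            # 60 false negatives
tn = healthy * specificity    # 855 true negatives
fp = healthy - tn             # 45 false positives

ppv = tp / (tp + fp)          # 40/85
npv = tn / (tn + fn)          # 855/915
print(round(ppv, 2), round(npv, 2))  # 0.47 0.93
```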

If prevalence decreases from 10% to 5% what will happen to sensitivity, specificity, positive predictive value and negative predictive value?

Sensitivity and specificity stay the same. PPV goes down (because there are fewer true positives) and NPV increases (because there are fewer false negatives).
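To see the effect of prevalence on the predictive values, the same table arithmetic can be wrapped in a small function (a sketch reusing the test characteristics from the earlier question, sensitivity 0.4 and specificity 0.95):

```python
def predictive_values(prevalence, sensitivity=0.4, specificity=0.95):
    """PPV and NPV for a given prevalence; sens/spec are fixed properties of the test."""
    tp = prevalence * sensitivity
    fn = prevalence * (1 - sensitivity)
    tn = (1 - prevalence) * specificity
    fp = (1 - prevalence) * (1 - specificity)
    return tp / (tp + fp), tn / (tn + fn)

for p in (0.10, 0.05):
    ppv, npv = predictive_values(p)
    print(f"prevalence {p:.0%}: PPV={ppv:.2f}, NPV={npv:.2f}")
```

Running this shows PPV falling and NPV rising as prevalence drops, while sensitivity and specificity are unchanged by construction.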

How do you choose a cutoff for a test like PSA that has varying levels of specificity and sensitivity based on where you set the cutoff?

Choose a more sensitive cutoff if a missed case is really bad. Choose a more specific cutoff if a false positive is really bad. The best test would have both high sensitivity and high specificity, with an area under the ROC curve closer to 1 than to 0.5.
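The trade-off can be illustrated with made-up PSA scores (these values are hypothetical, chosen only to show that lowering the cutoff raises sensitivity and raising it raises specificity):

```python
# Hypothetical PSA values for illustration only.
diseased = [3.2, 4.8, 5.6, 7.1, 9.0, 12.4]
healthy = [0.9, 1.5, 2.2, 3.0, 4.1, 5.2]

for cutoff in (2.5, 5.0):
    sens = sum(x > cutoff for x in diseased) / len(diseased)   # positives among diseased
    spec = sum(x <= cutoff for x in healthy) / len(healthy)    # negatives among healthy
    print(f"cutoff {cutoff}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```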

What are likelihood ratios?

The likelihood of a given test result in a patient with the disease compared to the likelihood of that result in a patient without the disease. To apply one, you need the pretest probability of disease (estimated by the physician, e.g. from prevalence) and the test characteristics (sensitivity/specificity).

What do these likelihood ratios mean for the patient? How do you convert the pretest odds to post-test odds?

A +LR of 3 means patients with the disease are 3 times more likely to have a positive test than those without the disease. A -LR of 0.3 means patients with the disease are 0.3 times as likely to have a negative test as those without the disease. Multiply the pretest odds by the LR to get the post-test odds, then convert the post-test odds back to a post-test probability.

Your patient has a pretest prevalence of 10% (1:9 pretest odds). The +LR is 10. What is the post-test prevalence?

Multiply the left side of the odds by the +LR: 1:9 becomes 10:9. Convert back to a probability: 10/(10+9) ≈ 53%. This person started out with a 10% chance of having the disease, but after his positive test he has about a 53% chance of having the disease.
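The odds-probability round trip above can be sketched in a few lines:

```python
# Pretest probability 10% -> pretest odds 1:9, multiply by LR+ = 10,
# convert the post-test odds back to a probability.
pretest_prob = 0.10
lr_pos = 10

pretest_odds = pretest_prob / (1 - pretest_prob)      # 1/9
posttest_odds = pretest_odds * lr_pos                 # 10/9
posttest_prob = posttest_odds / (1 + posttest_odds)   # 10/19

print(round(posttest_prob, 2))  # 0.53
```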