Week 8. Evaluating Diagnostic Literature Flashcards

1
Q

Validity

A
  • Is it true? Can I believe it? Are the outcome measures trustworthy and accurate?
  • The extent to which a measure assesses what it is intended to measure.
2
Q

Applicability

A

If valid and important, can/should I apply it to my patients?

3
Q

Use of Diagnostic Tests

A

PTs have increased access to diagnostic imaging (DI), but it should not replace clinical assessment/tests
E.g. shoulder imaging — physicians order shoulder imaging to facilitate referral to a surgeon, but after prolonged wait periods, the surgeon refers the patient to PT

4
Q

Diagnosis Research Goals

A
  1. Evaluate whether a test gives additional information about the presence/absence of a condition
  2. Evaluate whether a clinical test provides similar information to an invasive or radiological test
  3. Evaluate whether a diagnostic test is able to distinguish between patients with and w/o a specific condition
  4. Avoid invasive tests/x-ray exposure; more carefully define injured structures/tissues to customize treatment
5
Q

Clinical Prediction Rules (CPR)

Ottawa Ankle Rules

A
  • Used for diagnosis
  • A rule or model that tries to identify the best combination of S&S and other findings for predicting the probability of a specific outcome
OAR
- sensitivity: 96-99%
- specificity: 26-48%
- if negative, low chance of fracture
- positive if any of: point tenderness at the posterior edge (of the distal 6 cm) or tip of the lateral malleolus; point tenderness at the posterior edge (of the distal 6 cm) or tip of the medial malleolus; inability to bear weight (four steps) immediately
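The OAR decision logic above can be sketched as a simple function (the parameter names are illustrative, not from any clinical system):

```python
def ottawa_ankle_xray_indicated(
    tender_distal_lateral_malleolus: bool,
    tender_distal_medial_malleolus: bool,
    able_to_bear_weight_four_steps: bool,
) -> bool:
    """Return True if an ankle x-ray is indicated under the Ottawa Ankle Rules.

    High sensitivity (96-99%) means a negative rule makes fracture unlikely
    (SnNout); low specificity (26-48%) means many positives are false alarms.
    """
    return (
        tender_distal_lateral_malleolus
        or tender_distal_medial_malleolus
        or not able_to_bear_weight_four_steps
    )

# No malleolar tenderness and able to walk four steps: rule negative,
# so a fracture can be ruled out with high confidence.
print(ottawa_ankle_xray_indicated(False, False, True))  # False
```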
6
Q

Level of evidence in diagnostic study design:

Why can’t RCT be used in Dx studies

A
  • RCTs cannot be done in Dx studies, as all subjects must undergo both tests
Level 1 evidence in Dx studies:
  • Cross-sectional
  • Cohort study designs
7
Q

Methodological Issues in Diagnostic Research

  • Gold Standard Test
A

Inappropriate gold standard/reference test

8
Q

Methodological Issues in Diagnostic Research

  • Verification Bias
A

Verification Bias:

results of the index test influence the decision to perform the gold standard test

9
Q

Methodological Issues in Diagnostic Research

  • Selection/referral bias
A

Selection/Referral Bias:

Evaluation done in a population with a high prevalence of disease, or investigators hand-pick study participants

10
Q

Methodological Issues in Diagnostic Research

  • Measurement Bias
A

Measurement Bias

  • Testers are aware of the gold standard test results, which biases the outcome
  • Criteria for what constitutes a positive/negative result are not well-defined
  • Testers are unable to complete the diagnostic test properly
11
Q

Sensitivity (SnNout)

A

Sensitivity: likelihood of a +test in presence of disease (true positive rate)

SnNout:

  • a negative result on a highly sensitive test is a good way to rule out the condition
  • Example: airport security — a highly sensitive detector will pick up all kinds of metal, so no buzz = no metal; there are lots of false positives, but you don’t miss things
12
Q

Specificity (SpPin)

A

Specificity: likelihood of a -test in the absence of disease (true negative rate)

SpPin: a highly specific test will not falsely identify people as having a condition; a positive result on a highly specific test is likely to accurately detect the presence of a condition
- Example: airport security — if the sensor is turned down, only substantial metal triggers it, so a buzz almost certainly means metal (buzz = metal)

13
Q

Positive Predictive Value

A

Likelihood of disease in the presence of +test

14
Q

Negative Predictive Value

A

Likelihood of not having a disease in the presence of a negative test

15
Q

Positive/Negative predictive values table

  • Rows calculate?
  • Columns calculate?
A
Rows = predictive values
Columns = sensitivity/specificity

              With disease       Without disease
Test +        True+ (TP, a)      False+ (FP, b)     → total who test positive
Test -        False- (FN, c)     True- (TN, d)      → total who test negative
              Total w/ disease   Total w/o disease

16
Q

Accuracy

A

Accuracy = (a + d) / (a+b+c+d)

= (TP + TN) / (TP + TN + FP + FN)

17
Q

Sensitivity calculation (true positive rate, TPR)

A

Sensitivity = a / (a+c)

  • true positive divided by total number with disease.
  • this is the probability of positive test if subject has disease, also called true positive rate
18
Q

Specificity (true negative rate, TNR)

A

Specificity = d/ (b+d)

  • computed as true negatives divided by total number without disease
  • defined as probability of negative test if subject does not have disease; true negative rate (TNR)
19
Q

Positive Predictive Value

A

PPV = a / (a+b)

  • computed as true positive divided by total number that tested positive
  • defined as probability of disease if subject has a positive test
20
Q

Negative Predictive Value

A

NPV: d / (c+d)

  • true negative divided by total number that tested negative
  • probability of no disease if subject has a negative test
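The four formulas on these cards (plus accuracy) can be collected into one sketch, using the 2×2 table convention from earlier (a = TP, b = FP, c = FN, d = TN):

```python
def diagnostic_metrics(a: int, b: int, c: int, d: int) -> dict:
    """Compute standard diagnostic accuracy statistics from a 2x2 table.

    a = true positives, b = false positives,
    c = false negatives, d = true negatives.
    """
    return {
        "accuracy":    (a + d) / (a + b + c + d),
        "sensitivity": a / (a + c),  # TPR: P(+test | disease)
        "specificity": d / (b + d),  # TNR: P(-test | no disease)
        "ppv":         a / (a + b),  # P(disease | +test)
        "npv":         d / (c + d),  # P(no disease | -test)
    }

# Sanity check with a perfect test: 10 TP, 0 FP, 0 FN, 10 TN -> all metrics 1.0
print(diagnostic_metrics(10, 0, 0, 10))
```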
22
Q

Specificity vs. NPV vs. -LR

A

Specificity (d / b+d)

  - do not have disease 
  - probability of negative test 

NPV (d / c+d)

  - negative test 
  - probability of no disease

-LR: the ratio of the probability of a negative test result given the presence of the disease to the probability of a negative test result given the absence of the disease, i.e. (1 − sensitivity) / specificity

23
Q

Which aspect is dependent on prevalence of disease?

A

PVs are dependent on the prevalence of disease, while sensitivity/specificity are not

  • PVs are meaningless out of the context of prevalence
  • Sensitivity and specificity are dependent on the diagnostic threshold; more consistent BETWEEN studies
24
Q

Sensitivity and specificity

- more reliable INTER- or INTRA?

A
  • most consistent between studies (inter-study)

- the diagnostic threshold for a specific diagnostic test is defined as the minimum or maximum requirement to obtain a positive result

25
Q

Receiver Operator Characteristic Curves (ROC Curves)

A
  • 3-way relationship between sensitivity, specificity, and diagnostic threshold
  • curve shows trade-off between sensitivity and specificity with changing diagnostic thresholds

Area under the ROC curve (AUC):
0 = terrible (always wrong)
0.5 = no better than chance
1 = ideal
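The sensitivity/specificity trade-off as the threshold moves can be sketched with made-up test scores (the data here are illustrative only):

```python
def roc_points(scores_diseased, scores_healthy, thresholds):
    """For each diagnostic threshold, compute (1 - specificity, sensitivity).

    A score >= threshold counts as a positive test. Lowering the threshold
    raises sensitivity at the cost of specificity -- the trade-off the ROC
    curve plots.
    """
    points = []
    for t in thresholds:
        sens = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        spec = sum(s < t for s in scores_healthy) / len(scores_healthy)
        points.append((1 - spec, sens))
    return points

# Illustrative (made-up) test scores for diseased vs. healthy subjects
diseased = [7, 8, 9, 6, 8]
healthy = [2, 3, 5, 4, 6]
for fpr, tpr in roc_points(diseased, healthy, thresholds=[3, 5, 7]):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Raising the threshold from 3 to 7 drops the false positive rate from 0.8 to 0.0 while sensitivity falls from 1.0 to 0.8.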

26
Q

Positive Likelihood Ratio

  • ratio indicates?
  • probability that a person?
  • Value indicates?
A

Sensitivity / (1-specificity)

  • ratio of the true positive rate to the false positive rate
  • probability that a person with a positive test has the disease
  • larger numbers indicate higher likelihood of disease
27
Q

Negative LIkelihood Ratio

  • ratio indicates?
  • probability that a person?
  • Value indicates?
A

NLR: (1 - sensitivity) / specificity

  • ratio of the false negative rate to the true negative rate
  • used to determine the probability that a person with a negative test does not have the disease
  • smaller numbers indicate higher likelihood of NO disease
28
Q

LR+ of 7.29 and LR- of 0.166
If an individual takes the test for the disease, we can update their probability of disease by converting it to odds and multiplying by the likelihood ratio

A

If test is positive, updated odds of disease: (1/99) × 7.29 = 0.0736
If test is negative, updated odds of disease: (1/99) × 0.166 = 0.00168
Converting odds back to probability (odds / (1 + odds)): a positive test raises the probability of disease from 1% to about 6.9%, and a negative test lowers it to about 0.17%
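The update above, including the odds-to-probability conversions, can be sketched as (prevalence and LR values are the ones from this card):

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Bayesian update: probability -> odds, multiply by LR, back to probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prevalence = 0.01  # 1% pre-test probability -> pre-test odds of 1/99
print(f"positive test: {post_test_probability(prevalence, 7.29):.3f}")   # ~0.069
print(f"negative test: {post_test_probability(prevalence, 0.166):.5f}")  # ~0.00167
```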

29
Q
+LR      -LR
> 10     < 0.1
5-10     0.1 - 0.33
3-5      0.34 - 0.99
< 3      > 1
A

Almost conclusive
Useful
Marginally Useful
Likely not important

30
Q

Clinical Utility of DI Statistics: Sensitivity/Specificity

A
  • most commonly reported values

- LRs can be calculated from these values

31
Q

Clinical Utility of DI Statistics: PV

A
  • limited usefulness because they are less stable estimates (depend on the population tested/prevalence of disease)
32
Q

Clinical Utility of DI Statistics: LR

A
  • Most clinically useful because they contain both sensitivity/specificity values in 1 ratio
33
Q

Example: A new ‘special test’ for the shoulder has been developed to test for the presence of a rotator cuff tear
We want to compare the results of the new test to a known standard

A = 50
B = 10
C = 15
D = 25
Accuracy? 
Sensitivity? 
Specificity? 
PPV? 
NPV? 
+LR?
-LR?
A
Accuracy = 75/100 = 75%
Sensitivity = 77%
Specificity = 71%
PPV = 83%
NPV = 62%
+LR = 2.7 (likely not important)
-LR = 0.32 (useful)
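The worked example can be checked directly; this sketch reuses the 2×2 definitions from the earlier cards (a = TP = 50, b = FP = 10, c = FN = 15, d = TN = 25):

```python
a, b, c, d = 50, 10, 15, 25  # TP, FP, FN, TN from the example

accuracy = (a + d) / (a + b + c + d)
sensitivity = a / (a + c)
specificity = d / (b + d)
ppv = a / (a + b)
npv = d / (c + d)
pos_lr = sensitivity / (1 - specificity)  # +LR
neg_lr = (1 - sensitivity) / specificity  # -LR

print(f"accuracy={accuracy:.0%} sens={sensitivity:.0%} spec={specificity:.0%}")
print(f"PPV={ppv:.0%} NPV={npv:.0%} +LR={pos_lr:.1f} -LR={neg_lr:.2f}")
```

The output matches the card: accuracy 75%, sensitivity 77%, specificity 71%, PPV 83%, NPV 62%, +LR 2.7, -LR 0.32.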
34
Q

Sensitivity vs. PPV vs. LR+

A

Sensitivity (a / (a+c)): probability of a positive test if subject has disease (TPR)

  - they have the disease
  - probability of a positive test recognizing it

PPV (a / (a+b)): probability of disease if subject has a positive test

  - they have a positive test
  - probability of actually having the disease

LR+ (sensitivity / (1 − specificity)): the ratio of the probability of a positive test result given the presence of the disease to the probability of a positive test result given the absence of the disease