Oct 25 Flashcards

1
Q

What can tests that perform fairly well against the gold standard test be used for?

A

  • not thrown out completely
  • can be used as a pre-diagnostic test: not as accurate as the gold standard, but good enough to pick up likely cases, which are then confirmed with the gold standard

2
Q

What are consequences of a false positive?

A

  • caused by low specificity
  • emotional: telling someone they are sick when in fact they are healthy
  • financial: re-testing or needlessly treating

3
Q

What are consequences of a false negative?

A

  • caused by low sensitivity
  • sick people will go untreated

4
Q

What is the receiver operating characteristic curve?

A

  • useful for comparing screening or diagnostic tests
  • plots the true positive rate against the false positive rate at each cutoff; the test whose curve lies closest to the top left corner is best (high TP rate, low FP rate)
  • an area under the curve (AUC) above 0.9 is considered excellent; above 0.8 is considered good (see the sketch below)
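
A rough illustration in Python, using made-up test scores and disease labels: sweep a cutoff over the scores, compute the true positive and false positive rates at each cutoff, and approximate the area under the resulting curve with the trapezoidal rule.

# hypothetical test scores and true disease status (1 = diseased, 0 = healthy)
scores = [0.1, 0.3, 0.35, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 1, 0, 1, 1, 1]

points = []
for cutoff in sorted(set(scores)) + [float("inf")]:
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    points.append((fp / (len(labels) - sum(labels)),   # false positive rate
                   tp / sum(labels)))                   # true positive rate
points.sort()

# trapezoidal rule over the (FPR, TPR) points gives the AUC
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(round(auc, 2))   # ~0.92 for this toy data; close to 1 is excellent, 0.5 is no better than chance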

5
Q

What is positive predictive value?

A
  • what is the probability you actually have the disease if you test positive (denominator is positive tests)
  • TP/(TP+FP)
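
A minimal worked example in Python (the counts are made up for illustration):

# PPV: of everyone who tested positive, what fraction truly has the disease?
TP, FP = 90, 60          # hypothetical true positives and false positives
ppv = TP / (TP + FP)
print(ppv)               # 0.6 -> a positive test means a 60% chance of disease
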
6
Q

What is negative predictive value?

A
  • probability that the disease is not present when the test is negative (denominator is negative tests)
  • TN/(TN+FN)
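
A matching sketch in Python, again with made-up counts:

# NPV: of everyone who tested negative, what fraction is truly disease-free?
TN, FN = 850, 10         # hypothetical true negatives and false negatives
npv = TN / (TN + FN)
print(round(npv, 3))     # 0.988 -> a negative test means a ~99% chance of no disease
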
7
Q

How would you interpret a PPV of 20%?

A
  • I just got a positive result back
  • What are the chances my patient actually has the disease?

1 in 5
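
One way to turn the percentage into the "1 in 5" statement (Python, value made up):

from fractions import Fraction

ppv = 0.20                                   # hypothetical positive predictive value
print(Fraction(ppv).limit_denominator(10))   # 1/5 -> 1 of every 5 positives truly has the disease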

8
Q

How would you interpret an NPV of 90%?

A

  • what are the chances my patient doesn’t actually have the disease?

9 in 10

9
Q

What is reliability?

A
  • replicability of the test results over repeated administrations of the test
  • a reliable test produces the same result over repeated administrations, provided the patient’s underlying condition remains unchanged over that period
10
Q

What are threats to reliability?

A
  • intrasubject: normal biological variability in human characteristics (e.g. blood pressure)
  • intraobserver: variability in two or more readings of the same test by the same examiner
  • interobserver: variability in readings of the same test by two or more examiners

11
Q

How is reliability measured?

A
  • percent agreement
  • Kappa
12
Q

How is percent agreement calculated? What are problems with this?

A
  • take the concordant pairs between physician A and physician B (the cases where both readings agree)
  • add them and divide by the total number of cases (worked example below)
  • can be inflated if the physicians mostly agree on negative results but agree poorly on positive results; the overall figure then reflects agreement only on the negatives
  • does not account for chance agreement: examiners can arrive at the same result by chance, even when using different criteria
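
A small worked example in Python, with hypothetical counts for the two physicians' readings:

# 2x2 table of readings: rows = physician A, columns = physician B (made-up counts)
both_pos, a_pos_b_neg = 10, 5
a_neg_b_pos, both_neg = 5, 80

total = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
percent_agreement = (both_pos + both_neg) / total * 100
print(percent_agreement)   # 90.0 -- high, but almost all of it comes from the negative readings
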
13
Q

How is Kappa calculated?

A

  • corrects percent agreement for chance agreement: how much of the agreement was caused by chance alone?
  • calculate the proportions of cases that physician A determined were positive and negative
  • multiply those proportions by the numbers of cases physician B determined to be positive and negative; this gives the expected counts in the concordant boxes
  • add the expected concordant boxes and divide by the total, which gives the agreement expected by chance alone
  • Kappa = (observed agreement - agreement by chance) / (100 - agreement by chance), with agreement expressed as percentages
  • a Kappa of 0.75 or above is good; below 0.4 is poor (worked example below)
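
A worked sketch in Python, reusing the hypothetical 2x2 counts from the percent-agreement example (working in proportions rather than percentages, which gives the same Kappa):

# readings: rows = physician A, columns = physician B (made-up counts)
both_pos, a_pos_b_neg = 10, 5
a_neg_b_pos, both_neg = 5, 80
total = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg

observed = (both_pos + both_neg) / total                    # 0.90 observed agreement

# agreement expected by chance: combine each physician's marginal proportions
a_pos = (both_pos + a_pos_b_neg) / total                    # A calls 15% positive
b_pos = (both_pos + a_neg_b_pos) / total                    # B calls 15% positive
chance = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)          # 0.745

kappa = (observed - chance) / (1 - chance)
print(round(kappa, 2))   # 0.61 -- well above chance, but short of the 0.75 "good" mark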