Ch.5 Assessing the Validity and Reliability of Diagnostic and Screening Tests (Flashcards)

Source: Gordis, Epidemiology (Revised)

1
Q

What is a bimodal curve? How does it distinguish between individuals who have an illness and those who do not?

A

Bimodal curve - a distribution with two peaks.

Although some individuals will fall between the two curves, in what is known as the "gray zone", most of the population can be readily classified as either normal or abnormal using the two curves.

2
Q

Define a unimodal curve. How do we separate individuals who are ill from those who are not?

A

Unimodal curve - a distribution with a single peak.

Ideally, we would choose a cutoff value based on biological significance, i.e., a level associated with increased risk of the subsequent disease.

3
Q

Define the validity of a test. What are the 2 components that comprise it?

A

Validity - the ability of a test to distinguish between who has the disease and who does not.

It has 2 components:
Sensitivity - the ability of the test to identify correctly those who have the disease.

Specificity - the ability of the test to identify correctly those who do not have the disease.

4
Q

Define Specificity and Sensitivity of a test.

A

Specificity - the proportion of non-diseased people who are correctly identified as "negative" by the test.

Sensitivity - the proportion of diseased people who are correctly identified as "positive" by the test.

5
Q

True or False: In order to calculate the specificity and sensitivity of the test we cannot know who has the disease and who does not.

A

False. In order to calculate the specificity and sensitivity of a test, we must know who really has the disease and who does not. We do this by comparing our test results with some "gold standard" or external source of "truth" regarding disease status.

6
Q

What is the equation used for sensitivity?

A

Sensitivity = True Positives (have disease and a positive test) / [True Positives + False Negatives (have disease but a negative test)]

7
Q

What is the equation used for specificity?

A

Specificity = True Negatives (no disease and a negative test) / [True Negatives + False Positives (no disease but a positive test)]
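The two formulas above can be sketched in Python; the 2 × 2 counts below are made-up illustrative numbers, not figures from the text:

```python
# Hypothetical 2x2 table counts (illustrative only):
tp = 80   # true positives: have disease, test positive
fn = 20   # false negatives: have disease, test negative
tn = 90   # true negatives: no disease, test negative
fp = 10   # false positives: no disease, test positive

sensitivity = tp / (tp + fn)   # proportion of the diseased correctly called positive
specificity = tn / (tn + fp)   # proportion of the non-diseased correctly called negative

print(f"Sensitivity = {sensitivity:.0%}")   # 80%
print(f"Specificity = {specificity:.0%}")   # 90%
```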

8
Q

What issues arise due to false positives and false negative test results?

A

People with false positive results are screened again using more sophisticated (and expensive) tests, and may never lose the stigma of having been labeled positive. People with false negative results are denied effective early intervention, if any is available, which can amount to a death sentence depending on the seriousness of the disease.

9
Q

What test results are created if we set either the sensitivity or specificity at 100%? Thus, how is a decision made when setting a cutoff value?

A

If the specificity is set at 100%, we create many false negative results.

If the sensitivity is set at 100%, we create many false positive results.

  • The choice of a high or low cut off value for a screening test depends on the importance we attach to the False positives and False Negatives for the disease in question.
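The trade-off behind this choice can be sketched with made-up test scores (the numbers below are assumptions for illustration): as the cutoff rises, sensitivity falls and specificity rises.

```python
# Illustrative score lists: diseased people tend to score higher,
# but the two groups overlap (the "gray zone").
diseased_scores     = [6, 7, 7, 8, 9, 9, 10, 11]
non_diseased_scores = [2, 3, 4, 4, 5, 5, 6, 7]

def sens_spec(cutoff):
    """Call a score >= cutoff 'positive'; return (sensitivity, specificity)."""
    sens = sum(s >= cutoff for s in diseased_scores) / len(diseased_scores)
    spec = sum(s < cutoff for s in non_diseased_scores) / len(non_diseased_scores)
    return sens, spec

for cutoff in (4, 6, 8):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

With the lowest cutoff everyone with disease is caught but many healthy people screen positive; with the highest cutoff the reverse holds.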
10
Q

True or False: When using multiple tests, they can be administered either sequentially or simultaneously.

A

True.

11
Q

True or False: In sequential (2-stage) testing, a less expensive or less invasive test is used first, and those who screen positive are recalled for further testing.

A

True

12
Q

True or False: When you use simultaneous testing, tests A and B are used at the same time, and a person must be identified as positive by test A, test B, or both to be counted in the numerator of net specificity.

A

False. When you use simultaneous testing, tests A and B are used at the same time, and a person must be identified as negative by both test A and test B to be counted in the numerator of net specificity.

13
Q

True or False: When you use simultaneous testing, in order to calculate the numerator for net sensitivity we cannot just add the # of people who tested positive on test A and the # who tested positive on test B, because we do not want to count those positive on both tests twice.

A

True

Ex) 200 diseased people take tests A and B. Test A has 80% sensitivity (200 × 0.80 = 160 positive by test A). Test B has 90% sensitivity; assuming the tests are independent, 160 × 0.90 = 144 are positive by both tests. The people positive by one test only are: test A only, 160 - 144 = 16; test B only, 200 × 0.90 = 180, and 180 - 144 = 36. Adding the three groups gives the net sensitivity: (16 + 144 + 36)/200 = 196/200 = 98%.

14
Q

True or False: When you use simultaneous testing, in order to calculate the numerator for net specificity we cannot just add the # of people who tested negative on test A and the # who tested negative on test B, because we do not want to count those negative on both tests twice.

A

True.

Ex) 800 non-diseased people take tests A and B. Test A has 60% specificity (800 × 0.60 = 480 negative by test A). Test B has 90% specificity. To be counted in the numerator of net specificity, a person must be negative on both tests, so, assuming the tests are independent, 480 × 0.90 = 432 are negative by both. Net specificity = 432/800 = 54%.
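The two worked examples on cards 13 and 14 can be reproduced with a short sketch; it assumes, as the examples do, that tests A and B behave independently.

```python
# Net sensitivity (card 13): 200 diseased people;
# test A sensitivity 80%, test B sensitivity 90%.
diseased = 200
pos_a = diseased * 0.80          # 160 positive by test A
pos_both = pos_a * 0.90          # 144 positive by both (independence assumed)
pos_b = diseased * 0.90          # 180 positive by test B
pos_a_only = pos_a - pos_both    # 16
pos_b_only = pos_b - pos_both    # 36
net_sensitivity = (pos_a_only + pos_both + pos_b_only) / diseased
print(f"Net sensitivity = {net_sensitivity:.0%}")    # 98%

# Net specificity (card 14): 800 non-diseased people;
# test A specificity 60%, test B specificity 90%.
# A person must be negative on BOTH tests to count as negative.
non_diseased = 800
neg_both = non_diseased * 0.60 * 0.90    # 432 negative by both tests
net_specificity = neg_both / non_diseased
print(f"Net specificity = {net_specificity:.0%}")    # 54%
```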

15
Q

What is the difference in approach in the net gain or loss of specificity and sensitivity when using either simultaneous or sequential testing?

A

In simultaneous testing there is a net gain in sensitivity and a net loss in specificity, while in sequential testing there is a net loss in sensitivity and a net gain in specificity.

  • The decision of which approach to use depends on whether the testing is for diagnostic or screening purposes.
16
Q

What is the positive and negative predictive value of the test?

A

The positive predictive value is the proportion of patients who test positive who actually have the disease; the negative predictive value is the proportion of patients who test negative who actually do not have the disease.

17
Q

How do you calculate the predictive value of a test?

A

In order to calculate the predictive value of a test, you divide the # of true positives (or true negatives) by all those who tested positive (or negative): positive predictive value = TP/(TP + FP); negative predictive value = TN/(TN + FN).
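As a sketch of that division (the 2 × 2 counts are hypothetical, not from the text):

```python
# Hypothetical 2x2 counts (illustrative only):
tp, fp = 90, 30     # tested positive: with disease / without disease
tn, fn = 870, 10    # tested negative: without disease / with disease

ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value

print(f"PPV = {ppv:.0%}")   # 75%
print(f"NPV = {npv:.1%}")   # 98.9%
```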

18
Q

What 2 factors affect the predictive value of a test?

A

The two factors that affect the predictive value of a test are 1) the prevalence of the disease in the population and 2) the specificity of the test being used; when the disease is infrequent (low prevalence), specificity has the greater effect on the positive predictive value.
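The prevalence effect can be sketched directly: for a test with fixed sensitivity and specificity (the 90% and 95% below are assumed illustrative values), the positive predictive value collapses as the disease becomes rare.

```python
def ppv(prevalence, sensitivity=0.90, specificity=0.95):
    """Positive predictive value for a given disease prevalence."""
    true_pos = prevalence * sensitivity                # diseased who test positive
    false_pos = (1 - prevalence) * (1 - specificity)   # non-diseased who test positive
    return true_pos / (true_pos + false_pos)

for prev in (0.001, 0.01, 0.10, 0.50):
    print(f"prevalence {prev:6.1%}: PPV = {ppv(prev):.1%}")
```

At 50% prevalence most positives are real; at 0.1% prevalence the false positives from the healthy majority swamp the true positives.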

19
Q

True or False: If the test results cannot be reproduced, the value and usefulness of the test are great.

A

False. If the test results cannot be reproduced, the value and usefulness of the test are minimal; results must be repeatable to be of use.

20
Q

True or False: Intra-subject variation, i.e., different results in the same individual, can be due to factors such as the time of day at which the test is performed.

A

True

21
Q

True or False: Inter-observer variation signifies the variation between two or more readings made by the same observer.

A

False. It is intra-observer variation that signifies the variation between two or more readings made by the same observer.

22
Q

True or False: Inter-observer variation signifies the degree to which observers agree or disagree, as when 2 examiners do not derive the same result.

A

True.

23
Q

Define Percent Agreement. And include its Equation.

A

Percent agreement - a measure of the extent of agreement between two observers.

Percent Agreement = [a/(a + b + c)] × 100

where, in the 2 × 2 table of the two observers' readings, a = the readings both observers call abnormal and b and c = the readings on which they disagree (the cell in which both call the finding normal is excluded).
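A minimal sketch of the formula, assuming a = readings both observers call abnormal and b, c = the readings on which they disagree (the counts are made up for illustration):

```python
a, b, c = 40, 10, 10   # hypothetical cell counts from two observers' readings

percent_agreement = a / (a + b + c) * 100
print(f"Percent agreement = {percent_agreement:.1f}%")   # 66.7%
```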

24
Q

What 2 Questions do you need to ask in order to understand the Kappa Statistic?

A

1) How much better is the agreement between the observers' readings than would be expected by chance alone?
(Percent agreement observed - Percent agreement expected by chance alone)
2) What is the most that the two observers could have improved their agreement?
(100% - Percent agreement expected by chance alone)

25
Q

Define Kappa Statistic. Calculate the Kappa Statistic.

A

Kappa statistic - quantifies the extent to which the observed agreement exceeds that which would be expected by chance alone, relative to the most that the observers could hope to improve their agreement.

Kappa = (Percent agreement observed - Percent agreement expected by chance alone) / (100% - Percent agreement expected by chance alone)
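A sketch of the calculation, with chance agreement derived from the marginal totals of a 2 × 2 table of the two observers' readings (the counts are made-up illustrative numbers):

```python
# rows = observer 1 (abnormal, normal); columns = observer 2 (abnormal, normal)
a, b = 40, 10   # observer 1 abnormal: observer 2 abnormal / normal
c, d = 10, 40   # observer 1 normal:   observer 2 abnormal / normal
n = a + b + c + d

observed = (a + d) / n * 100   # percent agreement observed

# Percent agreement expected by chance alone, from the marginal totals:
p_both_abnormal = ((a + b) / n) * ((a + c) / n)
p_both_normal = ((c + d) / n) * ((b + d) / n)
chance = (p_both_abnormal + p_both_normal) * 100

kappa = (observed - chance) / (100 - chance)
print(f"kappa = {kappa:.2f}")   # 0.60
```

Here the observers agree on 80% of readings, but 50% agreement was expected by chance, so kappa credits them with 30 of the possible 50 percentage points of improvement.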