STATS Flashcards Preview


Flashcards in STATS Deck (21):
1

When the data are normally distributed and conform to a bell-shaped curve, what percentages of observations lie within the mean +/- 1, 2, and 3 SD, respectively?

When the data are normally distributed and conform to a bell-shaped curve, approximately 68%, 95%, and 99.7% of the observations lie within the mean +/- 1, 2, and 3 SD, respectively.
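This 68/95/99.7 rule can be checked with a quick simulation (a sketch using made-up parameters, not data from the card):

```python
import random

random.seed(0)
samples = [random.gauss(0, 1) for _ in range(100_000)]  # standard normal draws

def frac_within(k):
    """Fraction of samples within k standard deviations of the mean (0 here)."""
    return sum(abs(x) <= k for x in samples) / len(samples)

# Expect roughly 0.68, 0.95, and 0.997
f1, f2, f3 = frac_within(1), frac_within(2), frac_within(3)
```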

2

The median is

also known as the 50th percentile. It is the value that lies in the center of the data points when they are arranged in numerical order:

10, 11, 11, 11, 13, 13, 14, 14, 15, 15, 16, 17, 18, 22, 24 (median = 14)

Bottom Line: The median is also known as the 50th percentile.
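The card's data set can be checked with Python's statistics module:

```python
import statistics

data = [10, 11, 11, 11, 13, 13, 14, 14, 15, 15, 16, 17, 18, 22, 24]

# With 15 sorted values, the median is the 8th value
median = statistics.median(data)  # 14
```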

3

Qualitative variables are divided into

nominal and ordinal variables.

4

A nominal variable is

a named category. An example is an individual’s favorite ice cream flavor (chocolate, vanilla, or rocky road).

In the clinical setting, one often sees this broken down into a yes or no answer, as in survival of the patient or the presence or absence of a complication (binary is nominal).

When a nominal variable consists of only two categories (such as yes and no), it is also referred to as a dichotomous variable.

5

An ordinal variable

usually is seen in ranking scales,

such as in injury severity scores.

An example is how miserable an individual feels on a particular day, measured on a scale of 1 to 10.

6

Ordinal variables are tested with

nonparametric statistics.

The Wilcoxon signed rank test is the most common nonparametric test used primarily to examine the differences between two paired treatments using ordinal variables.


7

Chi-Squared

This test determines differences between treatments for nominal variables (ice cream flavor stats).
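For a 2x2 table of nominal counts, the chi-square statistic can be computed from scratch (hypothetical counts for illustration; no continuity correction, which libraries such as scipy.stats.chi2_contingency apply by default for 2x2 tables):

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (no continuity correction) for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    observed = [a, b, c, d]
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical flavor-preference counts for two groups
stat = chi_square_2x2(10, 20, 30, 40)
```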


8

Unpaired t-test

This test is used to compare two unpaired treatments using quantitative variables.

number stats
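The pooled-variance t statistic behind this test can be sketched with the stdlib (hypothetical group data; real analyses would also compute a p-value from the t distribution):

```python
import math
import statistics

def unpaired_t(x, y):
    """Pooled-variance two-sample (unpaired) t statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x) + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical measurements from two independent groups
t = unpaired_t([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
```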



9

The Wilcoxon signed rank test

This test is used to compare two paired treatments, using ordinal variables.

two paired (ranked) treatments, compared on ordinal outcomes
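The test statistic W = min(W+, W−) can be computed by hand (a sketch with hypothetical before/after pairs; a library routine such as scipy.stats.wilcoxon also supplies the p-value):

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed rank statistic W = min(W+, W-), dropping zero differences."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(v):
        # Average rank among tied absolute differences
        positions = [i + 1 for i, x in enumerate(abs_sorted) if x == v]
        return sum(positions) / len(positions)

    w_pos = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_neg = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_pos, w_neg)

# Hypothetical paired scores before and after a treatment
w = wilcoxon_w([125, 115, 130, 140, 140, 115, 140, 125, 140, 135],
               [110, 122, 125, 120, 140, 124, 123, 137, 135, 145])
```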

10

Incidence

the number of specified new events, e.g. number of people falling ill with a specified disease, during a specified period, in a specified population; often expressed per person-years of follow-up.


11

Prevalence

the number of cases of a disease existing in a given population at a specific period of time or at a particular moment in time.
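The incidence/prevalence distinction can be made concrete with a quick calculation (all numbers hypothetical):

```python
# Incidence: NEW cases per unit of follow-up time
new_cases = 12
person_years = 4800            # total follow-up time contributed by the cohort
incidence_rate = new_cases / person_years        # new events per person-year

# Prevalence: EXISTING cases at one point in time
existing_cases = 150
population = 10_000
point_prevalence = existing_cases / population   # cases per person at that moment
```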

12

Normally distributed data is

parametric

and thus conforms to a bell-shaped curve

It is symmetric about the mean, and all three measures of central tendency (the mean, median, and mode) are generally equal or very close in value.

13

Parametric tests

such as Student’s t test, mandate several assumptions about the variables being considered, namely that they are normally distributed.

Parametric tests are generally considered to be more powerful than nonparametric tests, and measurements should be, at a minimum, on an interval scale.

Nonparametric data, on the other hand, is generally skewed and therefore does not fit a bell curve. Discrete data, such as that measured on a nominal or ordinal scale, consists of only a few possible values or categories and will not follow the normal distribution. It should therefore be analyzed using methods other than Student’s t test.

Assessment of nonparametric data generally involves the use of contingency tables, and significance testing uses methods such as the Mann-Whitney U (also known as Wilcoxon rank-sum) or chi-square tests.

14

Sensitivity describes

the proportion of patients who have a disease (D+) that will test positive for the disease (T+).

15

The higher the specificity of a test

the lower the likelihood of obtaining a false-positive result. Specificity, therefore, is also equal to 1 – the false-positive rate.

This makes it a good confirmatory test: because false positives are rare, a positive result effectively rules the disease in (SpPIn). It is high sensitivity, not specificity, that makes a good screening test, since a negative result then rules the disease out (SnNOut).

16

Positive predictive value

gives the percentage of patients with a positive test result who actually have the disease

A test with a low positive predictive value will have a high likelihood of producing false positives and will require confirmatory testing with a more reliable test.

It is calculated by dividing the number of true positives by the total number of positive test results.
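Cards 14-16 boil down to ratios from a single 2x2 table; a sketch with hypothetical screening results:

```python
# Hypothetical 2x2 screening-test results
tp, fp = 90, 30    # test positive: with disease / without disease
fn, tn = 10, 870   # test negative: with disease / without disease

sensitivity = tp / (tp + fn)          # proportion of diseased who test positive
specificity = tn / (tn + fp)          # proportion of healthy who test negative
false_positive_rate = 1 - specificity
ppv = tp / (tp + fp)                  # proportion of positives who truly have disease
```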

17

Type II errors

occur when a difference DOES exist between the population means and yet it is not detected.

The null hypothesis (H0) is inappropriately accepted.

18

The probability of committing a type II
error is estimated by

BETA (type II = β)

generally set at 0.20

(i.e., the researcher accepts a 20% chance that they will incorrectly accept the null hypothesis).


19

A common reason for committing a type II error is

an underpowered study.

The study sample is too small to detect the difference that actually exists.
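Underpowering can be seen in a simulation (a sketch with made-up effect sizes; it uses a simplified two-sided z test on unit-variance normal data rather than a full t test):

```python
import math
import random
import statistics

random.seed(1)

def detection_rate(n_per_group, true_effect, trials=2000):
    """Fraction of simulated studies that reach significance at alpha = 0.05."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(true_effect, 1) for _ in range(n_per_group)]
        # Difference of means has standard error sqrt(2 / n) with unit variance
        z = (statistics.mean(b) - statistics.mean(a)) / math.sqrt(2 / n_per_group)
        if abs(z) > 1.96:
            hits += 1
    return hits / trials

power_small = detection_rate(20, 0.3)    # underpowered: misses the real effect most of the time
power_large = detection_rate(200, 0.3)   # larger sample detects it reliably
```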

20

Type I errors are committed when

the final conclusion states that a statistically significant difference exists between the population means when, in fact, one does not.

The null hypothesis (H0) is incorrectly rejected.

This is the most common error seen in publications (because people tend not to publish nonsignificant results).

21

The probability of
committing a type I error is predicted by

ALPHA (type I = α)

generally set at 0.05

(i.e., the researcher accepts a 5% chance of incorrectly rejecting the null hypothesis).