Exam 2 Objectives Flashcards

1
Q

Distinguish among continuous, discrete, and dichotomous variables.

A

Continuous variables: can theoretically have any value along a continuum within a defined range.
- Ex: weight in pounds
Discrete variables: can only be described in whole integer units.
- Ex: heart rate in beats per minute (bpm)
Dichotomous variables: can take on only two values.
- Ex: yes or no on a survey

2
Q

Discuss the challenge of measuring constructs.

A

Constructs are subjective, abstract variables that cannot be measured directly.
They are measured according to expectations of how a person who possesses the specified trait would behave, look, or feel in certain situations.
A construct reflects something within a person; it does not exist as an externally observable event (it is a latent trait).

3
Q

Define and provide examples of the four scales of measurement.

A

Nominal: categories/classifications with no rank order.
- Ex: blood type, gender, diagnosis
Ordinal: numbers in rank order with inconsistent or unknown intervals; based on greater-than/less-than relationships.
- Ex: MMT grades, functional scales, pain ratings
Interval: numbers have rank order and equal intervals, but no true zero. Values can be added or subtracted, but cannot be used to interpret actual quantities.
- Ex: temperature in Fahrenheit or Celsius, shoe size
Ratio: numbers represent units with equal intervals measured from a true zero.
- Ex: height, weight, age

4
Q

Discuss the relevance of identifying measurement scales for statistical analysis.

A

Determination of which mathematical operations are appropriate.
Determination of which interpretations are meaningful.
Statistical procedures
- Parametric tests: apply mathematical manipulations (e.g., computing means and standard deviations), requiring interval or ratio data.
- Nonparametric tests: do not make the same assumptions and are designed to be used with ordinal and nominal data.
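
For illustration, a minimal sketch (assuming Python with SciPy; the data are invented) pairing a parametric test with its nonparametric counterpart on the same two groups:

```python
import numpy as np
from scipy import stats

# Invented outcome scores for two independent groups
group_a = np.array([23.0, 25.5, 21.0, 27.5, 24.0, 26.0])
group_b = np.array([19.5, 22.0, 18.0, 21.5, 20.0, 23.0])

# Parametric: independent-samples t-test (assumes interval/ratio data
# and roughly normal distributions)
t_stat, t_p = stats.ttest_ind(group_a, group_b)
print(f"t-test:       t={t_stat:.2f}, p={t_p:.3f}")

# Nonparametric counterpart: Mann-Whitney U test (rank-based; usable
# with ordinal data, no normality assumption)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney: U={u_stat:.1f}, p={u_p:.3f}")
```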

5
Q

Discuss the importance of reliability in clinical measurement.

A

Relative reliability: reflects true variance as a proportion of total variance in a set of scores.
- Intraclass correlation coefficients (ICC) and kappa coefficients are commonly used.
Absolute reliability: indicates how much of a measured value, expressed in the original units, is likely due to error.
- The standard error of measurement (SEM) is most commonly used.
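
As a sketch only (Python with NumPy; the ratings matrix is invented), one common relative-reliability index, ICC(2,1) (two-way random effects, absolute agreement, single measures), computed from the usual ANOVA mean squares:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    x: (n_subjects, k_raters) array of scores.
    """
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # one mean per subject
    col_means = x.mean(axis=0)  # one mean per rater

    # ANOVA mean squares
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                       # residual error

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented example: 5 subjects each rated by 3 raters
scores = np.array([[9.0, 10.0, 8.0],
                   [6.0,  7.0, 6.0],
                   [8.0,  8.0, 9.0],
                   [4.0,  5.0, 4.0],
                   [7.0,  8.0, 7.0]])
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```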

6
Q

Define reliability in terms of measurement error.

A

Classical measurement theory: any observed score consists of a true score (a fixed value) and an unknown error component (which may be small or large).
Difference between true score and observed value = measurement error
Observed score = true score ± error component
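
A small simulation (Python with NumPy; the variances are invented illustration values) showing that with purely random error, reliability corresponds to true-score variance as a proportion of observed variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true = rng.normal(50, 5, n)   # fixed true scores (variance 25)
error = rng.normal(0, 3, n)   # random error component (variance 9)
observed = true + error       # observed score = true score + error

# Reliability = true variance / total observed variance
# Expected here: 25 / (25 + 9), roughly 0.74
print(f"empirical reliability: {true.var() / observed.var():.3f}")
```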

7
Q

Distinguish between random and systematic error.

A

Random errors are a matter of chance, possibly arising from factors such as examiner or subject inattention, instrument imprecision, or unanticipated environmental fluctuation.
- Imprecise instruments or environmental changes affecting instrument performance can also contribute to random error.
Systematic errors: predictable errors of measurement that occur in one direction, consistently overestimating or underestimating the true score.
- Because systematic errors are consistent, they are not a threat to reliability; they threaten only the validity of a measure.
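
A brief illustration (Python with NumPy; the bias and noise magnitudes are invented) of why random error threatens reliability (spread) while systematic error threatens validity (a consistent shift):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 40.0  # the quantity being measured (invented)

random_only = true_value + rng.normal(0, 2.0, 1000)       # unbiased but noisy
systematic = true_value + 5.0 + rng.normal(0, 0.1, 1000)  # precise but biased

print(f"random error:     mean={random_only.mean():.1f}, SD={random_only.std():.2f}")
print(f"systematic error: mean={systematic.mean():.1f}, SD={systematic.std():.2f}")
# The biased series is highly repeatable (reliable) yet consistently
# overestimates the true score by about 5 units (a validity problem).
```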

8
Q

Describe typical sources of measurement error.

A

The person taking the measurements (the raters).
The measuring instrument.
Variability in the characteristic being measured.

9
Q

Describe the effect of regression toward the mean in repeated measurement.

A

Regression toward the mean (RTM): a statistical phenomenon that arises when extreme scores are used in the calculation of measured change. Extreme scores on an initial test are expected to move closer (regress) toward the group average (mean) on a second test, as the simulation below shows.
RTM, if not considered, can interfere when researchers try to extrapolate results observed in a small sample to a larger population of interest.
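
A quick simulation (Python with NumPy; the test-retest correlation of 0.7 is an assumed value) in which subjects selected for extreme initial scores drift back toward the mean on retest even though nothing about them changed:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, sd = 100_000, 100.0, 15.0
r = 0.7  # assumed test-retest correlation

true = rng.normal(mu, sd * np.sqrt(r), n)
test1 = true + rng.normal(0, sd * np.sqrt(1 - r), n)
test2 = true + rng.normal(0, sd * np.sqrt(1 - r), n)

extreme = test1 > mu + 2 * sd  # select extreme initial scorers
print(f"test 1 mean of extreme group: {test1[extreme].mean():.1f}")
print(f"test 2 mean of same group:    {test2[extreme].mean():.1f}")  # closer to 100
```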

10
Q

Discuss how concepts of agreement and correlation relate to reliability.

A

Agreement reflects the degree to which raters or repeated trials assign the same scores; correlation reflects the degree to which scores vary together in a consistent pattern. Scores can be highly correlated yet systematically different, so demonstrating reliability requires evidence of both agreement and correlation, as the sketch below shows.
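
A tiny illustration (Python with NumPy; the ratings are invented) of correlation without agreement: rater B tracks rater A perfectly but scores 5 points higher every time:

```python
import numpy as np

rater_a = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
rater_b = rater_a + 5.0  # systematic offset: perfectly correlated, poor agreement

r = np.corrcoef(rater_a, rater_b)[0, 1]
mean_abs_diff = np.abs(rater_a - rater_b).mean()
print(f"Pearson r = {r:.2f}")                      # 1.00: perfect correlation
print(f"mean |difference| = {mean_abs_diff:.1f}")  # 5.0: raters never agree
```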

11
Q

Define and provide examples of test-retest, rater, and alternate forms reliability.

A

Test-retest: determines the ability of an instrument to measure subject performance consistently.
Test-retest intervals: time intervals between tests must be considered.
Carryover: practice or learning during the initial trial alters performance on subsequent trials.
Testing effects: when the test itself is responsible for observed changes in a measured variable.
Rater reliability: training and standardization may be necessary for rater(s); the instrument and the response variable are assumed to be stable so that any differences between scores on repeated tests can be attributed solely to rater error.
Intrarater: stability of data recorded by one tester across two or more trials.
Interrater: consistency of data recorded by two or more raters who measure the same subjects.
Alternate forms reliability: also called equivalent or parallel forms; assesses agreement between scores on comparable versions of a test; used as an alternative to test-retest reliability, with the comparable versions minimizing the threat posed when subjects recall their responses.

12
Q

Discuss how generalizability theory influences the interpretation of reliability.

A

Reliability exists in a context: it is relevant to a tool's specific application.
Reliability is not all-or-none: it exists to some extent in any instrument.

13
Q

Discuss how reliability is related to the concept of minimal detectable difference.

A

Minimal detectable change (MDC): the amount of change that goes beyond error.
Greater reliability = smaller MDC.
MDC is based on the standard error of measurement (SEM).
SEM: the most commonly used reliability index; it provides a range of scores within which the true score for a given test is likely to lie.
MDC is also known as minimal detectable difference, smallest real difference, smallest detectable change, coefficient of repeatability, or the reliability change index.
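
A minimal sketch (Python; the SD and ICC values are invented for illustration) of the usual formulas SEM = SD * sqrt(1 - reliability) and MDC95 = 1.96 * sqrt(2) * SEM, showing that higher reliability shrinks the MDC:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustration: sample SD of 8 points, ICC of 0.90 vs 0.75
for icc in (0.90, 0.75):
    s = sem(8.0, icc)
    print(f"ICC={icc:.2f}: SEM={s:.2f}, MDC95={mdc95(s):.2f}")
# Higher reliability (ICC) yields a smaller SEM and thus a smaller MDC.
```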

14
Q

Discuss the importance of validity in clinical measurement.

A

Validity relates to the confidence we have that our measurement tools are giving us accurate information about a relevant construct so that we can apply results in a meaningful way.
Used to measure progress toward goals and outcomes.
Needs to be capable of…
- discriminating among individuals with and without certain traits, diagnoses, or conditions
- evaluating the magnitude or quality of a variable
- making accurate predictions about a patient's future status

15
Q

Define and provide examples of face, content, criterion-related, and construct validity.

A

Face validity: implies that an instrument appears to test what it is intended to test.
- Judgment by the users of a test after the test is developed.
Content validity: establishes that the items of a multi-item instrument or scale adequately sample the universe of content that defines the construct being measured.
- The items must adequately represent the full scope of the construct being studied.
- The number of items that address each component should reflect the relative importance of that component.
- The test should not contain irrelevant items.
Criterion-related validity: Comparison of the results of a test to an external criterion
- Index test AND Gold or reference standard as the criterion
- Two types:
* Concurrent validity: the index test correlates with the reference standard measured at the same time.
* Predictive validity: the index test predicts a criterion measured at a later time.
Construct validity: reflects the ability of an instrument to measure the abstract, theoretical dimensions of a construct.
- Assessing presence of a latent trait
- Methods of construct validation
* Known-groups method, Convergence and divergence, Factor analysis.
* Convergent validity: the extent to which a test correlates with other tests of closely related constructs.
* Discriminant validity: the extent to which a test is uncorrelated with tests of distinct or contrasting constructs.

16
Q

Discuss issues affecting validity of measuring change.

A

Change scores used to:
- Demonstrate effectiveness of an intervention
- Track the course of a disorder over time.
- Provide a context for clinical decision making.
Concern: the ability of an instrument to reflect change at the extremes of a scale.
- Floor effect: differences in scores cannot be detected when a participant's score is already at the low end of the instrument's range, as the sketch below illustrates.
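
A compact illustration (Python with NumPy; the scale bounds and scores are invented) of a floor effect hiding real change:

```python
import numpy as np

def measure(true_value):
    """Invented instrument bounded at 0-100: scores clip at the floor."""
    return np.clip(true_value, 0, 100)

# A low-functioning patient improves by 8 points, but both values fall
# below the instrument's floor, so the recorded change is zero.
before, after = -10.0, -2.0
print(f"recorded change: {measure(after) - measure(before):.0f}")  # 0

# A mid-range patient with the same true improvement shows it fully.
before, after = 40.0, 48.0
print(f"recorded change: {measure(after) - measure(before):.0f}")  # 8
```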

17
Q

Define minimal clinically important change.

A

Minimal clinically important difference (MCID) — The smallest difference that signifies an important difference in a patient’s condition.
Reflects a test's validity; can be helpful when choosing an instrument, setting goals, and determining treatment success.

18
Q

Distinguish between criterion and norm referencing.

A

Norm-referenced test: standardized assessment designed to compare and rank individuals within a defined population.
Criterion-referenced test: interpreted according to a fixed standard that represents an acceptable level of performance.
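
A small contrast (Python with NumPy; the scores and cutoff are invented) between a norm-referenced interpretation (rank within a reference group) and a criterion-referenced interpretation (comparison to a fixed standard):

```python
import numpy as np

scores = np.array([55, 62, 70, 74, 81, 88, 93])  # invented reference group
patient_score = 74

# Norm-referenced: where does the patient fall within the reference group?
percentile = (scores < patient_score).mean() * 100
print(f"norm-referenced: percentile rank = {percentile:.0f}")

# Criterion-referenced: does the patient meet a fixed performance standard?
cutoff = 80  # invented passing standard
verdict = "meets" if patient_score >= cutoff else "does not meet"
print(f"criterion-referenced: {verdict} the {cutoff}-point standard")
```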

19
Q

Describe the role of surveys in clinical research.

A

A set of questions that elicits quantitative or qualitative responses.

20
Q

Describe the basic structure of survey instruments.

A

Questionnaires: a standardized survey, usually self-administered, that asks individuals to respond to a series of questions.
Interviews: the researcher asks respondents specific questions and records the answers.
- structured, semi-structured, unstructured

21
Q

Describe the process of designing a survey.

A

Research question
Literature review
Questions and hypotheses
Content development
Use of existing instruments
Expert review of draft questions
Pilot testing
Revisions

22
Q

Discuss the characteristics of good survey questions.

A

Open-ended: ask respondents to answer in their own words; useful in identifying feelings, opinions, and biases.
Closed-ended: ask respondents to select and answer from among several fixed choices.
- Typical formats: multiple choice (e.g., two choices, check all that apply, or 3-5 options), checklists, measurement scales, visual analog scales.
Every question should be answerable by every subject.
Questions should be easy to answer.
Consider recall of information.
Consider if respondents will be honest.
Try to use a variety of question types.
Questions should generate varied responses.

Question wording
- Purposeful language
- Avoid bias
- Clarity
- Avoid double-barreled questions
- Frequency and time measures
- Sensitive questions