Quantitative Approaches Flashcards

1
Q

What are the essential steps in measurement?

A

Define the construct to be measured
- A construct is an abstract idea, theme, or subject matter that a researcher wants to measure. Because it is initially abstract, it must be defined.
Operationalize the construct through formal instrumentation
- Once you have defined the construct, you must determine the procedure/instrument for measuring, or operationalizing, the construct.
- Example: The researcher will use goniometric measurements to document the patients' AROM before and after treatment.

2
Q

What are the different scales of measurement?

A

Nominal scales
Ordinal scales
Interval scales
Ratio scales

3
Q

What are nominal scales?

A

A nominal scale is used to categorize characteristics of subjects.
Ex: Gender could be assigned to a category by number; females might be coded "1" and males "2."
This type of scale has no function other than to classify or categorize subjects.

4
Q

What are ordinal scales?

A

Used to classify ranked categories
These are numbers assigned to rank or quantify an observation, behavior, etc. that has no true "quantity"
The intervals between the ranks are not necessarily the same
Because the intervals are unequal, ordinal scores should not, strictly speaking, be subjected to arithmetic operations, although in practice they are often treated like interval scales.
A good example of an ordinal scale is the FIM (Functional Independence Measure).

5
Q

What are interval scales?

A

Interval scales have equal distances between units of measurement
Allows the researcher to determine relative differences
Do not contain an absolute true zero point that indicates the absence of a characteristic. If a zero point is identified on an interval scale, it has been arbitrarily assigned.
An example of an interval scale is the calendar year.

6
Q

What are ratio scales?

A

Demonstrate equal distances between units of measurement and they have an absolute zero point.
They indicate absolute amounts of measure.
All forms of mathematical and statistical operations can be performed with a ratio scale.

7
Q

What is a measurement error?

A

There is almost always some error in measurement. Measurement error is the difference between the true value of a variable and the value actually observed.
For example, goniometric measurements of the same joint taken by several different therapists will yield varying results.

8
Q

What are two types of measurement error?

A

Systematic error
Random error

9
Q

What is systematic error?

A

These are predictable errors.
They occur when the instrument you are using consistently overestimates or underestimates the true score in one direction.
Because we can adjust for these errors, they do not pose a threat to reliability, but they may pose a threat to validity because the error occurs on a consistent basis.

10
Q

What is random error?

A

These errors occur by chance and can affect a subject’s score in an unpredictable manner.
Factors that can contribute to random error include, but are not limited to:
- Fatigue of the subject
- Environmental influences
- Inattention of the subject or rater
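The contrast between the two error types can be sketched in a quick simulation (the goniometry numbers are hypothetical, chosen for illustration): random noise averages out over repeated trials, while a systematic bias shifts every measurement in the same direction.

```python
import random

random.seed(0)

TRUE_ROM = 90.0        # hypothetical true knee AROM in degrees
SYSTEMATIC_BIAS = 5.0  # e.g., a miscalibrated goniometer that over-reads by a fixed amount

def measure(n_trials: int) -> list[float]:
    """Simulate repeated measurements: true score + constant bias + random noise."""
    return [TRUE_ROM + SYSTEMATIC_BIAS + random.gauss(0, 2.0) for _ in range(n_trials)]

scores = measure(1000)
mean_score = sum(scores) / len(scores)

# Averaging many trials cancels the random error but not the systematic bias:
# the mean stays roughly 5 degrees above the true score.
print(round(mean_score - TRUE_ROM, 1))
```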

11
Q

How can you reduce measurement error?

A

We can use a standardized instrument as a measure
A standardized instrument uses a specific process or protocol for administering the assessment.
We can train the raters (testers) to carry out the assessment process in a specified manner by adhering to the protocol.

12
Q

What is reliability?

A

The degree of consistency with which an instrument or rater measures a variable

13
Q

What is a reliability coefficient?

A

The ratio of the true score variance to the total variance observed on an assessment
It ranges from 0.0 to 1.0, with 1.0 indicating there is no error.
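The ratio above can be written directly as code. This is a minimal sketch with made-up variance components; in practice the true score and error variances are estimated statistically rather than known.

```python
def reliability_coefficient(true_variance: float, error_variance: float) -> float:
    """Ratio of true score variance to total observed variance.

    Total variance = true score variance + error variance, so the
    coefficient approaches 1.0 as error variance approaches zero.
    """
    total_variance = true_variance + error_variance
    return true_variance / total_variance

# Hypothetical variance components for illustration
print(reliability_coefficient(true_variance=9.0, error_variance=1.0))  # 0.9
print(reliability_coefficient(true_variance=5.0, error_variance=5.0))  # 0.5
```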

14
Q

Empirical evaluation of an assessment is used to determine…

A

reliability of an assessment

15
Q

The assessment is empirically evaluated through what methods?

A

Test-retest reliability
Split-half reliability
Alternate (parallel) forms reliability, also called equivalency reliability
Internal consistency

16
Q

What is test-retest reliability?

A

A metric indicating whether an assessment provides consistent results when it is administered on two different occasions
When an instrument or assessment consistently provides the researcher with the same results, we can say that there is test-retest reliability.
Test-retest reliability is calculated from the two administrations of the assessment; the "Time 1 score" is the first variable, and the "Time 2 score" is the second variable.

17
Q

What is split-half reliability?

A

This is a technique used to assess the reliability of questionnaires:
Assessment items are divided into two smaller sections (usually by dividing them into odd and even items, or first half and last half)
Scores are then correlated from the two halves of the assessment.

18
Q

What are parallel forms of reliability?

A

When there are multiple versions of the same test, it is important to determine if each version of the test will provide consistent results.
In this measure of reliability, an assessment’s alternative forms are administered to subjects at the same time and then scores are correlated from the two forms of the assessment
A counterbalanced design might be utilized for testing

19
Q

What is internal consistency?

A

This is the extent to which the items that make up an assessment covary or correlate with each other. This may be referred to as the homogeneity of the assessment.
Do all items in the assessment accurately measure the same construct?
Internal consistency and construct validity are closely related.
High internal consistency supports construct validity, but it does not by itself establish it.
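Internal consistency is commonly quantified with Cronbach's alpha, a statistic not named on this card; this is a minimal sketch with hypothetical responses.

```python
from statistics import pvariance

def cronbach_alpha(responses: list[list[float]]) -> float:
    """Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / total variance).

    `responses` has one row per subject and one column per item.
    """
    k = len(responses[0])                                  # number of items
    item_vars = [pvariance(col) for col in zip(*responses)]
    total_scores = [sum(row) for row in responses]
    total_var = pvariance(total_scores)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical Likert-style responses: rows are subjects, columns are items
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [3, 3, 3, 4],
]
print(round(cronbach_alpha(responses), 3))  # high alpha: items covary strongly
```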

20
Q

What are rater/observer effects on reliability?

A

There are two sources of observer/rater error that are typically examined:
- Observer presence and characteristics – The presence of the rater may impact the behavior of the subjects (The Hawthorne effect)
- Rater bias – Bias may be introduced when one rater takes two or more measurements of the same item. The rater may be biased by remembering the score on the subject’s previous attempt/performance.

21
Q

What is inter-rater reliability?

A

When you have two or more raters who are assigning scores based on subject observation, there may be variations in the scores.
All raters should observe a single trial, either simultaneously or via video recordings of the subject’s performance.
Raters should not compare or collaborate on scores.
We would like to see agreement in the scores from all raters.
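Agreement between two raters is often quantified with Cohen's kappa, which corrects raw agreement for chance; the card itself does not name a statistic, and the ordinal scores below are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater1: list[int], rater2: list[int]) -> float:
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1 = Counter(rater1)
    freq2 = Counter(rater2)
    # Chance agreement: probability both raters pick the same category independently
    expected = sum(freq1[c] * freq2[c] for c in freq1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical FIM-style ordinal scores from two independent raters
rater_a = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]
rater_b = [5, 6, 4, 5, 6, 6, 5, 4, 6, 5]
print(round(cohens_kappa(rater_a, rater_b), 3))
```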

22
Q

What is validity?

A

Validity of a measurement means that the instrument being used measures what it is supposed to measure.

23
Q

What are the different types of validity?

A

Face validity
Content validity
Criterion validity
Construct validity

24
Q

What is face validity?

A

The assumption of validity of a measuring instrument based on its appearance as a reasonable measure of a given variable
An assessment has the appearance of measuring an underlying construct
The weakest evidence of validity; used alone, it is insufficient to demonstrate the
validity of an assessment
Face validity can be helpful in deciding whether or not to use an assessment. It may help to determine if an assessment is relevant to what we want to measure.

25
Q

What is content validity?

A

A type of measurement validity – the degree to which the items in an instrument adequately reflect the content domain being measured
Content validity is the adequacy with which an assessment is able to capture the construct it aims to measure
It is concerned with whether the items that make up the assessment are adequately relevant to the variables being measured.
Content validity is also concerned with ensuring that irrelevant content is excluded from the assessment.

26
Q

What is criterion validity?

A

The ability of an assessment to produce results that are in agreement with or predict a known criterion assessment or known variable.
It is important to select a criterion assessment for comparison that is recognized and demonstrated to have good reliability and validity. You compare your results to a test that is considered to be the “gold standard” assessment for that criterion

27
Q

Criterion validity includes what 2 types of evidence?

A
1. Concurrent validity – the degree to which the outcomes of one test correlate with outcomes on a criterion test, when both are given at the same time
2. Predictive validity – the degree to which an instrument predicts some future performance
28
Q

What is construct validity?

A

Does the assessment measure the construct that it is intended to measure?
This is the ultimate objective of all forms of empirically assessing validity
A type of measurement validity reflecting the degree to which an instrument measures the theoretical construct it is intended to measure