Chapter 5 - Measurement Flashcards

1
Q

Types of Measures

A

Self-report measure
- A statement or series of answers to questions that an individual provides about his or her state, feelings, thoughts, beliefs, past behaviors, and so forth.

Physiological measures
- Any of a set of instruments that convey precise information about an individual’s bodily functions, such as heart rate, skin conductance, skin temperature, cortisol level, palmar sweat, and eye tracking.

Behavioral measures
- The systematic study and evaluation of an individual’s behavior using a wide variety of techniques, including direct observation, interviews, and self-monitoring.

2
Q

Neuroimaging

A

The use of various technologies to noninvasively study the structures and functions of the brain.

These technologies include:
* magnetic resonance imaging (MRI)
* functional magnetic resonance imaging (fMRI)
* diffusion-weighted magnetic resonance imaging (DWI)
* computed tomography (CT)
* positron emission tomography (PET)

3
Q

Sustained Attention to Response Task

A

The Sustained Attention to Response Task (SART) is a computer-based go/no-go task that requires participants to withhold a behavioral response to a single, infrequent target presented amongst a background of frequent non-targets.

4
Q

Classical Test Theory

A

The theory that an observed score (e.g., a test result) that is held to represent an underlying attribute may be divided into two quantities:
1. the true value of the underlying attribute
2. the error inherent to the process of obtaining the observed score.
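The two-part decomposition above can be sketched in a short simulation (all numbers hypothetical): each observed score is generated as a true score plus random error, and reliability falls out as the share of observed-score variance due to true-score variance.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000                       # simulated test-takers (hypothetical)
true = rng.normal(50, 10, n)     # true values of the underlying attribute
error = rng.normal(0, 5, n)      # random error in obtaining the score
observed = true + error          # classical test theory: observed = true + error

# Reliability under classical test theory: the proportion of
# observed-score variance attributable to true-score variance.
reliability = true.var() / observed.var()
print(round(reliability, 2))     # close to 100 / (100 + 25) = 0.8
```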

5
Q

True Score

A

That part of a measurement or score that reflects the actual amount of the attribute possessed by the individual being measured.

6
Q

Measurement Error

A

Any difference between an observed score and the true score.

Measurement error may arise from flaws in the assessment instrument, mistakes in using the instrument, or random or chance factors.

7
Q

Random Error

A

Error that is due to chance alone.

Random errors are nonsystematic and occur arbitrarily when unknown or uncontrolled factors affect the variable being measured or the process of measurement.

Such errors are generally assumed to form a normal distribution around a true score.

8
Q

Systematic Error

A

Error in which the data values obtained from a sample deviate by a fixed amount from the true values within the population.

Systematic errors tend to be consistently positive or negative and may occur as a result of sampling bias or measurement error.
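The contrast between random and systematic error can be illustrated with a hypothetical thermometer example: random error averages out across many readings, while a systematic error shifts every reading by the same fixed amount.

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 98.6                        # hypothetical true temperature
random_err = rng.normal(0, 0.5, 1000)    # nonsystematic, mean-zero noise
systematic_err = 0.4                     # fixed bias (e.g., miscalibration)

random_only = true_value + random_err
biased = true_value + random_err + systematic_err

# Random error averages out over many measurements...
print(round(random_only.mean(), 1))      # ~98.6
# ...but systematic error deviates by a fixed amount from the true value.
print(round(biased.mean(), 1))           # ~99.0
```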

9
Q

Reliability

A

The trustworthiness or consistency of a measure, that is, the degree to which a test or other measurement instrument is free of random error, yielding the same results across multiple applications to the same sample.

10
Q

Internal Consistency Reliability

A

The degree of interrelationship or homogeneity among the items on a test, such that they are consistent with one another and measure the same thing.
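Internal consistency is commonly quantified with Cronbach's alpha; a minimal implementation, applied to a small hypothetical respondents-by-items score matrix, looks like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 5 respondents x 4 items on the same scale.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
# High alpha: the items are homogeneous and "hang together".
print(round(cronbach_alpha(scores), 2))  # → 0.96
```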

11
Q

Inter-Rater Reliability

A

The extent to which independent evaluators produce similar ratings in judging the same abilities or characteristics in the same target person or object.
- It is often expressed as a correlation coefficient (r).

If consistency is high, a researcher can be confident that similarly trained individuals would likely produce similar scores on a target of the same kind.

If consistency is low, there is little confidence that the obtained scores could be reproduced with a different set of raters.

12
Q

Test-Retest Reliability

A

A measure of the consistency of results on a test or other assessment instrument over time, given as the correlation of scores between the first and second administrations.

It provides an estimate of the stability of the construct being evaluated.
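Computing test-retest reliability is just the correlation between the two administrations; a sketch with hypothetical scores for six people:

```python
import numpy as np

# Hypothetical scores for the same 6 people at two administrations.
time1 = np.array([12, 18, 25, 9, 30, 21])
time2 = np.array([14, 17, 27, 10, 28, 22])

# Test-retest reliability: correlation of scores between the
# first and second administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # → 0.98, suggesting a stable construct
```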

13
Q

Inter-Method Reliability

A

A measure of the consistency and freedom from error of a test as indicated by a correlation coefficient obtained from responses to two or more alternate forms of the test.

Also called alternate-forms or parallel-forms reliability.

14
Q

Validity

A

The degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of conclusions drawn from some form of assessment.

Validity has multiple forms, depending on the research question and on the particular type of inference being made.

15
Q

Construct Validity

A

The degree to which a test or instrument is capable of measuring a concept, trait, or other theoretical entity.

16
Q

Face Validity

A

The apparent soundness of a test or measure.

The face validity of an instrument is the extent to which the items or content of the test appear to be appropriate for measuring something, regardless of whether they actually are.

17
Q

Content Validity

A

The extent to which a test measures a representative sample of the subject matter or behavior under investigation.

A form of construct validity evaluated by comparing the content of the measure to the theoretical definition of the construct, ensuring that all aspects of the construct are measured and no extraneous elements are also measured.

If it looks like a duck, swims like a duck, and quacks like a duck, is it a duck?

18
Q

Predictive Validity

A

Evidence that a test score or other measurement correlates with a variable that can only be assessed at some point after the test has been administered or the measurement made.

19
Q

Concurrent Validity

A

The extent to which one measurement is backed up by a related measurement obtained at about the same point in time.

In testing, the validity of results obtained from one test can often be assessed by comparison with a separate but related measurement collected at the same point in time.

20
Q

Convergent Validity

A

The extent to which responses on a test or instrument exhibit a strong relationship with responses on conceptually similar tests or instruments.

21
Q

Discriminant Validity

A

The degree to which a test or measure diverges from (i.e., does not correlate with) another measure whose underlying construct is conceptually unrelated to it.
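Convergent and discriminant validity can be checked together by comparing correlations; in this hypothetical sketch, two anxiety questionnaires share a construct while a vocabulary test is conceptually unrelated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Hypothetical measures: two anxiety questionnaires (similar constructs)
# and a vocabulary test (unrelated construct).
anxiety = rng.normal(0, 1, n)
anxiety_alt = anxiety + rng.normal(0, 0.5, n)  # shares the anxiety construct
vocabulary = rng.normal(0, 1, n)               # independent construct

# Convergent validity: strong correlation with the similar measure.
r_convergent = np.corrcoef(anxiety, anxiety_alt)[0, 1]
# Discriminant validity: near-zero correlation with the unrelated measure.
r_discriminant = np.corrcoef(anxiety, vocabulary)[0, 1]
print(round(r_convergent, 2))   # high, ~0.9
print(round(r_discriminant, 2)) # near 0
```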

22
Q

Key Points: Reliability and Validity

A
1. Reliability is necessary (but not sufficient) to establish construct validity.
2. Construct validity is not necessary to establish reliability.
3. Reliability and indicators of validity (face, content, predictive, etc.) are together necessary to establish construct validity.