Measurement and Survey Research: Ch. 5 & 7 Flashcards

1
Q

Level of Measurement

A

The relationship between numerical values on a measure. There are different levels of measurement (nominal, ordinal, interval, ratio) that determine how you can treat the measure when analyzing it. For instance, it makes sense to compute an average of an interval or ratio variable, but not of a nominal or ordinal one.

2
Q

Nominal Level of Measurement

A

Measuring a variable by assigning a number arbitrarily in order to name it numerically so that it can be distinguished from other objects. The jersey numbers in most sports are measured at a nominal level.

3
Q

Ordinal Level of Measurement

A

Measuring a variable using rankings. Class rank is a variable measured at an ordinal level.

4
Q

Interval Level of Measurement

A

Measuring a variable on a scale where the distance between numbers is interpretable. For instance, temperature in Fahrenheit or Celsius is measured on an interval level.

5
Q

Ratio Level of Measurement

A

Measuring a variable on a scale where the distance between numbers is interpretable and there is an absolute zero value. For example, weight is a ratio measurement.

6
Q

A theory that maintains that an observed score is the sum of two components: the true ability (or true level) of the respondent and random error.

A

True Score Theory

7
Q

Random Error

A

A component or part of the value of a measure that varies entirely by chance. Random error adds noise to a measure and obscures the true value.

8
Q

Systematic Error

A

A component of an observed score that consistently affects responses across the distribution, biasing the measure in one direction.

9
Q

Triangulate

A

Combining multiple independent measures to arrive at a more accurate estimate of a variable.

10
Q

Inter-rater or inter-observer reliability

A

The degree of agreement or correlation between the ratings or codings of two independent raters or observers of the same phenomenon.

11
Q

Test-retest reliability

A

The correlation between scores on the same test or measure at two successive time points.

12
Q

Parallel-forms reliability

A

The correlation between two versions of the same test or measure that were constructed in the same way, usually by randomly selecting items from a common test question pool.

13
Q

Internal consistency reliability

A

A correlation that assesses the degree to which items on the same multi-item instrument are interrelated. The most common forms of internal consistency reliability are the average inter-item correlation, the average item-total correlation, the split-half correlation, and Cronbach's Alpha.

14
Q

Cohen’s Kappa

A

A statistical estimate of inter-rater agreement or reliability that is more robust than percent agreement because it adjusts for the probability that some agreement is due to random chance.
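To make the chance correction concrete, here is a minimal sketch of the calculation, kappa = (p_o − p_e) / (1 − p_e), using invented codings from two hypothetical raters (all data made up for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Made-up codings of eight items by two raters.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
kappa = cohens_kappa(a, b)  # 0.75 observed vs. 0.5 chance agreement -> 0.5
```

Here the raters agree on 6 of 8 items (75%), but because half that agreement is expected by chance, kappa is only 0.5, which is why kappa reads lower, and more honestly, than raw percent agreement.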

15
Q

Average inter-item correlation

A

An estimate of internal consistency reliability that uses the average of the correlations of all pairs of items.

16
Q

Average item-total correlation

A

An estimate of internal consistency reliability where you first create a total score across all items and then compute the correlation of each item with the total. The average item-total correlation is the average of those individual item-total correlations.

17
Q

Split-half reliability

A

An estimate of internal consistency reliability that uses the correlation between the total scores of two randomly selected halves of the same multi-item test or measure.

18
Q

Cronbach’s Alpha

A

One specific method of estimating the internal consistency reliability of a measure. Although not calculated in this manner, Cronbach's Alpha can be thought of as analogous to the average of all possible split-half correlations.
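The standard computational form (not the split-half averaging itself) is alpha = (k / (k − 1)) × (1 − sum of item variances / variance of total scores). A sketch with invented data:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(scores[0])
    # Sum of the variances of the individual item columns.
    item_var_sum = sum(pvariance(col) for col in zip(*scores))
    # Variance of each respondent's total score across all items.
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Made-up scale data: rows = respondents, columns = four items.
scores = [
    [4, 5, 3, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 4, 3, 3],
]
alpha = cronbach_alpha(scores)  # highly interrelated items -> alpha near 1
```

When items are strongly interrelated, the total-score variance greatly exceeds the sum of the item variances and alpha approaches 1; uncorrelated items drive alpha toward 0.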

19
Q

Translation validity

A

A type of construct validity related to how well you translated the idea of your measure into its operationalization.

20
Q

Criterion-related validity

A

The validation of a measure based on its relationship to another independent measure, as predicted by your theory of how the measures should behave.

21
Q

Face validity

A

A validity check of whether, “on its face,” the operationalization seems like a good translation of the construct.

22
Q

Content validity

A

A check of the operationalization against the relevant content domain for the construct.

23
Q

Concurrent validity

A

An operationalization’s ability to distinguish between groups that it should theoretically be able to distinguish between.

24
Q

Convergent validity

A

The degree to which the operationalization is similar to (converges on) other operationalizations to which it should be theoretically similar.

25
Q

Discriminant validity

A

The degree to which the operationalization is not similar to (or diverges from) other operationalizations that it theoretically should not be similar to.

26
Q

Threats to construct validity

A

Any factor that causes you to draw an incorrect conclusion about whether your operationalized variables (e.g., your program or outcome) accurately reflect the constructs they are intended to represent.

27
Q

Mono-operation bias

A

A threat to construct validity that occurs when you rely on only a single implementation of your independent variable, cause, program, or treatment in your study.

28
Q

Mono-method bias

A

A threat to construct validity that occurs because you use only a single method of measurement.

29
Q

Hypothesis guessing

A

A threat to construct validity and a source of bias in which participants guess the purpose of the study and adjust their responses accordingly.

30
Q

Dichotomous response format

A

A question response format that allows the respondent to choose between only two possible responses.

31
Q

Nominal response format

A

A response format that has a number beside each choice where the number has no meaning except as a placeholder for that response.

32
Q

Ordinal response format

A

A response format in which respondents are asked to rank the possible answers in order of preference.

33
Q

Interval-level response format

A

A response measured using numbers spaced at equal intervals where the size of the interval between potential response values is meaningful. An example would be a 1-to-5 response scale.

34
Q

Likert-type response scale

A

A response format where responses are gathered using numbers spaced at equal intervals.

35
Q

Filter or contingency question

A

A question you ask the respondents to determine whether they are qualified or experienced enough to answer a subsequent one.

36
Q

Double-barreled question

A

A question in a survey that asks about two issues but allows the respondent only a single answer. For instance, the question “What do you think of proposed changes in benefits and hours in your workplace?” asks simultaneously about two issues but treats them as though they are one.

37
Q

Response bracket

A

A question response format that includes groups of answers, such as between 30 and 40 years old, or between $50,000 and $100,000 annual income.

38
Q

Response format

A

The format you use to collect the answer from the respondent.

39
Q

Structured response format

A

A response format that provides a specific set of options from which the respondent chooses an answer. For example, a checkbox question lists all of the possible responses.

40
Q

Multi-option or multiple-response variable

A

A question format in which the respondent can select multiple responses from a list.

41
Q

Single-option variable

A

A question response list from which the respondent can check only one response.

42
Q

Unstructured response format

A

A response format that is not predetermined and that allows the respondent or interviewer to determine how to respond. An open-ended question is a type of unstructured response format.