Chapter 3: Defining and Measuring Variables Flashcards

(24 cards)

1
Q

Theory

A

In the behavioural sciences, a set of statements about the mechanisms underlying a particular behaviour.

2
Q

Constructs (hypothetical constructs)

A

Hypothetical attributes or mechanisms in a theory that help the theory explain and predict behaviour. Also known as hypothetical constructs.

3
Q

Operational definition

A

A procedure for indirectly measuring and defining a variable that cannot be observed or measured directly. An operational definition specifies a measurement procedure (a set of operations) for measuring an external, observable behaviour and uses the resulting measurements as a definition and a measurement of the hypothetical construct.

4
Q

Validity

A

The degree to which the measurement process measures the variable it claims to measure.

5
Q

Face validity

A

An unscientific form of validity that concerns whether a measure superficially appears to measure what it claims to measure.

6
Q

Concurrent validity

A

The type of validity demonstrated when scores obtained from a new measure are directly related to scores obtained from a more established measure of the same variable.

7
Q

Predictive validity

A

The type of validity demonstrated when scores obtained from a measure accurately predict behavior according to a theory.

8
Q

Construct validity

A

The type of validity demonstrated when scores obtained from a measurement procedure behave exactly the same as the variable itself. Construct validity is based on many research studies and grows gradually as each new study contributes more evidence.

9
Q

Convergent validity

A

The type of validity demonstrated by a strong relationship between the scores obtained from two different methods of measuring the same construct.

10
Q

Divergent validity

A

A type of validity demonstrated by using two different methods to measure two different constructs. Convergent validity then must be shown for each of the two constructs. Finally, there should be little or no relationship between the scores obtained for the two different constructs when they are measured by the same method.
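
As a rough illustration of both convergent and divergent validity, the sketch below correlates made-up scores: two hypothetical methods measuring the same construct A, plus one measure of an unrelated construct B. All variable names and numbers are illustrative, not data from any real study.

```python
import numpy as np

# Hypothetical scores for 8 participants: two different methods measuring
# construct A (e.g. a self-report scale and an observational rating) and
# one measure of an unrelated construct B.
a_method1 = np.array([10, 14, 9, 16, 12, 15, 11, 13])
a_method2 = np.array([22, 27, 20, 30, 25, 28, 23, 26])
b_measure = np.array([5, 3, 6, 4, 2, 5, 3, 6])

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Convergent validity: two methods measuring the same construct should be
# strongly related.
print(f"convergent (A method 1 vs A method 2): r = {corr(a_method1, a_method2):.2f}")

# Divergent validity: measures of different constructs should show little
# or no relationship.
print(f"divergent (A method 1 vs B): r = {corr(a_method1, b_measure):.2f}")
```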

11
Q

Reliability

A

The degree of stability or consistency of measurements. If the same individuals are measured under the same conditions, a reliable measurement procedure will produce identical or nearly identical measurements.

12
Q

Test-retest reliability

A

The type of reliability found by comparing the scores obtained from two sequential measurements of the same individuals and calculating a correlation between the two sets of scores.
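
A minimal sketch of how such a correlation might be computed, assuming two numpy arrays holding scores for the same individuals on two occasions (the data are made up):

```python
import numpy as np

# Hypothetical scores from the same six individuals measured on two occasions.
time1 = np.array([12, 15, 9, 20, 14, 17])
time2 = np.array([13, 14, 10, 19, 15, 18])

# Test-retest reliability is commonly reported as the Pearson correlation
# between the two sets of scores.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability: r = {r:.2f}")
```

The same computation applies to parallel-forms reliability (next card), with the two score sets coming from alternate versions of the instrument rather than two occasions.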

13
Q

Parallel-forms reliability

A

The type of reliability established by comparing scores obtained by using two alternate versions of a measuring instrument to measure the same individuals and calculating a correlation between the two sets of scores.

14
Q

Inter-rater reliability

A

The degree of agreement between two observers who simultaneously record measurements of a behavior.
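
A small sketch of two common ways to quantify this agreement, simple percent agreement and Cohen's kappa, using made-up categorical codes from two observers:

```python
import numpy as np

# Hypothetical behaviour codes (0/1) assigned by two observers to the same 10 trials.
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Simple percent agreement: proportion of trials on which the codes match.
p_observed = np.mean(rater_a == rater_b)

# Cohen's kappa corrects that agreement for chance, using each rater's marginal rates.
categories = np.unique(np.concatenate([rater_a, rater_b]))
p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```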

15
Q

Split-half reliability

A

A measure of reliability obtained by splitting the items on a questionnaire or test in half, computing a separate score for each half, and then measuring the degree of consistency between the two scores for a group of participants.
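
A minimal sketch, assuming a small participants-by-items score matrix split into odd and even items; the Spearman-Brown step that projects the half-test correlation up to full-test length is a common convention, not the only option:

```python
import numpy as np

# Hypothetical item scores: 5 participants x 8 questionnaire items.
items = np.array([
    [3, 4, 3, 5, 4, 4, 3, 5],
    [2, 2, 1, 3, 2, 2, 2, 3],
    [5, 4, 5, 5, 4, 5, 5, 4],
    [1, 2, 2, 1, 2, 1, 2, 2],
    [4, 3, 4, 4, 3, 4, 4, 3],
])

# Split the items into two halves (odd vs. even items) and score each half.
half1 = items[:, 0::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

# Correlate the two half-scores across participants ...
r_half = np.corrcoef(half1, half2)[0, 1]

# ... then apply the Spearman-Brown correction to estimate full-test reliability.
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, Spearman-Brown estimate = {r_full:.2f}")
```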

16
Q

Ceiling effect

A

The clustering of scores at the high end of a measurement scale, allowing little or no possibility of increases in value; a type of range effect.

17
Q

Floor effect

A

The clustering of scores at the low end of a measurement scale, allowing little or no possibility of decreases in value; a type of range effect.
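
One quick way to spot either range effect is to check what proportion of scores piles up at the extremes of the measurement scale; the scale limits and data below are purely illustrative:

```python
import numpy as np

# Hypothetical scores on a 0-10 scale.
scores = np.array([9, 10, 10, 8, 10, 9, 10, 10, 7, 10])

scale_min, scale_max = 0, 10
at_ceiling = np.mean(scores == scale_max)  # clustering at the top suggests a ceiling effect
at_floor = np.mean(scores == scale_min)    # clustering at the bottom suggests a floor effect
print(f"{at_ceiling:.0%} of scores at the top of the scale, {at_floor:.0%} at the bottom")
```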

18
Q

Experimenter bias

A

The influence on the findings of a study from the experimenter’s expectations about the study. Experimenter bias is a type of artifact and threatens the validity of the measurement as well as both internal and external validity.

19
Q

Single-blind

A

A research study in which the researcher does not know the predicted outcome for any specific participant.

20
Q

Double-blind

A

A research study in which both the researcher and the participants are unaware of the predicted outcome for any specific participant.

21
Q

Demand characteristics

A

Any potential cues or features of a study that (1) suggest to the participants what the purpose and hypothesis are, and (2) influence the participants to respond or behave in a certain way. Demand characteristics are artifacts and can threaten the validity of the measurement, as well as both internal and external validity.

22
Q

Reactivity

A

Participants’ modification of their natural behavior in response to the fact that they are participating in a research study or the knowledge that they are being measured. Reactivity is an artifact and can threaten the validity of the measurement as well as both internal and external validity.

23
Q

Laboratory

A

A research setting that is obviously devoted to the discipline of science. It can be any room or space that the subject or participant perceives as artificial.

24
Q

Field

A

Any research setting that the participant or subject perceives as a natural environment.