Chapter 5 - Identifying Good Measurements Flashcards

1
Q

define conceptual definition

A

researchers’ definition of a variable at the theoretical level, also known as a construct

2
Q

define operational definition

A

a researcher's specific decision about how to measure or manipulate the conceptual variable

3
Q

what is the process for operationalizing a conceptual variable?

A
  1. state a conceptual definition of the construct
  2. create an operational definition (decide how to measure or manipulate it)

4
Q

what are the 3 common types of measures?

A

self-report, observational and physiological

5
Q

define self-report measures

A

a method of measuring a variable in which people answer questions about themselves in a questionnaire or interview. responses to the items are often combined (e.g., averaged) to create a score for each person.

6
Q

define observational measures

A

a method of measuring a variable by recording observable behaviours or physical traces of behaviours

7
Q

define physiological measures

A

a method of measuring a variable by recording biological data

8
Q

what are the 2 types of operational variables, and how is each measured?

A

categorical: a variable whose levels are categories (ex. male and female)

quantitative: a variable whose values can be recorded as meaningful numbers

9
Q

what are the 3 kinds of quantitative variables?

A

ordinal, interval and ratio scale

10
Q

define ordinal scale

A

a scale whose levels represent a ranked order and in which distances between levels are not equal

ex. order of when people finish a race

11
Q

define interval scale

A

scale in which the numerals represent equal intervals between levels but there is no true zero

ex. shoe size (a shoe size of 0 would not mean "no foot size")

12
Q

define ratio scale

A

scale in which the numerals have equal intervals and the value of zero means none of the variable being measured

ex. measuring how many items people get right on a test; zero means getting nothing right

13
Q

define reliability

A

consistency of the results of a measure

14
Q

define validity

A

appropriateness of a conclusion or decision

15
Q

what are the 3 types of reliability?

A

test-retest, interrater, and internal

16
Q

define test-retest reliability

A

consistency in results every time a measure is used. it applies to all kinds of measures but is most relevant when the construct is expected to stay relatively stable over time.

ex. taking an IQ test on two occasions; the scores should be relatively consistent from one administration to the next.

17
Q

define interrater reliability

A

the degree to which two or more coders or observers give consistent ratings of a set of targets. most relevant for observational measures.

ex. counting how many times a child smiles in an hour; you and another observer should arrive at the same count

18
Q

define internal reliability

A

in a measure that has several items, the consistency in a pattern of answers, no matter how a question is phrased.

ex. on a 5-item scale where the items are worded differently but meant to measure the same thing, responses to the items should correlate

19
Q

define correlation coefficient

A

r, a statistic ranging from -1.0 to +1.0 that indicates the strength and direction of an association between two variables.
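
A minimal sketch (using made-up data, not from the chapter) of computing r with NumPy:

```python
import numpy as np

# hypothetical data: hours studied and exam scores for six people
hours = np.array([1, 2, 3, 4, 5, 6])
scores = np.array([52, 58, 61, 70, 72, 80])

# np.corrcoef returns a 2x2 correlation matrix; r is the off-diagonal entry
r = np.corrcoef(hours, scores)[0, 1]
print(round(r, 2))  # close to +1.0, i.e. a strong positive association
```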

20
Q

define average inter-item correlation

A

AIC is a measure of internal reliability for a set of items: the mean of all possible correlations computed between each item and the others.

Cronbach's alpha combines the average inter-item correlation with the number of items (low alpha = inconsistent items; high alpha = good internal reliability)
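
A minimal sketch (assuming a small made-up item-response matrix) of computing the AIC and the standardized Cronbach's alpha from it:

```python
import numpy as np

# hypothetical responses: 5 people (rows) x 3 items (columns) on a 1-5 scale
items = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [1, 2, 1],
])

n_items = items.shape[1]
corr = np.corrcoef(items, rowvar=False)           # item-by-item correlation matrix
aic = corr[np.triu_indices(n_items, k=1)].mean()  # mean of the off-diagonal correlations
alpha = (n_items * aic) / (1 + (n_items - 1) * aic)  # standardized alpha
print(round(aic, 2), round(alpha, 2))
```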

21
Q

define face validity

A

the extent to which a measure is subjectively considered a plausible operationalization of the conceptual variable in question.

i.e., it appears to measure what it is supposed to, according to someone who knows the content area

22
Q

define content validity

A

the extent to which a measure captures all parts of a defined construct

23
Q

define criterion validity

A

an empirical form of validity: the extent to which a measure is associated with a relevant behavioural outcome (criterion), typically one that can already be measured.

ex. good high school grades should correlate with high grades at university

24
Q

define known-groups paradigm

A

a method for establishing criterion validity in which a researcher tests 2 or more groups known to differ on the variable of interest, to confirm that the measure detects the difference between them.
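
A minimal sketch (with hypothetical scores and groups, not from the chapter) of checking a known-groups difference with an independent-samples t-test from SciPy:

```python
from scipy import stats

# hypothetical scores on a new depression questionnaire
diagnosed = [22, 25, 19, 27, 24, 21]    # group known to have depression
not_diagnosed = [8, 11, 7, 10, 12, 9]   # group known not to have depression

# if the measure has criterion validity, the two known groups should differ clearly
t_stat, p_value = stats.ttest_ind(diagnosed, not_diagnosed)
print(round(t_stat, 2), round(p_value, 4))
```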

25
Q

define convergent validity

A

an empirical test of the extent to which a self-report measure correlates with other measures of a similar construct. ask: what measures should it correlate with?

26
Q

define discriminant validity

A

an empirical test of the extent to which a self-report measure does not correlate with measures of dissimilar constructs. ask: what measures should it not correlate with?

27
Q

when is observational better than self-report and vice-versa?

A

Observational measures are better when participants are in situations where they might be influenced to change their responses for the researcher.

Self-report measures are better for internal attributes that cannot be directly observed, such as self-esteem.

28
Q

what are the two subjective validities?

A

face and content

29
Q

what are the 3 objective validities?

A

criterion, convergent, discriminant

30
Q

is reliability necessary?

A

yes, it is necessary but not sufficient for validity: a measure cannot be valid unless it is reliable, but a reliable measure is not automatically valid.