Scale development and psychometric properties Flashcards

1
Q

What types of variables do we measure in psychological research?

A

Behavior: observable and measurable; objective
Construct: a concept of interest that we measure through related variables; subjective

2
Q

Can we trust data from an inconsistent measure?

A

No

3
Q

Pearson’s r

A

Correlation coefficient used to evaluate the reliability of a measure. Should be .80 or higher (.90 is the gold standard, indicating 10% error variance)
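A minimal sketch of how r might be checked in practice, assuming two sets of scores from the same participants (the data and variable names below are hypothetical):

from scipy.stats import pearsonr  # assumes scipy is available

time1 = [12, 15, 9, 20, 17, 14, 11, 18]   # hypothetical scores, first administration
time2 = [13, 14, 10, 19, 18, 15, 10, 17]  # hypothetical scores, second administration

r, p = pearsonr(time1, time2)
print(f"r = {r:.2f}")  # .80+ is acceptable; .90 is the gold standard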

4
Q

test-retest

A

A type of reliability test in which you have the same person complete the same measure twice and then use r to evaluate the relationship. The higher the coefficient, the more reliable the measure.

5
Q

Alternate forms

A

A type of reliability test in which you create two versions of the same measure, administer both versions to the same participants, and then use r to evaluate the relationship

6
Q

Split-half

A

A type of reliability test in which you randomly split one measure into two halves, administer both halves to the same participants, and then use r to evaluate the relationship
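A minimal sketch of a split-half check, assuming a participants-by-items score matrix (all data here are hypothetical): the items are randomly split into two halves, each half is scored, and the halves are correlated with r.

import numpy as np  # assumes numpy is available

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(50, 10))  # hypothetical: 50 participants x 10 items

items = rng.permutation(scores.shape[1])    # shuffle the item order
half_a = scores[:, items[:5]].sum(axis=1)   # total score on one random half
half_b = scores[:, items[5:]].sum(axis=1)   # total score on the other half

r = np.corrcoef(half_a, half_b)[0, 1]       # Pearson's r between the two halves
print(f"split-half r = {r:.2f}")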

7
Q

Cronbach’s alpha

A

Evaluates the internal consistency of a measure

  • based on all possible split-half configurations
  • .70 is acceptable, .90 is the gold standard
  • values range from 0 to 1 and are interpreted like r
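A minimal sketch of the standard alpha formula applied to a participants-by-items matrix (the data and the helper name cronbach_alpha are hypothetical):

import numpy as np  # assumes numpy is available

def cronbach_alpha(scores: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(50, 10))  # hypothetical, uncorrelated responses, so alpha will be low
print(f"alpha = {cronbach_alpha(scores):.2f}")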
8
Q

Factors that affect reliability

A

test length
homogeneity of items
test-retest interval
variability of scores
variation in test situation
sample size
9
Q

interrater reliability

A

Used to evaluate the consistency of a subjective measure: two independent raters use the same measure for the same observation, then consistency between raters is evaluated with r
Especially common in single-case design (SCD) and academic intervention research

10
Q

reliability=

A

consistency

11
Q

What are the two issues in validity?

A

What a test measures, and how well it measures it

12
Q

Is validity all or nothing?

A

No. There are degrees of validity, and it requires ongoing consideration, because what we know about a construct changes and what counts as “normal” functioning changes

13
Q

How does Pearson’s r relate to validity?

A

We can use r to evaluate how valid a measure is by correlating it with other measures.

14
Q

Convergent construct validity

A

The measure correlates with other measures of similar constructs

15
Q

Discriminant construct validity

A

The measure does not correlate with measures of logically unrelated constructs

16
Q

Predictive validity

A

The measure correlates with related future outcomes (e.g., GRE: compare GRE scores to a measure of graduate school success)
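A minimal sketch of how the checks on the last three cards (convergent, discriminant, predictive) might look in practice, with entirely hypothetical scores and variable names:

from scipy.stats import pearsonr  # assumes scipy is available

new_measure     = [12, 15, 9, 20, 17, 14, 11, 18]          # hypothetical scores on the new scale
similar_measure = [11, 16, 10, 19, 18, 13, 12, 17]         # established scale for the same construct
unrelated       = [3, 7, 5, 4, 6, 2, 8, 5]                 # logically unrelated construct
later_outcome   = [2.9, 3.6, 2.5, 3.9, 3.7, 3.3, 2.8, 3.8] # related future outcome (e.g., later GPA)

# convergent validity: a valid measure should correlate highly with the similar measure
print("convergent r =", round(pearsonr(new_measure, similar_measure)[0], 2))
# discriminant validity: it should correlate weakly with the unrelated measure
print("discriminant r =", round(pearsonr(new_measure, unrelated)[0], 2))
# predictive validity: it should correlate with the related future outcome
print("predictive r =", round(pearsonr(new_measure, later_outcome)[0], 2))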

17
Q

Factors that affect validity

A

reliability
range of attributes being measured
length of interval between administration of test and criterion measure
range of variability in criterion measure
individual validity

18
Q

Why is it important to be critical when analyzing validity of measures?

A

The data you collect are interpreted to make high-stakes decisions for the individuals you serve. The data are only as good as the measures used to collect them.

19
Q

What is the difference between reliability and validity?

A

Consistency v. measuring what you intend to measure

20
Q

Considerations of survey development

A

the effects of how you word questions
the order of your questions
question format: qualitative or quantitative

21
Q

Guidelines for survey questions

A
  • each question should be short and clearly worded
  • use as few questions as possible
  • include control questions
  • keep questions simple, specific, exhaustive, individual (each question asks only one thing), optional (allow skips), and neutral and balanced (if using a scale, response options should be weighted around neutrality)
22
Q

What types of survey response types should you use?

A

Stay away from open response; use forced-choice formats such as multiple-choice or rating items

23
Q

Considerations for rating scales

A
  • consider validity of items
  • items should be short and clearly worded
  • some control items
  • no open response
  • scale should be the same for all items (Likert-type is most common; descriptive items across a similar scale, like the BDI, also work)
24
Q

What is the research finding connecting autism with vaccines an example of?

A

Type I error

25
Q

What is the difference between reliability and validity?

A

Validity indicates what a measure measures and how well it measures it. Reliability indicates the consistency of a measure.

26
Q

What is an acceptable coefficient when evaluating the psychometric properties of a measure?

A

.80