Chpt 5: Measurement Concepts (PSY302) Flashcards
Def: reliability
the consistency or stability of a measure.
In order for something to be _______, it needs to not _______ from one reading to the next.
reliable, fluctuate
Def: true score
the real, or true value on a given variable
Def: measurement error
the degree to which a measurement score deviates from the true score value.
An unreliable measure of intelligence has considerable measurement error.
This shows up as greater variability in scores when the unreliable test is administered repeatedly to the same person.
Reliability is most likely to be achieved when researchers use __________ measurements in their procedures.
careful
In order to ________ reliability, you must make __________ measures.
increase, multiple
Def: Pearson Product-Moment Correlation Coefficient
a common method of calculating a correlation coefficient, used with ratio & interval scale data.
What is the symbol for the Pearson Product-Moment Correlation Coefficient?
r
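A minimal sketch of computing r in Python (numpy), using made-up interval-scale scores; the arrays x & y are hypothetical, not from the text:

import numpy as np

# hypothetical interval-scale scores on 2 variables for the same 6 ppl
x = np.array([10.0, 12, 9, 15, 11, 14])
y = np.array([22.0, 25, 20, 30, 24, 27])

# r = covariance of x & y divided by the product of their standard deviations
r = np.cov(x, y)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
print(round(r, 3), round(np.corrcoef(x, y)[0, 1], 3))  # the 2 values should match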
Def: test-retest reliability
assessed by measuring the same ppl at 2 pts in time & comparing the results.
A high correlation between test & retest indicates reliability.
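A hedged sketch of the test-retest calculation, assuming hypothetical scores for the same ppl at time 1 & time 2:

import numpy as np

# hypothetical scores for the same 5 ppl measured at 2 pts in time
time1 = np.array([30.0, 25, 40, 35, 28])
time2 = np.array([32.0, 24, 41, 33, 29])

# test-retest reliability = correlation of time-1 scores with time-2 scores
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(round(r_test_retest, 3))  # a high r suggests the measure is reliable

The alternate forms correlation below would be computed the same way, just with scores on form A & form B in place of time 1 & time 2.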
Def: alternate forms reliability
uses 2 forms of the same test given to the same ppl at 2 points in time.
This avoids issues with participants remembering & repeating earlier responses.
If many participants' 2 scores are very similar, you can conclude that the measure reflects ________ scores instead of measurement _______.
true, error
Def: internal consistency reliability
the assessment of reliability using responses at only 1 pt in time.
Bc all items measure the same variable, they should yield similar or consistent results.
Def: split-half reliability
the correlation of the total score on ½ of the test with the total score on the other.
Both halves are created by randomly dividing items into 2 groups.
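A rough sketch of split-half reliability with simulated item data (each item = a shared true score plus random measurement error); the data & the random split are illustrative only:

import numpy as np

rng = np.random.default_rng(0)

# hypothetical data: rows = 50 ppl, columns = 10 items (true score + error)
true_score = rng.normal(0, 1, (50, 1))
items = true_score + rng.normal(0, 1, (50, 10))

# randomly divide the items into 2 groups & total each half
order = rng.permutation(items.shape[1])
half1 = items[:, order[:5]].sum(axis=1)
half2 = items[:, order[5:]].sum(axis=1)

# split-half reliability = correlation of the 2 half totals
print(round(np.corrcoef(half1, half2)[0, 1], 3))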
What’s one con to split-half reliability?
One con is that this is based on only 1 of many possible ways of dividing the items into halves.
A _______ correlation indicates that the questions on the test are measuring the _______ thing.
high, same
Def: item-total correlation
the correlation of scores on each individual item with the total score based on all items.
A large number of correlation coefficients are produced, one providing info abt each individual item.
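A hedged sketch of item-total correlations, using the same kind of simulated item data; item_total_r holds 1 coefficient per item:

import numpy as np

rng = np.random.default_rng(1)

# hypothetical data: rows = 100 ppl, columns = 8 items (true score + error)
items = rng.normal(0, 1, (100, 1)) + rng.normal(0, 1, (100, 8))
total = items.sum(axis=1)

# correlate each individual item with the total score -> 1 value per item
item_total_r = [np.corrcoef(items[:, i], total)[0, 1] for i in range(items.shape[1])]
print(np.round(item_total_r, 2))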
Def: interrater reliability
the extent to which raters agree in their observations.
A high correlation indicates raters agree in their ratings.
A commonly used indicator is Cohen’s kappa.
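A small sketch of Cohen's kappa for 2 raters, using hypothetical category codes; kappa corrects the observed agreement for agreement expected by chance:

import numpy as np

# hypothetical codes assigned to the same 10 observations by 2 raters
rater1 = np.array(["agg", "agg", "play", "play", "agg", "play", "agg", "play", "play", "agg"])
rater2 = np.array(["agg", "play", "play", "play", "agg", "play", "agg", "play", "agg", "agg"])

# observed agreement: proportion of observations coded the same by both raters
p_o = np.mean(rater1 == rater2)

# chance agreement: sum over categories of the product of each rater's marginal proportions
cats = np.unique(np.concatenate([rater1, rater2]))
p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in cats)

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 3))  # 0.8 raw agreement shrinks to 0.6 after the chance correction

(scikit-learn's cohen_kappa_score gives the same value if you'd rather not compute it by hand.)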
How to assess test-retest reliability:
A measure is taken 2x. The correlation of a score at time 1 with the score at time 2 represents test-retest reliability.
The correlation between 2 versions of a measure is called the alternate forms reliability.
Def: Cronbach’s Alpha
the correlation of each item on the measure with every other item on the measure; the resulting value is known as the Cronbach's Alpha reliability coefficient.
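A minimal sketch of Cronbach's alpha computed from simulated item data with the standard variance form of the formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score); the data are hypothetical:

import numpy as np

rng = np.random.default_rng(2)

# hypothetical data: rows = 200 ppl, columns = 6 items measuring the same variable
items = rng.normal(0, 1, (200, 1)) + rng.normal(0, 1, (200, 6))

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))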
What’s one thing that reliability doesn’t account for?
It doesn’t tell us whether we have a good measure of the variables of interest.
Def: face validity
the content of the measure appears to reflect the construct being measured.
The simplest way to argue that a measure is valid is face validity: suggesting that the measure appears to assess the intended variable accurately.
Face validity isn’t ___________; it involves only a judgment of whether, given the theoretical def of the variable, the content of the measure appears to actually _________ the variable.
sophisticated, measure
Def: content validity
the content of the measure is linked to the universe of content that defines the construct.
Ex: a depression measure would have content linked to each of the symptoms defined in the DSM-5.
Both face & content validity focus on ________ whether the content of a measure _________ the meaning of the construct being measured.
assessing, reflects