Exam II Flashcards
reliability
consistency of a measure; increases as number of items/observations increases
2 components of all measures
true score
measurement error
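in classical test theory terms: observed score = true score + measurement error (X = T + E), and reliability is the proportion of observed-score variance due to true scores, Var(T) / Var(X)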
assess reliability with…
Pearson r correlation coefficient
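A minimal sketch of computing a Pearson r in Python between two sets of scores from the same individuals (the scores below are made-up examples):

```python
import numpy as np

# hypothetical scores for 6 people on the same measure at two administrations
time1 = np.array([12, 15, 11, 18, 14, 16])
time2 = np.array([13, 14, 10, 19, 15, 17])

# Pearson r taken from the correlation matrix of the two score sets
r = np.corrcoef(time1, time2)[0, 1]
print(f"r = {r:.2f}")
```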
test-retest reliability
measuring the same individuals at two points in time and correlating the two sets of scores
issues with test-retest reliability?
correlation can be artificially inflated (e.g., participants remember their earlier responses); some variables are expected to change over time
internal consistency reliability
consistency among items within a measure, uses responses at only one time point
split-half reliability
correlates scores on one half of the measure with scores on the other half
Spearman-Brown split-half reliability coefficient corrects the correlation for the shortened length of each half
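A minimal sketch of split-half reliability with the Spearman-Brown correction, r_SB = 2r / (1 + r), assuming a made-up respondents × items score matrix:

```python
import numpy as np

# hypothetical responses: 6 respondents x 4 items (e.g., 1-5 ratings)
items = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 1],
    [3, 4, 3, 4],
])

# split the measure into halves (here: odd vs. even items) and total each half
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

# correlate the two half-scores
r_half = np.corrcoef(half1, half2)[0, 1]

# Spearman-Brown correction: estimates reliability of the full-length measure
r_sb = (2 * r_half) / (1 + r_half)
print(f"half-correlation = {r_half:.2f}, Spearman-Brown corrected = {r_sb:.2f}")
```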
Cronbach’s alpha reliability
data on individual items!
correlating each item with every other item in the scale
α is computed from the average inter-item correlation and the number of items
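A minimal sketch of Cronbach's alpha in its variance form, α = (k / (k − 1)) · (1 − Σ item variances / variance of total scores), using made-up data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical responses: 6 respondents x 4 items
items = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 1],
    [3, 4, 3, 4],
])
print(f"alpha = {cronbach_alpha(items):.2f}")
```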
item-total correlations
data on individual items!
correlating each item score with the total score
helps eliminate items that are less internally consistent
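A minimal sketch of corrected item-total correlations (each item correlated with the total of the remaining items, so the item does not inflate its own correlation), using made-up data:

```python
import numpy as np

# hypothetical responses: 6 respondents x 4 items (item 4 runs opposite to the rest)
items = np.array([
    [4, 5, 4, 1],
    [2, 1, 2, 5],
    [3, 3, 4, 2],
    [5, 4, 5, 1],
    [1, 2, 1, 4],
    [3, 4, 3, 3],
])

for i in range(items.shape[1]):
    rest_total = np.delete(items, i, axis=1).sum(axis=1)  # total without item i
    r = np.corrcoef(items[:, i], rest_total)[0, 1]
    print(f"item {i + 1}: corrected item-total r = {r:.2f}")
# a low (or negative) r flags an item that is not internally consistent
```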
interrater reliability
extent to which raters agree in their observations
Cohen’s kappa
operational definition is key!
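A minimal sketch of Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement from each rater's marginal proportions; the two raters' codes are made up:

```python
import numpy as np

def cohens_kappa(rater1, rater2) -> float:
    rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(rater1, rater2)

    # observed agreement: proportion of observations coded identically
    p_o = np.mean(rater1 == rater2)

    # chance agreement: product of marginal proportions, summed over categories
    p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)

    return (p_o - p_e) / (1 - p_e)

# hypothetical behavior codes from two raters watching the same 10 observations
rater1 = ["aggressive", "prosocial", "prosocial", "neutral", "aggressive",
          "neutral", "prosocial", "aggressive", "neutral", "prosocial"]
rater2 = ["aggressive", "prosocial", "neutral", "neutral", "aggressive",
          "neutral", "prosocial", "aggressive", "prosocial", "prosocial"]
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
```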
construct validity
is the operational definition adequate?
does the test measure what it is supposed to measure?
face validity
measure appears “on the face of it” to measure what it is supposed to
not very sophisticated
content validity
comparing the content of the measure with the definition/domain of the construct
predictive validity
does the measure predict future behavior?
concurrent validity
examines relationship between scores on a measure and criterion behavior measured at the same time
convergent validity
scores on the measure correlate well with scores on another measure of the same construct
discriminant validity
measure is not related to variables to which it should not be related
scores can be discriminated (distinguished) from measures of other, conceptually distinct constructs
qualitative approach
observation of behavior in natural setting or descriptions of world/participants
interviews, focus groups, open-ended questions
quantitative approach
specific behavior can be counted
statistical analysis
surveys/observations with coding schemes
naturalistic observation issues
ethics of concealment
nonparticipant observer vs participant observer
naturalistic observation limitations
not always appropriate for well-defined hypotheses
access to the right population, time, resources, and location can be difficult
systematic observation
observation of several specific behaviors in specific setting
behavior quantified with coding scheme
natural or lab setting
systematic observation coding system
system for rating behaviors of interest, usually for frequency or degree
establish interrater reliability (Cohen’s kappa)
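A minimal sketch of tallying behavior frequencies from one coded observation session (the coding categories and codes below are hypothetical):

```python
from collections import Counter

# hypothetical stream of coded behaviors from one observation session
codes = ["on-task", "off-task", "on-task", "on-task", "talking",
         "off-task", "on-task", "talking", "on-task", "on-task"]

# frequency count for each behavior in the coding scheme
freq = Counter(codes)
for behavior, count in freq.items():
    print(f"{behavior}: {count} ({count / len(codes):.0%} of intervals)")
```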
systematic observation limitations
requires equipment; reactivity is a concern; longer observation periods yield better data