Levels of Measurement
Nominal
Ordinal
Interval
Ratio
categorical or non-metric measures (don’t have units of measurement associated with them)
nominal and ordinal
metric measures (have units of measurement associated with them)
interval and ratio
What measure has a fixed (true) zero point?
Ratio
one object can be distinguished from another object
nominal
one object has more/less or is better/worse than another object but you don’t know by how much
ordinal
one object is so many units more/less than another object
interval
one object is so many times bigger, faster, heavier than another object
ratio
can’t perform any mathematical operations on these measures
nominal
ordinal
you can perform addition and subtraction but not multiplication and division on this measure
interval
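The four levels above can be sketched in code by showing which arithmetic is meaningful at each level. All the data values here are hypothetical examples, not from the source.

```python
# Sketch: which operations are meaningful at each level of measurement.
# All data below are hypothetical examples.

# Nominal: categories only -- objects can be distinguished, nothing more.
blood_type_a, blood_type_b = "O", "AB"
print(blood_type_a == blood_type_b)                 # equality check only: False

# Ordinal: rank order, but the size of the difference is unknown.
satisfaction = {"low": 1, "medium": 2, "high": 3}
print(satisfaction["high"] > satisfaction["low"])   # True; 3 - 1 = 2 means nothing

# Interval: differences are meaningful, ratios are not (no fixed zero).
celsius_today, celsius_yesterday = 20.0, 10.0
print(celsius_today - celsius_yesterday)            # 10.0 degrees warmer: meaningful
# 20 C is NOT "twice as hot" as 10 C -- the zero point is arbitrary.

# Ratio: fixed zero, so multiplication and division make sense too.
weight_a, weight_b = 90.0, 45.0
print(weight_a / weight_b)                          # 2.0: twice as heavy, meaningful
```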
conceptual definition of a variable
describes what a construct means by relating it to other abstract concepts
operational definition of a variable
the definition of a variable in terms of the actual procedures used by the researcher to measure and/or manipulate it
types of measurement error
systematic error
random error
measurement error definition
the component of the observed score that is the result of factors that distort the score from its true value
systematic error definition
constant in its effects over time; anything natural or man-made that causes scores to lean consistently in one direction
random error definition
varies in its effects over time, no known explanation to allow for prediction of the pattern of scores
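The two error types can be sketched with the classical model: observed score = true score + systematic error + random error. The true score, bias, and noise values below are hypothetical.

```python
import random

# Sketch of observed score = true score + systematic error + random error.
# All numbers here are hypothetical.
random.seed(0)

true_score = 100.0
systematic_bias = 5.0   # constant over time: pushes every score the same way

observed = [true_score + systematic_bias + random.gauss(0, 2.0)  # random error varies
            for _ in range(1000)]

mean_observed = sum(observed) / len(observed)
# Random error tends to average out over many observations;
# the systematic bias does not -- the mean lands near 105, not 100.
print(round(mean_observed, 1))
```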
reliability
the consistency of a measure; it yields the same results over repeated applications
methods of assessing reliability
test-retest, alternative forms, split-half, inter-rater, statistical methods
test-retest
agreement when same test is taken at two different times
alternative forms
agreement between test scores of two tests that are parallel forms of each other (e.g., GMAT, ACT, IQ)
split half
randomly divide items into two subsets and examine the consistency in total scores across the two
inter-rater (reliability)
level of agreement between different raters
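The simplest inter-rater index is percent agreement: the share of items both raters classified the same way. The ratings below are hypothetical.

```python
# Sketch: inter-rater reliability as percent agreement between two
# raters classifying the same items. Ratings are hypothetical.
rater1 = ["pass", "fail", "pass", "pass", "fail", "pass", "pass", "fail"]
rater2 = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "fail"]

agree = sum(a == b for a, b in zip(rater1, rater2))
agreement = agree / len(rater1)
print(agreement)   # 7 of 8 items rated the same -> 0.875
```

Percent agreement ignores chance agreement; chance-corrected indices such as Cohen's kappa are often preferred, but the raw proportion is the simplest starting point.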
statistical methods (reliability)
Cronbach’s alpha
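Cronbach's alpha can be computed directly from its standard formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. The score matrix below is hypothetical.

```python
import statistics

# Sketch: Cronbach's alpha from its standard formula.
# Rows = respondents, columns = items; scores are hypothetical.
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
]
k = len(scores[0])                                        # number of items

item_vars = [statistics.variance(col) for col in zip(*scores)]
total_var = statistics.variance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))   # alpha near 1 -> high internal consistency
```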
Validity definition
the extent to which an instrument measures what you think it measures
types of validity
construct, content, face, criterion, predictive, concurrent, convergent, discriminant
construct validity
does the manifest variable (the measure) accurately reflect the construct (the true value)?
content validity
degree to which measure is representative of the domain it is designed to cover
face validity
how well the measure appears (at face value) to measure what it’s supposed to
criterion validity
relationship between performance on one measurement and performance on another measure
predictive validity
ability of operationalization to predict what it’s theoretically supposed to predict
concurrent validity
two measures assessed at the same time, if there is a high correlation between the two it has high concurrent validity
convergent validity
extent to which a measure correlates with other measures it should theoretically be associated with (ex. ACT and SAT scores should measure the same thing)
discriminant validity
extent to which a measure doesn’t correlate with measures it shouldn’t be associated with (SAT scores and knitting speed shouldn’t be correlated)