week 7 - rigour and research flashcards
rigour in research
- more rigour = more generalizable/transferable
- rigour is the quality, believability and trustworthiness of the study findings
- can be determined by the validity and reliability of the measurement tools
validity
measures what is intended to be measured
reliability
provides consistent results
components of observed scores
true variance (data) and error variance (random or systematic errors)
reliability coefficient
- expresses the relationship between the error variance, true variance and observed score
- ranges from 0 to 1 (0 = no reliability, 1 = perfect reliability)
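in classical test theory (a standard formulation assumed here, not spelled out on the card), the observed score is the sum of a true score and an error term, and the reliability coefficient is the share of observed variance that is true variance:

```latex
X = T + E, \qquad
r_{xx} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
```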
what is a desirable reliability coefficient?
> 0.70 indicates consistency and dependability of the measurement tool
correlation
- statistical technique used to measure and describe a relationship between two variables
- the correlation coefficient (r) describes the strength and direction of the relationship
what is a desirable correlation coefficient?
|r| > 0.70 (in either the positive or negative direction)
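for reference, Pearson’s r is the covariance of the two variables scaled by their standard deviations (standard formula, assumed rather than given on the card):

```latex
r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_i (x_i - \bar{x})^2}\;\sqrt{\sum_i (y_i - \bar{y})^2}}
```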
components of reliability
stability, consistency and equivalence
stability
an instrument is stable when repeated administration of the instrument yields the same results
how is stability measured?
test-retest reliability
consistency
all items of a tool measure the same concept or characteristic
how is consistency measured?
Cronbach’s alpha
equivalence
consistency or agreement among observers using the same measurement tool or agreement among alternative forms of a tool
how is equivalence measured?
interrater reliability
test-retest reliability
- the stability of an instrument’s scores when it is administered more than once to the same participants under similar conditions
- scores from the repeated administrations are compared
- the comparison is expressed as a correlation coefficient (Pearson’s r)
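a minimal Python sketch of a test-retest check, assuming SciPy is available; the scores are made-up illustration data, not from any real study:

```python
from scipy.stats import pearsonr

# hypothetical anxiety-scale scores for the same six participants,
# measured two weeks apart under similar conditions
time1 = [22, 35, 28, 40, 31, 25]
time2 = [24, 33, 29, 41, 30, 27]

r, p = pearsonr(time1, time2)
print(f"test-retest reliability (Pearson's r) = {r:.2f}")
# r > 0.70 would suggest the instrument yields stable scores over time
```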
Cronbach’s alpha
- most commonly used test of internal consistency
- each item in the scale is simultaneously compared with the others and a total score is used to analyze the data
- many tools used to measure psychosocial variables and attitudes have a Likert-type scale response format, which is suitable for testing internal consistency
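a minimal sketch of Cronbach’s alpha computed directly from its standard formula; the item matrix is made-up illustration data (rows = respondents, columns = Likert-type items):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```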
what is a desirable Cronbach’s alpha?
0.80 to 0.90 or higher
interrater reliability
- consistency of observations between two or more observers with the same tool
- used with direct measurements of observed behaviour
- important for minimizing bias
e.g. Cohen’s kappa
Cohen’s kappa
- a coefficient of agreement between two raters
- a Cohen’s kappa of 0.80 or better is generally assumed to indicate good interrater reliability
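a minimal sketch of Cohen’s kappa from its definition, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance; the ratings are made-up illustration data from two hypothetical observers:

```python
from collections import Counter

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]
rater_b = ["yes", "no", "yes", "no",  "no", "yes", "no", "yes"]

n = len(rater_a)
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

# chance agreement: product of the raters' marginal proportions, per category
count_a, count_b = Counter(rater_a), Counter(rater_b)
p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (p_o - p_e) / (1 - p_e)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.80 or better: good interrater reliability
```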
components of validity
content validity, criterion validity and construct validity
content validity
refers to the degree to which the content of the measure represents the universe of content or the domain of a given behaviour (how well it covers all aspects of a concept)
e.g. face validity
face validity
- a panel of judges indicates their level of agreement with the scope of the items and the extent to which the items reflect the concept under consideration
- items are judged for relevancy and accuracy
criterion-related validity
consists of concurrent and predictive validity