RM - Assessing Reliability and Validity Flashcards
(8 cards)
How to Assess Reliability of Experiments and Self-Report Methods
TEST-RETEST
1. Ppts are given a task or measure to complete (CONTEXT)
2. The same ppts are then given the same task (CONTEXT) again after a time delay of 2 weeks
3. Correlate the results from the two tests (CONTEXT) using an appropriate stats test
4. A strong +ve correlation above +0.8 shows high reliability
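The correlate-and-check steps (3–4) can be sketched in Python. The scores below are made-up example data, and a plain Pearson correlation stands in for whichever stats test is appropriate:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two lists of scores
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up scores: the same 6 ppts tested twice, 2 weeks apart
test_1 = [12, 15, 9, 14, 10, 13]
test_2 = [11, 16, 10, 13, 9, 14]

r = pearson_r(test_1, test_2)
print(f"r = {r:.2f}")                        # here r ≈ 0.91
print("High reliability" if r > 0.8 else "Low reliability")
```

Because r exceeds +0.8, this (invented) measure would count as having high test–retest reliability.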
How to Assess Reliability of Observations
INTER-OBSERVER
1. The reliability of the observation can be checked by using 2 observers
2. The 2 observers would create, and be trained on how to use, the behaviour categories (CONTEXT)
3. The 2 observers would then conduct the observation separately - watching exactly the same behaviour (CONTEXT) for the same amount of time (CONTEXT) but independently recording their observations
4. The tallies from the 2 observers should be compared and correlated with an appropriate stats test
5. A strong +ve correlation above +0.8 shows high reliability
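A minimal sketch of the compare-and-correlate steps, using Spearman's rho (a common choice for tallied data) on invented tallies from two observers. This version uses the shortcut formula, which assumes no tied tallies:

```python
def spearman_rho(xs, ys):
    # Spearman's rho via 1 - 6*sum(d^2) / (n(n^2 - 1));
    # assumes no tied values (with ties, average ranks are needed)
    def ranks(vals):
        order = sorted(vals)
        return [order.index(v) + 1 for v in vals]
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Made-up tallies for 5 behaviour categories, one list per observer
observer_a = [4, 9, 2, 12, 7]
observer_b = [6, 8, 2, 11, 5]

rho = spearman_rho(observer_a, observer_b)
print(f"rho = {rho}")                           # 0.9
print("High reliability" if rho > 0.8 else "Retrain observers")
```

A rho of 0.9 is above +0.8, so the two observers' records would be judged reliable; a lower value would suggest the behaviour categories need clarifying or the observers need more training.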
How to assess face validity?
An independent psychologist in the same field looks at the experimental conditions/questions/behaviour categories to see if, at face value, they look like they measure what they intend to (CONTEXT). If the psychologist says ‘yes’, then the research is said to have face validity
How to assess concurrent validity?
Compare the results of the new experiment/observation/questionnaire with the results from another similar, pre-validated experiment/observation/questionnaire. If the results from both tests are similar, then we can assume that the new test is valid. A strong +ve correlation above +0.8 shows concurrent validity.
Assessing Reliability of Content Analysis using TEST RETEST
- Researcher completes content analysis by creating a series of coding categories and tallying each time a category occurs within the qual data
- Same researcher repeats the content analysis with the same qualitative data, tallying every time coding category occurs
- Compare and correlate the results from each content analysis using an appropriate stats test
- Strong +ve correlation of above +0.8 shows high reliability
Assessing Reliability of Content Analysis using inter-rater reliability
- 2 raters read through the qual data separately, then create the coding categories together
- 2 raters read exactly the same content but tally/record the occurrences of the categories separately
- They compare the tallies from both raters
- Which are then correlated using an appropriate stats test
- Strong +ve correlation of above +0.8 shows high reliability
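The whole inter-rater procedure above can be sketched as one hypothetical helper. The coding categories and tallies are invented, and a Pearson correlation stands in for the "appropriate stats test":

```python
import math

def correlate(xs, ys):
    # Pearson correlation between two lists of tallies
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def inter_rater_check(tallies_a, tallies_b, threshold=0.8):
    # The raters created the coding categories together,
    # so both dicts share the same keys
    cats = sorted(tallies_a)
    r = correlate([tallies_a[c] for c in cats],
                  [tallies_b[c] for c in cats])
    return r, r > threshold

# Invented tallies from 2 raters coding the same qual data
rater_1 = {"aggression": 7, "prosocial": 12, "neutral": 20}
rater_2 = {"aggression": 6, "prosocial": 13, "neutral": 19}

r, reliable = inter_rater_check(rater_1, rater_2)
print(f"r = {r:.2f}, reliable: {reliable}")
```

Keying the tallies by coding category (rather than keeping two bare lists) makes sure each rater's count is compared against the right category before correlating.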
Assessing Face Validity of Content Analysis
- independent psychologist in same field
- sees if a coding category looks like it measures what it intends to
- at first sight/face value
- if they say YES, the content analysis has face validity
Assessing Concurrent Validity of Content Analysis
- compare the results of new content analysis
- with results from another similar, pre-existing, pre-established content analysis
- if the results from both are similar, we can assume the test is valid
- the correlation of results gained from an appropriate stats test should exceed +0.8