W11 - Reliability Flashcards

(56 cards)

1
Q

What is the difference between psychological research and psychological assessment

A

Psychological Research:

Generalisations about representative samples of people.

Psychological Assessment:

Generalisations about specific individuals (n = 1)

2
Q

What are some psychological assessment standards

A
  1. Nature of underlying construct(s)
  2. Basic psychometric principles and procedures: Requirements and Limitations
  3. Directions for administration and properties: purpose, relevant standard errors, reliability, and validity
3
Q

What is a valid test

A

A test is valid if it accurately measures what it purports to measure

4
Q

What is a reliable test

A

Property of consistency in measurement.

5
Q

Reliability and validity: necessity and sufficiency

A

Reliability is a necessary, but insufficient, requirement for validity.

(i.e. a valid test cannot be unreliable, but a reliable test may not be valid)

6
Q

Is reliability a binary decision?

A

No. Reliability is continuous, not categorical (reliable/not reliable).

7
Q

What is the first equation of Classical Test Theory

A

Xi = T + Ei

Xi = observed score on test occasion i

T = true score

Ei = error on test occasion i (unsystematic variance)

8
Q

What are the two properties of errors in classical test theory

A
  1. Endogenous: factors within the test-taker (e.g. the client's condition)
  2. Exogenous: factors outside the test-taker (e.g. the psychologist's measurement)
9
Q

What are the assumptions of Classical Test Theory

A
  1. Expected value of error = 0
  2. Errors do not correlate with one another
  3. Errors do not correlate with true scores
  4. Expected value of test = true score (over repeated administrations of the test, a person's average score equals their true score)
10
Q

Elaborate on first assumption of classical test theory

A
  1. Expected value of error is zero

Across different test occasions, the errors add up (on average) to zero.

11
Q

Elaborate on the second assumption of classical test theory

A
  1. Errors do not correlate with one another

The error on test i does not affect the error on test j.

12
Q

Elaborate on the third assumption of classical test theory

A
  1. Errors do not correlate with true score

r(τ, ε) = 0: a positive/negative error is unrelated to the true score.

13
Q

Elaborate on the fourth assumption of classical test theory

A
  1. Expected value of the test equals the true score

Average of all observed scores = True Score

14
Q

What is the second equation of Classical Test Theory

A

𝜎2x = 𝜎2t + 𝜎2πœ– + 2cov(t,πœ–)

𝜎2x : Variance of observed scores

𝜎2t : Variance of true scores

𝜎2πœ–: Variance of error scores

2πΆπ‘œπ‘£(𝜏,πœ–): Covariance between true scores and error scores, which is 0 under assumption (3)

15
Q

What is the third equation of classical test theory, relating to how reliability is calculated

A

Reliability = ρ²xτ = σ²τ / σ²x = σ²τ / (σ²τ + σ²ε) = Signal / (Signal + Noise)

ρ²xτ: theoretical reliability coefficient

σ²τ: variance of true scores

σ²x: variance of observed scores

σ²ε: variance of error scores

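As a quick worked check tying the second and third equations together, a few lines of Python with made-up variance components:

```python
# Assumed variance components (made up): true-score variance 80, error variance 20.
var_true, var_error = 80.0, 20.0
var_observed = var_true + var_error     # second equation, with Cov(τ, ε) = 0
reliability = var_true / var_observed   # third equation: signal / (signal + noise)
print(var_observed, reliability)        # 100.0 0.8
```
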
16
Q

Perspectives on reliability: conceptual and statistical basis

A

Conceptual: the observed score in relation to (a) the true score; (b) measurement error

Statistical: based on (a) proportion of variance; (b) correlations

17
Q

Perspective 1: Using true score and proportion of variance

A

The ratio of true-score variance to observed-score variance (the same ratio as in the third CTT equation):

rxx = s²T / s²x

18
Q

Perspective 2: Using measurement error and proportion of variance

A

Reliability is the lack of error variance

rxx = 1 - (s²ε / s²x)

19
Q

Perspective 3: Using true score and correlations

A

Reliability is the squared correlation between
observed scores and true scores

rxx = r²xT

20
Q

Perspective 4: Using measurement error and correlation

A

Reliability is the lack of correlation between
observed scores and error scores

rxx = 1 - r²xε

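To see that the four perspectives land on the same number, here is a minimal simulation sketch (assuming numpy; all parameters are made up):

```python
import numpy as np

# Simulated data (made-up parameters): true scores with variance 80 and
# independent errors with variance 20, so reliability should be near 0.80.
rng = np.random.default_rng(0)
n = 100_000
true = rng.normal(100, np.sqrt(80), n)
error = rng.normal(0, np.sqrt(20), n)
observed = true + error                 # X = T + E

p1 = true.var() / observed.var()                   # perspective 1: true-score variance ratio
p2 = 1 - error.var() / observed.var()              # perspective 2: lack of error variance
p3 = np.corrcoef(observed, true)[0, 1] ** 2        # perspective 3: squared r(X, T)
p4 = 1 - np.corrcoef(observed, error)[0, 1] ** 2   # perspective 4: 1 - squared r(X, E)
print(p1, p2, p3, p4)                              # all approximately 0.80
```
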
21
Q

Reliability in practice: since we do not know the true-score variance, what assumptions do we add for a parallel test?

A

We run a parallel test, which must have:

(1) Tau-equivalence: the true score on both tests is the same
(2) The same level of error variance

22
Q

What are the 3 ways of testing reliability

A
  1. Test-retest reliability
  2. Parallel-form reliability
  3. Split-half reliability

All three assume parallel forms of the test.

23
Q

What is test-retest reliability

A

Correlation between original test and retest (Same test, different time)

24
Q

What is the use of test-retest reliability

A

Useful for stable traits, not useful for transient states

25
Q

What are the cons of test-retest reliability

A

* Carryover effects (smaller gap between test and retest)
* Googling answers
* Boredom
* Remembering test items
* True score may vary
* Participants may fail to return (bigger gap between test and retest)

Trade-off between participants failing to return and carryover effects.

26
Q

What is parallel-form reliability

A

Correlation between two parallel forms of the test

27
Q

Parallel-form reliability: what must be ensured?

A

  1. The parallel form must measure the same set of true scores
  2. The parallel form must have equal variance to the original form

28
Q

What are the pros of parallel-form reliability

A

Both forms can be administered on the same day

29
Q

What are the cons of parallel-form reliability

A

* The forms might not truly be parallel
* Affects the true score
* Carryover effects: even though there are no direct memory effects from the original test, test-takers might still learn from it

30
Q

What is split-half reliability

A

Correlation between two sub-tests split from one test

31
Q

What are the pros of split-half reliability

A

Only one test is needed, so it is easy to administer

32
Q

What are the cons of split-half reliability

A

* The halves might not truly be parallel
* *Deflation* of the reliability estimate, as each sub-test has only half the items of the full test

33
Q

What is Cronbach's alpha

A

The mean of all possible split-half reliabilities, scaled up to a full-length test instead of a half-test

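To make this concrete, a minimal sketch (assuming numpy) that computes Cronbach's alpha with the standard variance-based formula; the item matrix and all numbers below are made up for illustration:

```python
import numpy as np

# A toy item-score matrix (respondents x items); all numbers are made up.
items = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
    [5, 4, 5, 5],
])

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of total test scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))   # about 0.945 for this toy data
```
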
34
Q

Is Cronbach's alpha legit? Why?

A

Not really. It provides a conservative, lower-bound estimate of reliability, and recent studies suggest it is of limited use.

35
Q

What does the reliability coefficient (rxx) fail to do?

A

It does not tell us, in test-score units, how much measurement error is 'typical', because it is not expressed in test units.

36
Q

What is the standard error of measurement (SEm)

A

The average size of an error score (i.e. the SD of errors)

37
Q

What is the formula for SEm

A

SEm = sx √(1 - rxx)

* sx = standard deviation of observed scores
* rxx = reliability coefficient

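A quick worked example of the formula, with assumed values (an IQ-style SD of 15 and reliability 0.90, both made up):

```python
import math

# Assumed values (made up): observed-score SD of 15, reliability 0.90.
sx = 15.0
rxx = 0.90
sem = sx * math.sqrt(1 - rxx)
print(round(sem, 2))   # 4.74
```
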
38
Q

If the test is completely unreliable, what is the standard error of measurement

A

SEm = sx √(1 - rxx)

If rxx = 0, then SEm = sx

Standard error of measurement = standard deviation of observed scores

39
Q

If the test is completely reliable, what is the standard error of measurement

A

SEm = sx √(1 - rxx)

If rxx = 1, then SEm = 0

There is no measurement error

40
Q

What is the direction of association between reliability and SEm (as a proportion of SD)

A

Negative. As reliability increases, SEm decreases.

41
Q

According to Nunnally, what level of reliability is needed?

A

* Bare minimum = 0.90
* Desirable = 0.95

42
Q

What is the equation to predict a client's true score

A

T\_hat = (rxx)(x) + (1 - rxx)(μT)

* T\_hat = predicted true score
* rxx = reliability
* x = observed score
* μT = population mean for the test (e.g. IQ = 100)

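A small worked example, with assumed values (reliability 0.90, observed score 120, population mean 100, all made up):

```python
# Assumed values (made up): rxx = 0.90, observed score 120, population mean 100.
rxx, x, mu_t = 0.90, 120.0, 100.0
t_hat = rxx * x + (1 - rxx) * mu_t
print(round(t_hat, 1))   # 118.0
```
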
43
Q

What is the predicted true score if the test is completely unreliable

A

T\_hat = (rxx)(x) + (1 - rxx)(μT)

If rxx = 0, then T\_hat = μT (the population mean)

44
Q

What is the predicted true score if the test is completely reliable

A

T\_hat = (rxx)(x) + (1 - rxx)(μT)

If rxx = 1, then T\_hat = x (the observed score)

45
Q

What is the direction of association between reliability and the predicted true score (T\_hat)

A

As reliability increases, T\_hat moves closer to the observed score. As reliability decreases, T\_hat regresses towards the population mean.

46
Q

What are true-score confidence intervals built upon

A

The standard error of estimation (SEe)

47
Q

What is the equation for the standard error of estimation

A

SEe = sx √[rxx(1 - rxx)]

* Similar to the standard error of measurement (SEm), but with an extra rxx
* Note: sx = standard deviation of observed test scores

48
Q

What defines the 95% CI for predicted true scores

A

* Lower bound: T\_hat - (1.96 x SEe)
* Upper bound: T\_hat + (1.96 x SEe)

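A small worked example, continuing the made-up values used above (sx = 15, rxx = 0.90, T\_hat = 118):

```python
import math

# Assumed values (made up): sx = 15, rxx = 0.90, predicted true score 118.
sx, rxx, t_hat = 15.0, 0.90, 118.0
see = sx * math.sqrt(rxx * (1 - rxx))
lower, upper = t_hat - 1.96 * see, t_hat + 1.96 * see
print(round(see, 2), round(lower, 1), round(upper, 1))   # 4.5 109.2 126.8
```
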
49
Q

How do correlations between measures compare to correlations between constructs? Why?

A

* The **observed correlation** between two _measures_ x and y is always lower than the **true correlation** between the _underlying constructs_
* Because the observed correlation is attenuated (reduced) by measurement error

50
Q

What does the disattenuation formula aim to do

A

* Estimates the correlation between 2 constructs as if they were not affected by measurement error
* Corrects for the fact that measurement error attenuates the correlation between the 2 measured constructs

51
Q

What is the maximum correlation between 2 measures x and y

A

Max rxy = √(rxx · ryy)

* rxx: reliability of test x
* ryy: reliability of test y
* rxy: observed correlation between measures x and y

52
Q

What is the disattenuation formula

A

r'xy = rxy / √(rxx · ryy)

* r'xy: correlation between the 2 constructs without measurement error
* rxy: observed correlation between the 2 measures
* rxx: reliability of test x
* ryy: reliability of test y

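A small worked example of both the attenuation ceiling and the disattenuation formula, with made-up reliabilities and an assumed observed correlation:

```python
import math

# Assumed values (made up): reliabilities of tests x and y, observed correlation.
rxx, ryy, r_xy = 0.80, 0.90, 0.50
max_r_xy = math.sqrt(rxx * ryy)                  # ceiling on the observable correlation
r_disattenuated = r_xy / math.sqrt(rxx * ryy)    # estimated construct-level correlation
print(round(max_r_xy, 3), round(r_disattenuated, 3))   # 0.849 0.589
```
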
53
Q

How can the correlation between test scores and constructs be increased?

A

* Increase the relationship between construct and test (quality of items):
  * Remove inconsistency in test administration and interpretation
  * Reduce (exogenous) measurement error
* Increase the number of test items (quantity of items)

54
Q

What is the Spearman-Brown prophecy formula

A

r'xx = (n · rxx) / [1 + (n - 1)(rxx)]

* r'xx: reliability of the expanded test
* rxx: reliability of the original test
* n: expansion factor (e.g. n = 2 means double the items)

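A quick worked example, with made-up values (original reliability 0.70, test length doubled):

```python
# Assumed values (made up): original reliability 0.70, expansion factor n = 2.
rxx, n = 0.70, 2
r_expanded = (n * rxx) / (1 + (n - 1) * rxx)
print(round(r_expanded, 3))   # 0.824
```
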
55
Q

What is the relationship between the expansion factor and the reliability of the expanded test? What is the caveat?

A

* Positive: expanding the test increases reliability (a practical benefit), but with negative acceleration (diminishing returns)
* Caveat: the new items must be as good as the originals. If not, the reliability of the expanded test (r'xx) might be even worse

56
Q

If we know our desired reliability, what is the formula to work out the expansion factor

A

n = [r'xx (1 - rxx)] / [rxx (1 - r'xx)]

* r'xx: desired reliability of the expanded test
* rxx: reliability of the original test

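And the reverse direction as a worked example, with made-up values (original reliability 0.70, desired reliability 0.90):

```python
# Assumed values (made up): original reliability 0.70, desired reliability 0.90.
rxx, r_target = 0.70, 0.90
n = (r_target * (1 - rxx)) / (rxx * (1 - r_target))
print(round(n, 2))   # 3.86, i.e. roughly 4x as many items
```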