Week 4 Test Development Flashcards

(28 cards)

1
Q

Define test conceptualisation

A

Starts with a question.

1. Review the literature - relevant theories/constructs, definitions, parameter setting
2. Obtain a clear, theory-informed conceptualisation and definition of the target construct
3. Develop an initial item pool
2
Q

Define test construction

A

Writing (and rewriting) the test items: choosing a scaling method and response format and following item-writing guidelines to build the item pool.

3
Q

Define tryout

A

Administering the test to a representative sample using standardised instructions.

Data from the test tryout are used to narrow down the number of items.

4
Q

What are 6 writing guidelines in test construction?

A

-

-

-

-

-

-

5
Q

What are 8 writing guidelines in test construction?

A

-

-

-

-

-

-

-

-

6
Q

What are the four types of scales used in psychological test construction?

A

.

7
Q

What are the pros/cons of Likert scales?

A
Pros
-
-
-
-

Cons
-

8
Q

What are the two types of response formats in test construction?

A
  • Likert
  • Binary choice scale (BCS)

9
Q

What are the pros/cons of Likert scales?

A
Pros
-
-
-
-

Cons
-

10
Q

What are the pros/cons of binary choice scales?

A
Pros
-
-
-
-

Cons
-
-

11
Q

What are the 4 types of response formats in test construction?

A
  • Likert
  • Binary choice scale (BCS)
  • P. C
  • C. S
  • Essay or written
12
Q

What are the pros/cons of binary choice scales?

A
Pros
-
-
-
-

Cons
-
-

13
Q

What are the pros/cons of essay/ written format tests?

A

Pros
-
-

Cons
-
-
-
-
-
14
Q

What are the 3 benefits of having an expert review an initial item pool before administering to the target sample?

A
  • confirm/invalidate definition of construct by asking…
  • evaluate the items c___ and c___
  • identify other
15
Q

Articulate the criteria that assess whether an item is a ‘good item’

A

good test items = good tests
achieved through item analysis.

criteria:

  • reliability
  • validity
  • discriminates at different levels of the trait/ability
16
Q

Articulate the criteria that assess whether an item is a ‘good item’

A

good test items = good tests
achieved through item analysis.

criteria:

  • reliability
  • validity
  • discriminates at different levels of the trait/ability
17
Q

What are the item properties which you may investigate when trying to assess whether an item is a ‘good item’?

A
  • item difficulty/ distribution
  • dimensionality (factor analysis)
  • item reliability (internal consistency)
  • item-discrimination index
  • item-characteristic curve
18
Q

What is the formula for the item difficulty index?

What does a high index mean?

What must also be considered when looking at the optimal item difficulty index?

A

Formula:
item difficulty index = number of examinees who answered the item correctly / total number of examinees

The index ranges between 0 and 1.

High index = an easy item (a large proportion of examinees answered it correctly).

Consideration: the probability of guessing the item correctly.

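A minimal sketch of this computation on made-up scored responses (1 = correct, 0 = incorrect; the data are purely illustrative):

```python
# Item difficulty index = proportion of examinees who answered the item correctly
responses = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # made-up scores: 1 = correct, 0 = incorrect

item_difficulty = sum(responses) / len(responses)
print(item_difficulty)  # 0.7 -> a fairly easy item (most examinees answered correctly)
```
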
19
Q

Binary choice scales (BCS)(true/false) have a high probability of one guessing the item correctly (0.5 out of 1 on the item difficulty-index).

a) What is an optimal item-difficulty index for BCS?

A

a) 0.75 out of 1 on the item-difficulty index. It is better to set the optimal item-difficulty index higher because test takers have a high chance (0.5) of guessing the item correctly.

20
Q

Binary choice scales (BCS)(true/false) have a high probability of one guessing the item correctly (0.5 out of 1 on the item difficulty-index).

a) What is an optimal item-difficulty index for BCS?

A multiple choice question with 4 response options has a high probability of being guessed correctly (0.25 out of 1 on the item-difficulty index).

b) What is an optimal item-difficulty index for multiple choice questions?

A

a) 0.75 out of 1 on the item-difficulty index. It is better to set the optimal item-difficulty index higher because test takers have a high chance (0.5) of guessing correctly.
b) 0.625. It is lower because MCQ takers have a lower probability of guessing correctly (0.25 out of 1). In both cases the optimal index is the midpoint between the chance-success rate and 1.0: (0.5 + 1)/2 = 0.75 and (0.25 + 1)/2 = 0.625.

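A small sketch of the midpoint rule used above (the chance-success rate is 1 / number of options; the function name is just illustrative):

```python
# Optimal item difficulty under guessing: the midpoint between the
# chance-success rate (1 / number of options) and 1.0
def optimal_difficulty(num_options: int) -> float:
    chance = 1 / num_options      # probability of guessing the item correctly
    return (chance + 1.0) / 2     # midpoint between chance and a perfect 1.0

print(optimal_difficulty(2))  # 0.75  -> true/false (binary choice)
print(optimal_difficulty(4))  # 0.625 -> 4-option multiple choice
```
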
21
Q

In item analysis, there is an item property called dimensionality (factor analysis). Why would you use it?

A

Having a set of items doesn’t mean you have a scale:

  • the items may not share an underlying variable, OR
  • they may reflect multiple underlying variables
22
Q

What can factor analysis help with?

A
  • determine the # of underlying latent variables or constructs
  • condense information
  • define the content or meaning of factors
  • identify items that are performing better/worse, e.g. items which don’t fit into any factor or fit into multiple -> candidates for elimination
23
Q

What are the factor analysis decisions?

A

number of factors to extract

  • Eigenvalues (>1)
  • Scree plot

Rotation

  • helps interpret the data
  • oblique: assumes factors are correlated
  • orthogonal: assumes factors are uncorrelated
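A minimal sketch of the "number of factors" decision on made-up data: the eigenvalues of the inter-item correlation matrix give the Kaiser (> 1) count and the values you would plot in a scree plot:

```python
import numpy as np

# Made-up data: 200 respondents x 6 items (stand-in for real scored responses)
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 6))

# Eigenvalues of the inter-item correlation matrix, largest first
corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

print(eigenvalues)              # plot against 1..6 for a scree plot
print(np.sum(eigenvalues > 1))  # Kaiser criterion: number of factors with eigenvalue > 1
# After deciding how many factors to extract, apply a rotation to aid interpretation:
# oblique if the factors are assumed correlated, orthogonal (e.g. varimax) if not.
```
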
24
Q

How is item reliability measured?

A

It measures the internal consistency of the test.

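Assuming the internal-consistency statistic meant here is Cronbach's alpha (the usual choice), a small sketch on made-up Likert-style data:

```python
import numpy as np

# Made-up data: 100 respondents x 5 Likert items (rows = people, columns = items)
rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(100, 5)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(cronbach_alpha(scores))  # rule of thumb: ~.70+ is often acceptable (random data will score low)
```
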
25
Q

How is item reliability measured?

A

  • measures internal consistency of the test
  • item-discrimination index
  • item-characteristic curve
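A small sketch (made-up 0/1 data) of a simple item-discrimination index: the difference in proportion correct between the top- and bottom-scoring groups on the test:

```python
import numpy as np

# Made-up data: 0/1 scored responses for 90 examinees x 10 items
rng = np.random.default_rng(2)
responses = rng.integers(0, 2, size=(90, 10))

# Split examinees into bottom and top thirds by total test score
total = responses.sum(axis=1)
order = np.argsort(total)
bottom, top = order[:30], order[-30:]

# Item-discrimination index d for one item:
# p(correct | top group) - p(correct | bottom group)
item = 0
d = responses[top, item].mean() - responses[bottom, item].mean()
print(d)  # closer to +1 = the item separates high and low scorers well
```
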
26
Q

When a test developer is deciding to retain or delete an item, is poor performance on ONE task/aspect of item analysis sufficient to delete the item?

A

No.
27
Q

What is an indication of an item with a good item-characteristic curve?

A

e.g. individuals with a low ability score low, those with a moderate ability score in the middle range, and those with a high ability score highly on the test.
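A minimal sketch (simulated data) of an empirical item-characteristic curve: the proportion answering an item correctly within low, moderate, and high ability bands. For an item with a good curve this proportion rises across the bands:

```python
import numpy as np

# Simulated data: 0/1 responses for 300 examinees x 20 items, generated so that
# the probability of answering correctly rises with ability (illustration only)
rng = np.random.default_rng(3)
ability = rng.normal(size=300)
p_correct = 1 / (1 + np.exp(-ability))
responses = (rng.random((300, 20)) < p_correct[:, None]).astype(int)

# Empirical ICC for one item: proportion correct within each ability band,
# using the total score as a stand-in for ability
total = responses.sum(axis=1)
order = np.argsort(total)
item = 0
for label, band in zip(["low", "moderate", "high"], np.array_split(order, 3)):
    print(label, responses[band, item].mean())  # should rise from low to high
```
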
28
Q

In the final stage of test development, test revision, one must undergo cross-validation. What is cross-validation?

A

Administering the revised test to another sample/population to determine its applicability to that population. It is a test of validity. Sometimes validity shrinkage occurs (the validity estimates are smaller in the new sample).