Questionnaire Design and Analysis Flashcards

1
Q

why use questionnaires?

A
• Objective (factual, standardised) measurement of skills, knowledge, attitudes and behaviours
• Comparisons between individuals or groups, associations between factors, etc.
• Cost-effective

2
Q

why use standardised measures?

A
• Reliability (consistency)
  ○ Test-retest
• Validity (does it measure what it is intended to?)
  ○ Face validity
  ○ Ecological validity
• Ease of comparison with other studies
• Time/money
3
Q

what are standardised scales?

A

Identical materials

Consistent scoring procedures

Clear guidelines on how to administer the test

4
Q

benefits of standardised scales?

A

Norms provided for comparison

Can make inferences about your sample in comparison to published norms

Can also be used to determine clinical cut-offs

5
Q

definition of latent variable

A

“hypothetical constructs that cannot be directly measured” (MacCallum & Austin, 2000)

6
Q

questionnaires and latent variables

A

Questionnaire design involves developing a pool of items or questions that tap into the latent variable being measured

7
Q

what are uni-dimensional scales?

A
• All of the items on the scale measure the same thing
  ○ No identified subscales
• Global score determined by one underlying construct
8
Q

what are multidimensional scales?

A

All items in the scale still relate to and measure the latent variable, but there may be groups of questions that intercorrelate more highly with each other than with the rest.

9
Q

why use multiple items/variables

A

Individual differences

Differences in interpretation

Differences in context

Accidentally missing a question

Response biases (e.g. circling 4 simply because it was circled for the previous two questions)

10
Q

development process when constructing a scale

A

• Hypothesise the conceptual framework
• Generate item pool and draft instrument
• Confirm conceptual framework and assess properties
  ○ Once you have the item pool…
  ○ Pilot the instrument with the relevant group
  ○ Perform item analysis
    * Identify which items relate most closely to the latent variable
    * Examine item-total correlations and internal consistency statistics (see the sketch after this list)
• Collect and analyse data (item and factor analysis)
• Modify instrument
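A minimal sketch of the item-analysis step in Python, assuming responses are stored as a respondents × items array (the data and names below are hypothetical):

import numpy as np

# Hypothetical responses: 6 respondents x 4 items, each scored 1-5
responses = np.array([
    [4, 5, 4, 2],
    [3, 4, 3, 5],
    [5, 5, 4, 1],
    [2, 2, 3, 4],
    [4, 4, 5, 2],
    [1, 2, 2, 5],
])

n_items = responses.shape[1]
for i in range(n_items):
    item = responses[:, i]
    # Corrected item-total correlation: correlate each item with the total of
    # the remaining items, so the item does not inflate its own correlation
    rest_total = responses.sum(axis=1) - item
    r = np.corrcoef(item, rest_total)[0, 1]
    print(f"Item {i + 1}: corrected item-total r = {r:.2f}")

Items with low or negative item-total correlations are candidates for removal, since they relate least closely to the latent variable.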

11
Q

internal consistency statistics - cronbach alpha

A

Cronbach's alpha is the average of every possible split-half correlation: each possible half of the items correlated with the remaining half
  ○ >0.7 is the usual critical value (Kline, 1999)
  ○ (but should not exceed ~0.95, which suggests redundant items)
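A minimal sketch of the standard variance-based formula for Cronbach's alpha, assuming a respondents × items array like the hypothetical one in the item-analysis sketch above:

import numpy as np

def cronbach_alpha(responses):
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)      # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# e.g. alpha = cronbach_alpha(responses); flag if alpha < 0.7 or alpha > ~0.95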

12
Q

what is factor analysis

A
• Collect a sufficient sample size to allow analysis of the internal structure of the data → factor analysis
• A method to assess the extent to which a questionnaire is measuring a latent variable
• Explores correlation patterns between items to establish an underlying structure (see the sketch below)
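A minimal sketch of exploring correlation patterns between items, using hypothetical data generated from two underlying factors; items driven by the same factor correlate more highly with each other than with the rest:

import numpy as np

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 200))                          # two hypothetical latent factors
noise = rng.normal(scale=0.5, size=(200, 6))
items = np.column_stack([f1, f1, f1, f2, f2, f2]) + noise   # 6 observed items

# Inter-item correlation matrix: blocks of high correlations suggest
# an underlying multi-factor structure
R = np.corrcoef(items, rowvar=False)
print(np.round(R, 2))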
13
Q

types of factor analysis

A

confirmatory and exploratory

14
Q

advantages of factor analysis

A

• Can reveal patterns within the data
• Reduces data so that only items related to the underlying construct are retained
• Can lead to the development of new theories or the refinement of existing ones
15
Q

disadvantages of factor analysis

A

• Researcher bias
• Lack of consistency in methods and cut-off criteria
• Garbage in, garbage out
• Time and resources
16
Q

modification of questionnaires

A
• Change items and factor structure over time as necessary
  ○ E.g. trialling the measure in a population for which it was not originally designed
17
Q

internal consistency - reliability types

A

split half and cronbach alpha

18
Q

split half reliability

A

correlation between scores on one half of the items and scores on the other half
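A minimal sketch of split-half reliability, assuming a respondents × items array and an odd/even split of the items (hypothetical data; other splits, such as random halves, are equally valid):

import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(size=(100, 1))
responses = true_score + rng.normal(scale=0.8, size=(100, 8))   # 100 respondents, 8 items

# Score each half separately (odd vs even items), then correlate the two halves
half_a = responses[:, 0::2].sum(axis=1)
half_b = responses[:, 1::2].sum(axis=1)
print(f"split-half r = {np.corrcoef(half_a, half_b)[0, 1]:.2f}")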

19
Q

cronbach alpha reliability

A

average of all possible split-half reliability scores

20
Q

type of reliability for stability over time

A

test-retest

21
Q

test-retest reliability

A

extent to which responses on a measure remain stable over time
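A minimal sketch of test-retest reliability, assuming total scores from the same respondents at two time points (hypothetical data):

import numpy as np

rng = np.random.default_rng(2)
time1 = rng.normal(loc=20, scale=5, size=50)     # scores at first administration
time2 = time1 + rng.normal(scale=2, size=50)     # same people, retested later

# Test-retest reliability is the correlation between the two administrations
print(f"test-retest r = {np.corrcoef(time1, time2)[0, 1]:.2f}")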

22
Q

what is validity?

A

does it measure what it intends to?

23
Q

types of validity

A

face
content
concurrent
construct

24
Q

preliminary checks for factor analysis

A

sample size
levels of measurement
normality and outliers
factorability
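A minimal sketch of some of these checks on a hypothetical respondents × items array; the thresholds used (cases per item, |z| > 3, |r| > 0.3) are illustrative assumptions rather than fixed rules:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
responses = rng.normal(size=(150, 10))   # hypothetical: 150 respondents, 10 items

# Sample size: cases per item
print("cases per item:", responses.shape[0] / responses.shape[1])

# Normality and outliers: skew per item and extreme standardised scores
print("skew:", np.round(stats.skew(responses, axis=0), 2))
print("outliers (|z| > 3):", int((np.abs(stats.zscore(responses, axis=0)) > 3).sum()))

# Factorability heuristic: proportion of inter-item correlations above |0.3|
R = np.corrcoef(responses, rowvar=False)
off_diag = R[np.triu_indices_from(R, k=1)]
print("proportion of |r| > 0.3:", np.round(np.mean(np.abs(off_diag) > 0.3), 2))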

25
Q

eigenvalues - what are they?

A
  • The amount of variance in the data that a factor can account for
26
Q

which eigenvalue is the largest?

A

always the first one

27
Q

thresholds for eigenvalues

A
  • Different thresholds: some say eigenvalues over 1 are considered ‘stable’; some say >0.7
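A minimal sketch showing the eigenvalues of an inter-item correlation matrix and how many exceed the thresholds mentioned above (hypothetical two-factor data):

import numpy as np

rng = np.random.default_rng(4)
f1, f2 = rng.normal(size=(2, 300))
items = np.column_stack([f1, f1, f1, f2, f2, f2]) + rng.normal(scale=0.6, size=(300, 6))

# Eigenvalues of the correlation matrix, sorted largest first
R = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
print(np.round(eigenvalues, 2))                     # the first is always the largest
print("eigenvalues > 1:", int((eigenvalues > 1).sum()))
print("eigenvalues > 0.7:", int((eigenvalues > 0.7).sum()))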