L8 Surveys & Questionnaires Flashcards

(38 cards)

1
Q

What is a survey in the context of usability engineering?

A

• A research methodology for gathering information from a sample
• Methods: face-to-face, telephone, self-administered (paper or online)

2
Q

How does a questionnaire differ from a survey?

A

• A questionnaire is the tool used in a survey
• Composed of written questions to gather specific data

3
Q

When is it appropriate to use questionnaires in usability evaluation?

A

• Ideal for evaluating or improving existing systems
• Useful to measure satisfaction, identify feature preferences, or assess user needs

4
Q

Why might questionnaires be unsuitable for open design questions?

A

• People may not know what they want in a new system
• Open-ended design questions lack predefined response sets

5
Q

What are the core components of a questionnaire?

A

• Demographic questions: age, gender, occupation
• Factual questions: behavior, ownership
• Non-factual questions: attitudes, beliefs

6
Q

What types of questions make up a complete usability questionnaire?

A

• Demographic classification
• Factual (e.g., usage, frequency)
• Subjective (e.g., attitudes, satisfaction)

7
Q

What are the key types of response formats in questionnaires?

A

• Open-ended (free text)
• Closed-ended: Yes/No, scales, value ranges

8
Q

Why are satisfaction questions important in usability testing?

A

• Satisfaction reflects subjective usability
• Cannot be measured without user feedback

9
Q

What does a ‘high quality’ usability questionnaire need?

A

• High validity: measures intended constructs
• High reliability: consistently reproducible results

10
Q

What traits define valid and reliable questionnaire data?

A

• Validity ensures accuracy
• Reliability ensures consistency

11
Q

Why is the SUS (System Usability Scale) recommended for usability measurement?

A

• Pre-validated scale with known scoring
• Provides a usability score out of 100
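
The ‘known scoring’ works as follows: each of the ten SUS items is rated 1–5; odd-numbered (positively worded) items contribute rating − 1, even-numbered (negatively worded) items contribute 5 − rating, and the resulting 0–40 sum is multiplied by 2.5. A minimal Python sketch of this standard procedure (the function name is my own):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100).

    responses: the ten item ratings (each 1-5), in questionnaire order.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Items 1, 3, 5, 7, 9 are positively worded: score = rating - 1.
        # Items 2, 4, 6, 8, 10 are negatively worded: score = 5 - rating.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # → 82.5
```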

12
Q

How should you approach sample size for usability surveys?

A

• Aim for at least 10-15 respondents
• Adjust based on project design and goals

13
Q

What is the minimum recommended sample size for usability tests?

A

• Around 10-15 participants for meaningful quantitative results

14
Q

What are best practices for demographic question design?

A

• Use ranges (e.g., age: 25–34)
• Avoid asking unnecessary personal info
• Leave demographic questions for the end

15
Q

What makes factual questions tricky to design?

A

• Memory recall difficulty
• Bias from sensitive topics
• Misinterpretation of wording

16
Q

What should be avoided when asking users to recall past behaviors?

A

• Vague timeframes
• Leading or sensitive wording

17
Q

What defines a well-written non-factual (attitude) question?

A

• Simple, unambiguous language
• No double-barrels or double negatives
• Avoids bias and social desirability

18
Q

Why is pilot testing important in questionnaire design?

A

• Identifies confusing wording
• Ensures clarity and effectiveness before launch

19
Q

What’s the role of multiple items in measuring attitudes?

A

• Increases reliability of the scale
• Helps uncover deeper psychological constructs

20
Q

How can using several related questions improve attitude measurement?

A

• Provides more accurate, stable results
• Covers different dimensions of the same attitude
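
One standard way to quantify this gain in reliability is Cronbach's alpha, which rises as related items covary; the statistic is not named on the card, so treating it as the intended measure is an assumption. A minimal Python sketch (function name and data layout are my own):

```python
def cronbach_alpha(items):
    """Estimate internal-consistency reliability (Cronbach's alpha).

    items: one list of scores per questionnaire item, aligned so that
    items[i][j] is respondent j's answer to item i.
    """
    k = len(items)        # number of items in the scale
    n = len(items[0])     # number of respondents

    def var(xs):          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Three items that move in lockstep across respondents -> alpha of 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # → 1.0
```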

21
Q

What are common scale types used in questionnaires?

A

• Likert (Agree–Disagree)
• Semantic differential (e.g., easy–difficult)

22
Q

What are design tips for building effective scales?

A

• Use 5–10 points
• Include neutral and ‘not applicable’ options
• Balance between positive and negative items

23
Q

What does this diagram illustrate about attitude scale development?

A

• A multi-step process: interviews → draft questions → pilot → factor analysis
• Shows that validity demands academic rigor, so the full process is unsuited to quick projects

24
Q

Why is this process rarely feasible in student usability projects?

A

• Time-consuming
• Requires large sample sizes and statistical expertise

25
Q

What scale types are depicted in this diagram?

A

• Agree–Disagree scales (e.g., strongly agree to strongly disagree)
• Semantic differentials (e.g., easy … difficult)

26
Q

What design considerations apply to these scales?

A

• Balance items for neutrality
• Choose 5–10 response points for clarity

27
Q

How does this diagram conceptualize “Trust” as a psychological construct?

A

• Combines related questions to measure trust from multiple angles
• E.g., perceived trustworthiness, willingness to share information

28
Q

Why are multiple statements used instead of a single one?

A

• Increases reliability and captures nuanced attitudes

29
Q

What limitations of non-factual questions are shown here?

A

• Difficult to verify or validate
• Influenced by mood, bias, and question framing

30
Q

Why are subjective answers less reliable than factual ones?

A

• They depend on self-perception and are harder to corroborate externally

31
Q

What four steps do users go through when answering a question, according to the diagram?

A

1. Understand the question
2. Recall relevant info
3. Decide what to report
4. Match the response to the available choices

32
Q

How can this process be disrupted?

A

• Poor wording, unclear timeframes, or biased response sets

33
Q

What does this SUS score scale represent?

A

• Scores from 0–100 with adjectives like “poor”, “good”, and “excellent”
• Used to interpret overall usability

34
Q

What SUS score range indicates an acceptable product?

A

• Typically, a score above 68 is considered above average

35
Q

What four measurement combinations are visualized in this scatter plot?

A

• High validity + high reliability
• High validity + low reliability
• Low validity + high reliability
• Low validity + low reliability

36
Q

Which is the ideal combination for usability questionnaires?

A

• High validity + high reliability

37
Q

When should questionnaires be used, according to the diagram?

A

• Later in the design process, for evaluation
• Less suited to early, open-ended exploration

38
Q

What design scenario makes a questionnaire a poor choice?

A

• Designing a completely new system without prior understanding