CH14 HCI Flashcards

(100 cards)

1
What is evaluation in the context of system design?
A systematic process of assessing effectiveness, efficiency, usability, and quality of a system or product.
2
What is the main goal of evaluation?
To improve the design.
3
What does evaluation involve collecting?
Data about users’ experiences interacting with a design artifact.
4
What two key aspects does evaluation focus on?
Usability and user experience.
5
Why is evaluation important in design?
To ensure the product meets user needs and expectations.
6
What is iterative design?
A cyclical process of designing, evaluating, and redesigning.
7
What can be evaluated in the design process?
Conceptual models, prototypes, and finished products.
8
What is formative evaluation?
Evaluation during design to check if the product meets users’ needs.
9
What is summative evaluation?
Evaluation of a finished product to inform future products or comparisons.
10
When should evaluation occur?
Throughout the design process.
11
Where can evaluations take place?
Natural settings, labs, living labs, and online.
12
What is the role of user feedback in evaluation?
Helps designers understand usability and experience issues.
13
What are natural settings in evaluation?
Real-world environments like public spaces or homes.
14
What are controlled settings in evaluation?
Environments like usability labs with controlled variables.
15
What is remote evaluation?
Evaluation conducted without the evaluator being physically present with the users.
16
What is a usability lab?
A controlled environment for testing systems with users.
17
What is in-the-wild evaluation?
Studying user interaction in everyday contexts.
18
Why combine different evaluation methods?
To gain different perspectives on usability and experience.
19
What are evaluation case studies?
Real-world examples of evaluations that show how methods are applied.
20
How does evaluation support iterative design?
By identifying what works and what needs improvement.
21
What are the three main categories of evaluation settings?
Controlled settings, natural settings, and settings without users.
22
What is usability testing?
A method used in controlled settings to test a system's ease of use.
23
What is a field study?
Observing users in their natural environment.
24
What is heuristic evaluation?
Experts review an interface against established usability principles (heuristics).
25
What are walkthroughs in evaluation?
Experts step through tasks to identify usability issues.
26
What are predictive models?
Estimating user performance without direct user involvement.
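
A classic example of a predictive model in HCI is Fitts' law, which estimates how long it takes to point at a target from its distance and width. A minimal Python sketch (the constants a and b are illustrative placeholders, not values from the chapter; in practice they are fitted from pointing-experiment data):

    import math

    def fitts_movement_time(distance, width, a=0.2, b=0.1):
        # Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
        # a and b are device- and user-specific constants; the defaults are illustrative only.
        index_of_difficulty = math.log2(distance / width + 1)  # in bits
        return a + b * index_of_difficulty

    # Example: a target 200 pixels away and 20 pixels wide
    print(round(fitts_movement_time(200, 20), 3))  # about 0.546 s with these placeholder constants
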
27
What is analytics in evaluation?
Data-driven insights from user interactions.
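
As an illustration of the kind of analysis this refers to, the sketch below computes two simple measures from hypothetical interaction logs (the log format and values are invented for illustration, not taken from the chapter):

    # Hypothetical session logs: (participant, completed_task, seconds_on_task)
    sessions = [
        ("p1", True, 42.0),
        ("p2", False, 95.5),
        ("p3", True, 37.2),
        ("p4", True, 61.8),
    ]

    completion_rate = sum(1 for _, done, _ in sessions if done) / len(sessions)
    times_for_completed = [t for _, done, t in sessions if done]
    mean_time = sum(times_for_completed) / len(times_for_completed)

    print(f"Task completion rate: {completion_rate:.0%}")             # 75%
    print(f"Mean time on task (completed only): {mean_time:.1f} s")   # 47.0 s
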
28
What is crowdsourcing in evaluation?
Collecting feedback from a large online audience.
29
What is distributed evaluation?
Evaluation involving multiple locations or platforms.
30
What is ecological validity?
The degree to which evaluation mimics real-world use.
31
What are pain points?
Specific areas where users struggle with the system.
32
What are user studies?
Research involving participants interacting with a system.
33
What is the purpose of experiments in evaluation?
To test hypotheses under controlled conditions.
34
What is informed consent in evaluation?
Permission from participants after explaining the study.
35
What is expert review?
Evaluation done by usability professionals.
36
What does modeling involve in evaluation?
Predicting user interaction without direct observation.
37
How are interviews used in evaluation?
To gather detailed user feedback.
38
What role do questionnaires play in evaluation?
Collect structured data from users.
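
A well-known example of such a questionnaire (an added example, not named on this card) is the System Usability Scale (SUS): ten items rated 1 to 5, scored to a 0 to 100 value. A minimal scoring sketch with invented sample responses:

    def sus_score(responses):
        # SUS scoring: odd items contribute (response - 1), even items contribute (5 - response);
        # the total is multiplied by 2.5 to give a score out of 100.
        assert len(responses) == 10, "SUS has exactly ten items"
        total = 0
        for item, response in enumerate(responses, start=1):
            total += (response - 1) if item % 2 == 1 else (5 - response)
        return total * 2.5

    # Hypothetical responses from one participant (1 = strongly disagree, 5 = strongly agree)
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
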
39
What is the goal of testing in evaluation?
To assess usability through performance metrics.
40
What are living labs?
Real-life environments embedded with sensors for long-term studies.
41
What is DeepTake?
A case study evaluating a system that predicts driver takeover behavior in automated vehicles.
42
What is Ethnobot?
A chatbot used to gather ethnographic data from visitors at a public event.
43
What is the purpose of eye tracking in studies?
To measure visual attention during tasks.
44
What is an evaluation matrix?
A comparison tool for different evaluation methods.
45
What is Mechanical Turk?
Amazon's online platform for recruiting crowdsourced research participants.
46
Why evaluate without users?
For cost or feasibility reasons; useful for early prototypes.
47
How do you evaluate engagement?
Through behavior observation, self-report, or biometric data.
48
How is usability different from user experience?
Usability focuses on efficiency and ease; UX includes emotions and satisfaction.
49
What are experience goals?
Desired emotional or cognitive responses during interaction.
50
What is the significance of scope in evaluation?
Determines how broadly the results apply.
51
What is validity in evaluation?
Whether the method measures what it intends to.
52
What is reliability in evaluation?
The consistency of the evaluation results.
53
What are biases in evaluation?
Influences that distort the findings.
54
What does generalizability mean?
The extent to which results apply beyond the study context.
55
How do you ensure reliability in evaluation?
Use consistent methods and clear procedures.
56
What compromises validity?
Using inappropriate methods or poor implementation.
57
How can bias affect heuristic evaluations?
Experts may focus on certain issues over others.
58
Why is it important not to over-generalize results?
Because findings may not apply to other users or settings.
59
What does informed consent include?
Study purpose, tasks, rights, and data handling.
60
What is a consent form?
A contract between researcher and participant.
61
Who approves evaluation protocols?
Institutional review boards or similar authorities.
62
Why is ethical consideration crucial in evaluation?
To protect participants' rights and wellbeing.
63
What are participant rights?
Informed choice, withdrawal, confidentiality.
64
What data analysis considerations are important?
Bias, reliability, validity, and scope.
65
How can you reduce bias?
Use multiple evaluators and blind procedures.
66
When should users be informed of evaluation details?
Before the evaluation begins.
67
What is the role of demographic diversity in evaluation?
Increases generalizability and validity.
68
What is a methodological limitation?
Constraints that affect the reliability or scope of a method.
69
How is data stored ethically?
Secure, anonymized, and with participant consent.
70
Why evaluate with both experts and users?
To combine professional insight with actual experience.
71
What makes a good evaluation method?
It is reliable, valid, ethical, and appropriate to the context.
72
What is the purpose of evaluation metrics?
To quantify usability and experience aspects.
73
What is an example of a biased result?
When feedback reflects only one demographic group.
74
How can digital tools support evaluation?
Through analytics, remote observation, and feedback collection.
75
Why evaluate across multiple contexts?
To understand usability in diverse real-world scenarios.
76
What connects evaluation to requirement gathering?
Shared methods like interviews and observation.
77
Why are usability labs important?
They provide controlled environments for testing.
78
Why are in-the-wild studies valuable?
They reflect realistic user interactions.
79
What is the benefit of combining methods?
A more complete picture of system usability.
80
What are examples of controlled methods?
Usability testing, lab experiments.
81
What are examples of natural methods?
Field studies, ethnographic research.
82
What are examples of no-user methods?
Predictive modeling, expert review.
83
How do living labs differ from usability labs?
They study long-term use in real environments.
84
What is an evaluation report?
A document detailing findings and recommendations.
85
What is the goal of pilot testing?
To refine the evaluation method before full deployment.
86
How does technology aid in evaluation?
Through sensors, eye tracking, and online tools.
87
What makes a user-centered evaluation?
Focus on actual user needs and experience.
88
What is a takeaway from the DeepTake study?
Automated systems require thoughtful handoff evaluation.
89
What is a takeaway from the Ethnobot study?
Mobile tools can aid real-time ethnographic data collection.
90
How can evaluation support innovation?
By revealing user needs and pain points.
91
What is mixed-method evaluation?
Using both qualitative and quantitative approaches.
92
Why is observation powerful in evaluation?
Reveals behaviors users may not verbalize.
93
What is the difference between formative and summative evaluation?
Formative improves design; summative judges final product.
94
What is triangulation in evaluation?
Combining methods to confirm findings.
95
How do evaluators handle conflicting data?
Analyze context and cross-check sources.
96
Why is continuous evaluation helpful?
To adapt design to evolving user needs.
97
What is a data saturation point?
The point at which collecting more data yields no new insights.
98
How does evaluation influence UX design?
Guides decisions based on evidence.
99
What is the role of feedback loops in evaluation?
To refine products through repeated testing.
100
What is the final takeaway from Chapter 14?
Evaluation is essential, iterative, and diverse in method.