Flashcards in Usability Evaluation (Week 4) Deck (28):
Five Components of Usability (according to Nielsen)
Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency: Once users have learned the design, how quickly can they perform tasks?
Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?
Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
Satisfaction: How pleasant is it to use the design?
How well a system satisfies its intended use.
How much value a system offers to its user.
How desirable the system is for its user.
Why things are not usable (according to Rubin & Chisnell)
1. Development focuses on the machine or system.
2. Target audiences change and adapt.
3. Designing usable products is difficult.
4. Team specialists don’t always work in integrated ways.
5. Design and implementation don’t always match.
How to make things usable (according to Rubin & Chisnell)
1. Early focus on users and their tasks
2. Evaluation and measurement of product usage
3. Iterative design
Empirical Methods of Usability Evaluation
1. Usability Testing
2. Performance Evaluation
3. Behavior Evaluation
4. Measuring Satisfaction
Non-empirical methods of Usability Evaluation
1. Cognitive modeling
2. Heuristic evaluation
3. Cognitive walkthrough
Five Characteristics of Usability Evaluation according to Dumas and Redish (1999)
1. The goal is to improve usability
2. The participants represent real users.
3. The participants do real tasks.
4. You observe and record what the participants do and say.
5. You analyze the data, diagnose the real problems, and recommend changes to fix the problems.
Why usability evaluation is NOT a scientific experiment
1. Experimental control is not as necessary
2. Data measurement is not as precise
3. Changes can be made mid-test to explore alternatives
4. The exact same test does not have to be repeated precisely for each user.
5. The number of participants is smaller in usability testing than in scientific experiments.
6. Hypotheses and inferential statistics are used less often.
Experimental design (Research) according to Lazar, Feng, & Hochheiser
1. Isolate and understand specific phenomena with the goal of generalization to other problems
2. A larger number of participants is required
Experimental design (Usability testing) according to Lazar, Feng & Hochheiser
1. Find and fix flaws in a specific interface, without the goal of generalization
2. A small number of participants can be utilized
Ethnography (Research) according to Lazar, Feng, & Hochheiser
1. Observe to understand the context of people, groups, and organizations
2. Researcher participation is encouraged
3. Longer-term research method
Ethnography (Usability testing) according to Lazar, Feng, & Hochheiser
1. Observe to understand where in the interface users are having problems
2. Researcher participation is discouraged
3. Short-term testing method
Ethnography and experimental design (Research) according to Lazar, Feng & Hochheiser
1. Used to understand problems or answer research questions
2. Used in earlier stages, often separate from (or only partially related to) the interface development process
3. Used for understanding problems
Ethnography and experimental design (Usability testing) according to Lazar, Feng & Hochheiser
1. Used in systems and interface development
2. Typically takes place in later stages, after interfaces (or prototypes) have been developed
3. Used for evaluating solutions
Formative vs Summative Tests
Formative tests help “form” a design.
They are quick and dirty and run during many stages of development.
Summative tests are run “at the sum” of the project.
They give final results about a more advanced prototype.
Usually more formal and thorough than a formative test.
Might lead to a report that you pass to a product management team.
Stages of a Usability Test (according to Rubin & Chisnell)
1. Develop the test plan
2. Set up the test environment
3. Find and select participants
4. Prepare test materials
5. Conduct the test sessions
6. Debrief the participants
7. Analyze data and observations
8. Report findings and recommendations
Parts of a Test Plan (according to Rubin & Chisnell)
1. Purpose, goals, and objectives of the test
2. Research questions
3. Participant characteristics
4. Method (test design)
5. Task list
6. Test environment, equipment, and logistics
7. Test moderator role
8. Data to be collected and evaluation measures
9. Report contents and presentation
Types of Data
1. Objective measurements
2. Behavioral measurements
3. Subjective measurements
Things that can be measured objectively, the same way by everyone
Time, errors, confusions, breakdowns, workarounds, successes, and failures.
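The objective measures above are typically summarized across participants. A minimal sketch of that summary step, using hypothetical session records (the field names and data are illustrative, not from the deck):

```python
# Summarize objective usability measures (time on task, error count,
# success rate) from per-participant session records.
from statistics import mean

# One record per participant for a single task (hypothetical data).
sessions = [
    {"participant": "P1", "seconds": 94,  "errors": 2, "success": True},
    {"participant": "P2", "seconds": 151, "errors": 5, "success": False},
    {"participant": "P3", "seconds": 78,  "errors": 0, "success": True},
]

def summarize(records):
    """Return mean time on task, mean error count, and success rate."""
    return {
        "mean_seconds": mean(r["seconds"] for r in records),
        "mean_errors": mean(r["errors"] for r in records),
        "success_rate": sum(r["success"] for r in records) / len(records),
    }

print(summarize(sessions))
```

Confusions, breakdowns, and workarounds would usually be tallied the same way, as counts per task; the subjective and behavioral notes described in the next card are not reducible to numbers like this.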
Mostly your observations
Notes about where, when, why, and how the above things occurred.