Chapter 40 Flashcards
PoHCI, week 6 (10 cards)
What is an evaluation?
The attribution of value. An evaluation states whether something is ‘good’ or ‘bad’, or whether it ‘fails’ or is ‘acceptable’.
The idea of evaluating is to compare performance against criteria (a yardstick).
Name some reasons to conduct evaluations
It is nearly impossible to get systems right on the first attempt
Evaluation is profitable: finding and fixing problems early is cheaper than fixing them after release
Evaluation can involve and engage users
Early evaluation helps prevent poorly thought-out or developed ideas from being designed and
introduced to people
Lack of evaluation can negatively reflect on designers and their organizations
What is a formative evaluation?
Before release
Improving an interactive system
Evaluations conducted to identify features of the system that unexpectedly do not work well for a particular group of users, a particular task, or a particular use situation.
We may then change those features in a future version of the system.
What is a summative evaluation?
After release
Discovering how well an interactive system performs regarding some given objective.
The goal is not to inform the design but to ensure that the system satisfies its objectives (e.g. a requirements specification).
Name some yardsticks you can use to evaluate your system against
No usability problems
Complies with guidelines
Meets usability goals
Compares favorably to X
Is compatible with users’ practice
Meets user requirements
What is the difference between absolute and relative yardsticks?
Absolute: the system is compared against a fixed metric or target (e.g. users complete the task in under two minutes).
Relative: the system is compared against other systems (e.g. it performs better than a competitor or a previous version).
What are analytical evaluation methods?
An evaluator compares an interface to guidelines, principles, or theories of good interaction.
The assessment of the interface does not involve (actual) users
What are empirical evaluation methods?
Users interact with the interactive system, and their interaction is used as the basis for the evaluation.
Name some central evaluation methods
Heuristic evaluation
Think-aloud study
Usability test
Experiment
Deployment study
What do validity, reliability, and impact mean in the context of assessing evaluations?
Validity:
Validity of an evaluation is about whether the evaluation result reflects the real value of the system. For instance, usability problems predicted by an evaluation method should be real problems for real users doing real tasks; otherwise, the evaluation is invalid.
Reliability:
Reliability of an evaluation refers to whether the findings would change with another set of evaluators or if the evaluation were repeated. If so, the trustworthiness of the findings is reduced, and it is unclear whether action should be taken on the problems, as they might disappear if the evaluation were run again (see the sketch after this card for one way to quantify this).
Impact:
Can the results be used for their intended purpose? Not all evaluations will actually be useful.
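One way to make the reliability concern concrete, borrowed from the HCI literature on the evaluator effect rather than from this card, is any-two agreement: the average overlap between the sets of problems reported by each pair of evaluators. A minimal sketch in Python, assuming each evaluator's findings are simply a set of problem labels (the function name and the example data are hypothetical):

from itertools import combinations

def any_two_agreement(problem_sets):
    # problem_sets: one set of reported usability problems per evaluator.
    # Returns the average Jaccard overlap over all evaluator pairs, in [0, 1].
    # Low values mean the findings depend heavily on who did the evaluating.
    pairs = list(combinations(problem_sets, 2))
    if not pairs:
        return 1.0  # a single evaluator trivially agrees with itself
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Hypothetical data: three evaluators inspecting the same interface.
evaluators = [{"p1", "p2", "p3"}, {"p2", "p3", "p4"}, {"p2", "p5"}]
print(any_two_agreement(evaluators))  # about 0.33, i.e. low agreement between evaluators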