Block 4 - Unit 3: Analytical evaluation Flashcards Preview


Flashcards in Block 4 - Unit 3: Analytical evaluation Deck (44)
1
Q

Analytical evaluation - key point and examples.

A

Don’t involve users - experts role-play as users, and models predict users’ performance.

Inspection methods - heuristic evaluation and walkthroughs.

3 models - GOMS, keystroke level model and Fitts’ law.

2
Q

Inspection?

A

Generic name for a set of techniques involving experts, or a combination of experts and users, examining a product to predict how usable it is.

Checks whether interface complies with a set of standards, guidelines or design principles.

3
Q

Heuristic evaluation?

A

An inspection technique in which experts, guided by a set of usability principles (heuristics), evaluate whether user interface elements (menus, dialog boxes, etc.) conform to the principles.

Heuristics closely resemble high-level design principles and guidelines, eg. consistent designs, reduce memory load, etc.

4
Q

Advantages of heuristic evaluations.

A

Sometimes users are not easily accessible, or involving them would incur too much cost / time.

Can be used at any stage of design project, including early on before well-developed prototypes are available.

5
Q

Revised (2006) set of heuristics. (10)

A

Visibility of system status.

Match between system and real world.

User control and freedom.

Consistency and standards.

Error prevention.

Recognition rather than recall.

Flexibility and efficiency of use.

Aesthetic and minimalist design.

Help users recognise, diagnose and recover from errors.

Help and documentation.

6
Q

Visibility of system status (heuristic).

A

Keep users informed through appropriate feedback within reasonable time.

7
Q

Match between system and real world (heuristic).

A

Speak users’ language - words/phrases/concepts familiar to the user (rather than system oriented).

Follow real-world conventions, making info appear in a natural and logical order.

8
Q

User control and freedom (heuristic).

A

Users often choose system functions in error - need clearly marked ‘emergency exit’ to leave unwanted state without extended dialog.

Support undo and redo.

9
Q

Consistency and standards (heuristic).

A

Users shouldn’t have to wonder whether different words, situations or actions mean the same thing.

Follow platform conventions.

10
Q

Error prevention (heuristic).

A

Preventing errors in the first place is better than good error messages.

Either eliminate error-prone conditions or check for them and present users with a confirmation option before commit.

11
Q

Recognition rather than recall (heuristic).

A

Minimise memory load - make objects, actions and options visible.

Shouldn’t need to remember info from one part of dialog to another.

Instructions for system use should be visible or easily retrievable when appropriate.

12
Q

Flexibility and efficiency of use (heuristic).

A

Accelerators - unseen by novice user - can speed up interaction for the expert user, hence cater to different experience levels.

Allow users to tailor frequent actions.

13
Q

Aesthetic and minimalist design (heuristic).

A

Dialogs shouldn’t contain info that’s irrelevant or rarely needed.

All extra info competes with relevant info and diminishes its relative visibility.

14
Q

Help users recognise, diagnose and recover from errors (heuristic).

A

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

15
Q

Help and documentation (heuristic).

A

Although it is better if a system can be used without documentation, it may still be necessary to provide it.

Any such info should be easy to search, focused on user’s task, list concrete steps to be carried out and not be too large.

16
Q

3 stages of a heuristic evaluation.

A
  1. Briefing session.
  2. Evaluation period.
  3. Debriefing session.
17
Q

Briefing session (heuristic evaluation).

A

Planning is necessary - script, choose tasks / areas of focus and experts.

Ideally expert evaluators will be usability experts, but could choose domain experts or designers with extensive design experience.

The script guides the evaluation and ensures a consistent briefing.

18
Q

Approach experts may be asked to take for a heuristic evaluation. (3)

A

Set of tasks developed in advance for experts to try.

Expert asked to check each task and task sequence against the whole list of heuristics.

Experts asked to focus on the assessment of particular design features, or any identified usability concerns for the product.

19
Q

Choosing expert evaluators.

A

Rare to find an expert in ID and in the product domain - usual to find 2+ experts with different backgrounds:

  • Usability experts (experienced in conducting evaluations).
  • Domain experts (users or their representatives, designers, developers).
  • Non-experts (may be experts in own domains).
20
Q

Evaluation period of heuristic evaluation.

A

1 - 2 hours for each expert.

1st pass - feel for flow of interaction and product scope.

2nd pass - focus on specific interface elements in the context of the whole product; identify potential UPs.

21
Q

Recording UPs in heuristic evaluation.

A

Data collection form:

  • Location in the task description.
  • Heuristic violated.
  • Usability defect description.
  • Expert evaluator’s comments regarding the usability defect.
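The form above maps naturally onto a simple record type. A minimal sketch, where the class name, field names and example values are all illustrative assumptions:

```python
from dataclasses import dataclass

# Record type mirroring the data collection form fields above
# (the field names and example values are assumptions).
@dataclass
class UsabilityProblem:
    location: str   # location in the task description
    heuristic: str  # heuristic violated
    defect: str     # usability defect description
    comments: str   # expert evaluator's comments

example = UsabilityProblem(
    location="Task 2, step 3 (checkout screen)",
    heuristic="Error prevention",
    defect="Basket can be emptied with a single mis-click",
    comments="Ask for confirmation before emptying the basket",
)
print(example.heuristic, "-", example.defect)
```

Collecting one such record per defect gives the debriefing session a uniform list to prioritise and rate for severity.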
22
Q

Debriefing session of heuristic evaluation.

A

Experts discuss each others’ findings and differences of opinions.

Outcome - prioritised list of problems, with severity ratings, and suggested solutions.

23
Q

Problems while doing heuristic evaluations.

A

Different approaches often identify different problems, and heuristics (alone) can miss severe problems.

May also uncover ‘false’ problems, based on the experts’ own biases and views.
(Having several evaluators can reduce their occurrence).

24
Q

HOMERUN? (purpose and each letter)

A

Set of heuristics for evaluating websites.

Some elements (eg. N, U) target commercial / corporate sites, but others (eg. O, E) are appropriate for many sites.

High quality content. (Info / functionality users want).

Often updated. (Importance varies, eg. news, selling, archive content).

Minimal download time.

Ease of use.

Relevant to users’ needs. (Carry out tasks required).

Unique to online medium. (Benefit conventional media doesn’t offer - browsing, 24/7 purchase).

Net-centric corporate culture. (Company needs to put site first in most aspects of operation).

25
Q

Walkthroughs?

A

Alternative form of inspection to heuristic evaluation for predicting UPs without doing user testing.

Involve walking through a task with the system and noting UPs.

Most don’t involve users, but pluralistic walkthroughs do.

26
Q

Cognitive walkthroughs?

A

Simulate users’ problem-solving process at each step in the human-computer dialog, checking to see if users’ goals and memory for actions can be assumed to lead to the next correct action.

Defining feature - they focus on evaluating designs for ease of learning. This focus is motivated by observations that users learn by exploration.

27
Q

Steps in cognitive walkthroughs (5)

A
  1. Characteristics of typical users are identified and documented, and sample tasks are developed that focus on the aspects of the design to be evaluated. A description / prototype of the interface is produced, along with a clear sequence of the actions needed for users to complete the task.
  2. A designer and 1+ expert evaluators come together to do the analysis.
  3. Evaluators walk through the action sequences for each task, placing it within the context of a typical scenario; as they do, they try to answer:
    - Will the correct action (to achieve the task) be evident to the user?
    - Will the user notice that the correct action is available?
    - Will the user associate and interpret the response from the action correctly?
  4. During the walkthrough, a record of critical info is compiled in which:
    - Assumptions about what would cause problems and why are recorded.
    - Notes about side issues and design changes are made.
    - A summary of results is compiled.
  5. The design is revised to fix the problems presented.
28
Q

Cognitive walkthrough vs heuristic evaluation.

A

Focus of walkthrough is more to identify specific users’ problems at a high level of detail.

This narrow focus is useful for certain system types, but not others.
Eg. apps involving complex operations to perform tasks.

Very time-consuming and laborious; needs a good understanding of the cognitive processes involved.

29
Q

2 problems with original cognitive walkthrough.

A

Takes too long answering the 3 questions in step 3 and discussing answers.

Designers are defensive - lengthy arguments to justify design; undermines efficacy of technique and social relationships.

30
Q

Variation of cognitive walkthrough (to cope with 2 problems).

A

Fewer questions and curtailed discussions.

Analysis more coarse-grained, but much quicker.

Identify leader and usability specialist.

Strong rules - ban on defending design, debating cognitive theory or doing designs on the fly.

(The effect is a more usable technique; it directs the social interactions of the design team so they achieve their goals).

31
Q

Pluralistic walkthroughs.

A

Users, developers and usability experts work together to step through a [task] scenario, discussing usability issues associated with dialog elements involved in the scenario steps.

Each group of experts is asked to assume the role of typical users.

32
Q

Steps in a pluralistic walkthrough (4)

A
  1. Scenarios are developed in the form of a series of hardcopy screens representing a single path through the interface.
  2. Scenarios are presented to the panel of evaluators and the panelists are asked to write down the sequence of actions they would take to move from one screen to another. (Done individually).
  3. Panelists discuss their suggested actions.
    Usually - users go first so not influenced by others or deterred from speaking.
  4. Panel moves on to next round of screens, etc., until all scenarios evaluated.
33
Q

Benefits of pluralistic walkthroughs. (3)

A

Strong focus on users’ tasks at a detailed level, ie. looking at steps taken. (Invaluable for safety-critical systems - one step could be critical).

Approach lends itself well to participatory design practices by involving a multidisciplinary team where users have a key role.

The group brings a variety of expertise and opinions for interpreting each stage of an interaction.

34
Q

Limitations of pluralistic walkthroughs. (2)

A

Have to gather all experts and work at the rate of the slowest.

Only a limited number of scenarios, and therefore paths through the interface, can usually be explored because of time constraints.

35
Q

Cognitive walkthrough (UB)

A

Another technique to predict UPs without user testing or real users.

Evaluates steps required to perform a task, and attempts to uncover mismatches between how users think about a task and how the interface facilitates performance of the task.

So, the usability of the interface is assessed by examining whether a user can select the appropriate action at the interface for each step in the task.

36
Q

Undertaking a cognitive walkthrough (UB)

A

Easy to apply. Simplistically, the evaluator walks through each action in the task trying to find out:

  • Will users know what to do? (step 1)
  • Will users see how to do it? (step 2)
  • Will users understand from the feedback whether the action was correct or not? (step 3)
37
Q

Predictive models.

A

Evaluation of systems without users, but instead of role-playing users, experts use formulas to derive various measures of user performance.

Provides estimates of efficiency of different systems for various kinds of task. Eg. determine optimal layout of phone keys for common operations.

38
Q

GOMS model.

A

Attempts to model the knowledge and cognitive processes involved when users interact with systems.
(Goals, Operators, Methods, Selection rules).

Goals - refer to a particular state the user wants to achieve, eg. ‘find a website on ID’.

Operators - cognitive processes and physical actions that need to be performed to attain those goals.
Eg. ‘decide which search engine to use’, ‘decide and enter keywords’.

Methods - learned procedures for accomplishing goals. They consist of the exact sequence of steps required, eg. drag mouse over entry field, type keywords, press ‘search’.

Selection rules - used to determine which method to select when multiple are available for a stage of a task.
Eg. ‘press enter’ or ‘click “search” button’.
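The four components can be sketched as a small data structure: a goal with candidate methods (each a sequence of operators) and a selection rule choosing between them. A minimal sketch; the names and the rule itself are illustrative assumptions, not part of any real GOMS tool:

```python
# Minimal GOMS sketch (names and the rule are illustrative
# assumptions). A goal has candidate methods, each a sequence of
# operators; a selection rule picks a method based on context.

goal = "issue the search command"

methods = {
    "press-enter":  ["move hand to keyboard", "press Enter key"],
    "click-button": ["move hand to mouse", "point at Search button", "click"],
}

def selection_rule(context):
    """Assumed rule: use the keyboard method if the hands are
    already on the keyboard, otherwise click the button."""
    if context.get("hands_on_keyboard"):
        return "press-enter"
    return "click-button"

chosen = selection_rule({"hands_on_keyboard": True})
print(goal, "->", chosen, "->", methods[chosen])
```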

39
Q

Keystroke level model.

A

Provides numerical predictions (unlike GOMS) of user performance.

Tasks compared by time to do using different strategies.

Benefit - different features of systems / apps easily compared to see which might be most effective for specific kinds of tasks.

Standard set of average times for different physical and cognitive actions.
Eg. pointing with mouse, pressing single key, mentally prepare.
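A predicted task time is simply the sum of the standard operator times. A minimal sketch, assuming the commonly cited average operator times (the exact values vary between sources):

```python
# Keystroke-level model sketch. The operator times (seconds) are
# commonly cited averages and are assumptions, not definitive values.
KLM_TIMES = {
    "K": 0.2,   # press a key (average skilled typist)
    "P": 1.1,   # point with a mouse at a target on screen
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # 'home' the hands between keyboard and mouse
    "M": 1.35,  # mentally prepare for the next action
}

def predict_time(operators):
    """Sum the standard times for a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Compare two strategies for issuing a search:
button_click = ["H", "M", "P", "B"]  # home, prepare, point, click
enter_key = ["M", "K"]               # prepare, keystroke

print(predict_time(button_click))  # approx. 2.95 s under these times
print(predict_time(enter_key))     # approx. 1.55 s under these times
```

Whichever strategy sums to the smaller time is predicted to be the more efficient, which is how the model supports the comparisons described above.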

40
Q

A difficulty of keystroke level model.

A

When to include ‘mentally prepare’, and the time to allow for it - varies a lot depending on individual.

41
Q

Benefits of GOMS (family)

A

Allows comparative analysis to be performed for different interfaces, prototypes or specifications, relatively easily.

Shown to be useful in helping make decisions about the effectiveness of new products, though not often used for evaluation purposes.

42
Q

Limitations of GOMS (family)

A

Study outcomes can be counter-intuitive, eg. some tasks take longer when keystrokes fall at critical times in the task rather than in slack periods.

Highly limited scope - can only really make predictions about predictable behaviour, but people are unpredictable.
Suitable for routine tasks, done by an ‘expert’ without errors.
(But, is useful for providing estimates for comparing efficiency of well-defined tasks).

43
Q

Fitts’ law.

A

Predicts time to reach a target using a pointing device.

In ID - predicts the time to point at a target based on its size and the distance to it.
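In formula form (the Shannon formulation is the one usually quoted; the constants a and b are device-dependent and must be fitted from data, so the default values below are purely illustrative assumptions):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts' law:
    MT = a + b * log2(distance / width + 1).
    a, b are device-dependent constants; the defaults are
    illustrative assumptions, not fitted values."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A big, close target is quicker to hit than a small, distant one:
print(fitts_time(distance=100, width=50))  # low index of difficulty
print(fitts_time(distance=400, width=10))  # high index of difficulty
```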

44
Q

Use of Fitts’ law.

A

Can help decide where to locate buttons, how big and how close together.

Useful where time to physically locate an object is critical to the task in hand.

Useful where there is limited space, eg. mobile devices - trade-off for device size and accuracy / speed.