Block 3 - Unit 1: Understanding and conceptualising interaction


Flashcards in Block 3 - Unit 1: Understanding and conceptualising interaction Deck (83)
1
Q

4 key concepts for thinking about the design process without concentrating on a detailed design level.

A
  1. Conceptual models.
  2. Interface metaphors.
  3. Interaction types.
  4. Interface types.
2
Q

Problem space.

A

The range of possible conceptual models for a product, together with their rationales (ie. advantages, disadvantages, implications and justifications).

Don’t need detailed conceptual models to understand problem space.

3
Q

How should the design process start? (4)

A

Reflecting on what you intend to create.

Thinking through how it will support people in their everyday or working lives.

Considering whether it will support people in the way you intend.

Justifying why you want to create it.

4
Q

Why not start by thinking about the physical interface, technology and interaction styles?

A

Usability and UX goals can be overlooked.

Better to decide on physical design aspects after describing the nature of the problem space.
Ie. understand and conceptualise current UX / product and how it will be improved or changed.

5
Q

Assumptions and claims.

A

As well as user needs, usability and UX goals, a set of assumptions and claims relating to these is important.

Uncovering and challenging these assumptions will give you a better understanding of what you’re trying to achieve, and will refine the existing goals and requirements.

Assumption - something taken for granted. Eg. ‘people will want to watch movies on their mobiles’.

Claim - something stated as true when it is still open to question. Eg. ‘using a speech interactive system in a car is safe’.

6
Q

How is articulation of a problem space typically done?

A

As a team effort.

Different perspectives among a team - eg. PM - budget; engineer - technical concepts.

Important implications of pursuing each perspective are considered in relation to one another; time-consuming, but very beneficial.
Less chance of incorrect assumptions and unsupported claims entering the design solution.

Reflecting early on ideas enables more options to be considered.

7
Q

Framework for a team to view multiple perspectives and reveal conflicting and problematic assumptions and claims. (4)

A

Are there problems with an existing product or UX? If so, what?

Why do you think there are problems?

How might proposed design ideas overcome these?

If you have not identified any problems and instead are designing for a new UX, how will the proposed design support, change or extend current ways of doing things?

8
Q

Next step after exploring the problem space?

A

Begin to conceptualise a design solution and develop a conceptual model.

9
Q

Conceptual model?

A

A high level description of how a system is organised and operates.

It is an abstraction that outlines what people can do with a product and what concepts are needed to understand how to interact with it.

10
Q

What a conceptual model is NOT?

A

Not a description of the UI, but a structure outlining the concepts and relationships between them that will form the basis of the product or system.

“Straighten out thinking before starting laying out widgets.”

11
Q

4 components of a conceptual model.

A

Major metaphors and analogies.

Concepts users are exposed to through the product.

Relationships between those concepts.

Mappings between the concepts and UX the product is designed to support / invoke.

  • How the various metaphors, concepts and their relationships are organised determines how the users will subsequently think of a product and the operations they can do with it.
12
Q

Major metaphors and analogies (eg. upgrading web browsers).

A

Used to convey to the user how to understand what a product is for and how to use it.

Metaphors:

  • Browsing - following links, exploring what is there.
  • Bookmarking.

Analogy - window shopping.

13
Q

Concepts users are exposed to through product (eg. web browsers).

A

Include task-domain objects they create and manipulate, their attributes, and the operations that can be performed on them.

Eg. Web pages (URLs), dynamic and static web pages, links, lists, folders of URLs, saving / revisiting a URL, etc.

14
Q

Relationships between concepts (eg. web browsers)

A

Eg.

  • one object contains another,
  • relative importance of actions to others,
  • object part of another.

Eg.

  • folder contains a collection of related URLs
  • ability to add URL to a list is more important than the ability to move the position of a saved URL around,
  • dynamic page - special kind of page (object is a specialisation of another).
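
The containment and specialisation relationships above map naturally onto an object model. A minimal, hypothetical sketch (the class names are illustrative, not from the course text):

```python
class Page:
    """A web page identified by a URL (a task-domain object)."""
    def __init__(self, url):
        self.url = url

class DynamicPage(Page):
    """A dynamic page is a special kind of page (specialisation)."""

class Folder:
    """A folder contains a collection of related URLs (containment)."""
    def __init__(self, name):
        self.name = name
        self.pages = []

    def add(self, page):
        # Adding a URL is the more important action; reordering is secondary.
        self.pages.append(page)

news = Folder("News")
news.add(DynamicPage("https://example.com/headlines"))
print(len(news.pages))  # → 1
```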
15
Q

Mappings between concepts and the UX (eg. web browsers).

A

Saved URL corresponds to a web page on the internet. When the URL is clicked, the browser points to the page and displays it.

16
Q

Advantage of exploring the relationships between components of a model.

A

Can debate the merits of providing different methods and how they support the main concepts.

17
Q

Benefits of early conceptualisation. (3)

A

Orient the team towards asking specific kinds of questions about how the conceptual model will be understood by the target users.

Not to become narrowly focused early on.

Establish a set of common terms, reducing misunderstandings later.

  • Model becomes a shared blueprint - textual or diagrammatic; basis to develop more detailed and concrete aspects.
18
Q

Dilemma - over-specified apps.

A

Best models appear simple and clear to users and are task-oriented.

May be difficult in practice - models can become complex, with more functions and ways of doing things added to original conceptual model.

19
Q

Simplest way to develop a conceptual model.

A

Say the product will be ‘like’ some other product or system, then detail specifically how the new product will differ.

20
Q

Sources on which to base your decisions, and the kinds of technique for capturing the conceptual model. (3)

A

Output of requirements activity (eg. in Volere shells).

Existing products or systems in same / similar market.

Usability goals and UX goals.

21
Q

Conceptual model (UB).

A

‘…an idealised view of how the system works - the model designers hope users will internalise’.

Model should be as simple as possible while still supporting the required functionality, and there should be a clear mapping between the system and the task domain being supported.

This helps users understand the system so their model of it is as accurate as possible.

22
Q

What a conceptual model is NOT (UB). (4)

A

The User Interface.

Users’ mental model of the system.

Set of use cases.

An implementation architecture.

23
Q

Why is a conceptual model not the UI?

A

Should describe only what people can do with the system and what concepts they need to understand to operate it.

The UI is only one possible implementation.

Model ignores details of layout, colour, etc. - forces the designer to think abstractly.

24
Q

Why is a conceptual model not a user’s mental model?

A

Mental model - construct in a user’s head consisting of knowledge of how to use something and of how it works.

A conceptual model is generated through the design process, a mental model evolves with use of a system.

Aim to help users develop a mental model of the system that matches the designer’s own model.

25
Q

Why is a conceptual model not a set of use cases?

A

Use cases capture task descriptions. They are too detailed for the conceptual model and represent only one possible interaction style.

Also, use cases focus at the task level - the conceptual model is concerned with the system as a whole.

26
Q

Why is a conceptual model not an implementation architecture?

A

The focus is on technology and instantiation issues rather than users’ interaction with the system.

Also, unlikely to be understandable by most users.

27
Q

Interface metaphor.

A

An entity which will be familiar to the intended users, and will form the basis of the design of an interaction.

Considered a central component of a conceptual model. Provides a structure that is similar in some way to aspects of a familiar entity (or entities) but has its own behaviours and properties.

28
Q

Why are metaphors / analogies popular?

A

They can communicate a lot of info, based on experience, relatively quickly.

Using a familiar entity from users’ domain can help explain by comparison something unfamiliar / hard to grasp.

29
Q

Ways metaphors are used in ID. (3)

A

To explore and communicate ideas to other designers (may be otherwise abstract and hard to imagine / articulate concepts and interactions).

May become part of the final system and help users to understand the system.

May surface as names for operations within the interface.

  • Even if metaphors don’t appear in the final system (recognisably), they can help with the thinking process that results in the final conceptual model.
30
Q

Opposition to using interface metaphors. (6)

A

Mistake to design interface metaphor to look and behave literally like the physical entity. Misses the point of the benefit.

Breaks the rules:
- Cultural and logical contradictions accommodating the metaphor in a GUI. Eg. recycle bin on the desktop.

Too constraining:

  • of designer by not providing useful functionality,
  • of user by blinding them to existence of useful functionality, eg. scrolling through list of files.

Conflicts with design principles:
- Designing interface metaphor to fit real-world constraints can force bad design solutions.

Overly literal translation of existing bad designs:
- eg. calculator - poor labelling, hard key sequences.

Limiting the designer’s imagination:
- fixate on ‘tired’ ideas - prevents thinking of new functionality.

31
Q

Examples of misleading metaphors. (2)

A

‘Delete’ command removes file header, but doesn’t remove contents from the disk (privacy / security).

‘Wizard’ metaphor good for guiding novices, or for complex / infrequent tasks.
Not appropriate to take experts, or for frequent tasks, through multiple steps - enforces linear arrangement.

32
Q

Applying knowledge from the physical world to the digital world.

A

Approach to using metaphors in ID - emulate, in the digital world, strategies and methods people commonly use in the physical world.
Eg. electronic ‘post-it’ notes.

Can be counter-productive - forcing users to do things in bizarre, inefficient or inappropriate ways.
Can happen if the activity being emulated is more complex than assumed - and so is oversimplified.

33
Q

Conceptual model and interaction types.

A

Another way of conceptualising the design space is in terms of the user’s interactions with a system.

This can help designers formulate a conceptual model by determining what kinds of interaction to use, and why, before committing to a particular interface.

Cost etc. may dictate choice, but consideration of interaction types can highlight trade-offs, dilemmas and pros/cons of an interface type.

34
Q

4 main types of interaction (plus 1 sentence on each).

A

(not mutually exclusive)

  1. Instructing.
    User issues an instruction to a system. Eg. type commands, menus.
  2. Conversing.
    Users have dialogue with the system. Eg. speak / type questions; system replies.
  3. Manipulating.
    Users interact with objects in a virtual or physical space. Eg. opening, holding, placing; use knowledge of interacting with familiar objects.
  4. Exploring.
    Users move through a virtual environment or physical space.
    Virtual - 3D worlds, virtual reality systems; familiar knowledge of moving around.
    Physical - sensor based tech - smart rooms, ambient environments; also familiar.
35
Q

Instructing.

A

Describes how users carry out tasks by telling the system what to do.
Eg. to perform operations - tell the time, print file.
Used in diverse products - PVRs, hi-fis, alarm clocks.

Unix / Linux - command-based.
Windows - menu options via mouse and control keys.

Typically commands carried out in sequence, with the system responding appropriately (or not) as instructed.

36
Q

Benefit of ‘instructing’ type.

A

Interaction is quick and efficient.

Particularly fitting where there’s a need to frequently repeat actions performed on multiple objects. Eg. saving, deleting, organising files.
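
The kind of repetitive multi-object task described above can be illustrated with a hypothetical batch operation (the helper and file names are invented for illustration):

```python
import os
import tempfile

def archive_logs(directory):
    """Rename every .log file in one sweep - issuing a single command
    rather than directly manipulating each file in turn."""
    renamed = []
    for name in sorted(os.listdir(directory)):
        if name.endswith(".log"):
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, "archived_" + name))
            renamed.append("archived_" + name)
    return renamed

# Demo on a throwaway directory:
demo = tempfile.mkdtemp()
for name in ("a.log", "b.log", "notes.txt"):
    open(os.path.join(demo, name), "w").close()
result = archive_logs(demo)
print(result)  # → ['archived_a.log', 'archived_b.log']
```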

37
Q

Conversing.

A

System acts as a dialogue partner, and is designed to respond as a human might.
2-way communication, rather than machine obeying orders.

38
Q

When is conversing used?

A

Most common for apps where the user needs to find out specific kinds of info or discuss issues. Eg. advisory systems, help facilities, search engines.

39
Q

Examples of conversing. (2 categories)

A

Simple voice recognition, menu-driven systems on a phone (banking, ticket booking, train-time enquiries - single word phrases).

More complex natural language based systems (search engines, help systems).

40
Q

Benefit of ‘conversing’ type?

A

A conceptual model that uses a conversational style of interaction allows people, especially novices, to interact in a familiar way.

Eg. ‘Ask Jeeves for kids’ - don’t need to formulate questions in keywords and Boolean logic.

Ikea virtual representative - “do you have kitchen chairs?”

41
Q

Disadvantages of ‘conversing’ type?

A

Potential misunderstandings if the system can’t answer as expected - tends to happen for more complex questions and keyword match doesn’t work.

Certain tasks are transformed into cumbersome and one-sided interactions. Eg. phone system - listening through multiple options at each layer.

42
Q

Manipulating.

A

Capitalises on users’ knowledge of manipulating objects in the physical world.

Virtual objects can be moved, selected, opened/closed.
Extensions - zooming in/out, stretching/shrinking (not possible in real world).

Physical toys/robots - act/react depending on manipulations.
Tagged objects (bricks, balls, etc) - manipulated in physical world, cause digital events to occur - sound, comment, animation.

Direct manipulation framework is influential in informing software design.

43
Q

Advantages of (physical) manipulation type.

A

Encourages creativity and playfulness.

Children using physical toys are more creative, collaborative and reflective than with the software equivalent.

44
Q

Direct manipulation.

A

Proposes digital objects be designed at the interface that can be interacted with in ways analogous to how real physical objects are manipulated.

Users feel they’re directly controlling the digital objects represented by the computer.

45
Q

3 core principles of direct manipulation.

A

Continuous representation of objects and actions of interest.

Rapid, reversible, incremental actions with immediate feedback about the object of interest.

Physical actions and button pressing, instead of issuing commands with complex syntax.

Eg. dragging a file.

46
Q

Benefits of direct manipulation. (7)

A

Help beginners learn basic functionality quickly.

Enable experienced users to work rapidly.

Allow infrequent users to remember operations.

Prevent need for error messages (mostly).

Show users immediately how actions are furthering goals.

Reduce anxiety.

Help users gain confidence and mastery, and feel in control.

47
Q

Drawbacks of direct manipulation. (2)

A

Not all tasks can be described by objects and not all actions can be undertaken directly.

Some tasks are better achieved through issuing commands.
Eg. ‘find and replace’ to correct a repeated spelling mistake - better than manually deleting each one.
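
The find-and-replace case is easy to see in miniature - one abstract command fixes every occurrence at once, where direct manipulation would mean locating and editing each instance by hand:

```python
text = "recieve the goods, recieve the invoice, recieve payment"
# A single command-style operation corrects all three misspellings:
fixed = text.replace("recieve", "receive")
print(fixed.count("receive"))  # → 3
```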

48
Q

Exploring.

A

Users move through virtual or physical environments.
Eg. virtual 3D environment, eg. inside a building.

Physical environments can be embedded with sensing technologies that cause responses to presence or certain movements.

Exploits knowledge of how people move and navigate existing spaces.

49
Q

Context aware environments.

A

Location / presence of people, leads to info on a device, or performing an action, eg. changing lights, tourist info on GPS devices.

Smart home - network of sensors for eg. temperature, presence, behaviour.

50
Q

Interaction types compared.

A

Agent (guides, wizards, assistants) - claimed more versatile than direct manipulation or command-based - delegate ‘tedious’ work.
Assumes people like to delegate rather than interact themselves.

Context-aware - monitoring enables timely info to be provided that can be helpful / critical, eg. old person falls.

Problem with above 2 - difficult to accurately predict needs, privacy issues, don’t like being told what to do.

Direct manipulation - allows users to enjoy mastery and being in control. People like to know what’s going on, be involved in action and have a sense of power over the computer.

Command-based - many tasks best carried out at an abstract level, where the user is in full control.
Abstract commands often very efficient and elegant, especially for repetitive tasks on multiple objects, eg. sorting files, deleting many emails, open/close files.
Direct manipulation / delegation can be inefficient / ambiguous.

51
Q

Another way to conceptualise interactions.

A

Coupling - between action and effect.

Tight - action causes an obvious, immediate effect. Eg. raise arm to turn on light.

Loose - not obvious. Eg. hidden sensor triggers message on someone else’s phone; possibly neither person will know the cause of the event.

Coupling is very different from the concept of user dialogue / direct manipulation with the system.
Requires different thinking on designing UX.

52
Q

Theories.

A

A well-substantiated explanation of some aspect of a phenomenon.

Numerous theories imported into HCI, providing a means of analyzing and predicting the performance of users carrying out tasks for specific kinds of computer interfaces and systems.

Primarily cognitive, social and organisational in origin.

Eg. a theory of memory used to decide how best to represent operations given users’ memory limitations.

53
Q

Benefit of theories in ID.

A

Help to identify factors (eg. cognitive, social and affective) relevant to design and evaluation of interactive products.

Eg. Fitts’s law - predicts time to reach a target using a pointing device.
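
Fitts’s law is commonly written T = a + b·log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch (the constants a and b are device-dependent; the values here are purely illustrative):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds (Shannon formulation).
    a and b are illustrative constants, not measured values."""
    return a + b * math.log2(distance / width + 1)

# A small, far target takes longer to acquire than a large, near one:
print(fitts_time(distance=800, width=20))   # ≈ 0.90 s
print(fitts_time(distance=100, width=100))  # = 0.25 s
```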

54
Q

Models.

A

A simplification of some aspect of HCI intended to make it easier to predict and evaluate alternative designs.

Typically, models are abstracted from theories (from a contributing discipline, eg. psychology) and can be applied in ID more directly.

Eg. keystroke-level model - used to predict users’ performance with different interfaces.
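
The keystroke-level model works by summing standard operator times over a task. A sketch using the commonly cited average operator values (the menu-vs-shortcut comparison is a made-up example):

```python
# Approximate operator times in seconds (Card, Moran and Newell):
OPERATORS = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # press or release a mouse button
}

def klm_time(sequence):
    """Predicted execution time for a string of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical comparison: delete via menus vs a keyboard shortcut.
menu = klm_time("MPBPB")  # think, point, click, point, click
keys = klm_time("MKK")    # think, press two keys
print(round(menu, 2), round(keys, 2))  # → 3.75 1.91
```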

55
Q

Frameworks.

A

A set of interrelated concepts and / or a set of specific questions that is intended to inform a particular domain area, eg. collaborative learning, online communities.

They help constrain and scope the UX being designed for.

56
Q

How do frameworks differ from models.

A

They offer advice as to what to design or look for. (A model is a simplification of a phenomenon).

Eg. steps, questions, concepts, challenges, principles and dimensions.

57
Q

Don Norman’s framework - 3 interacting components and what is behind them.

A

Designer: Designer’s model - of how a system should work.

System: System image - how the system actually works is portrayed to the user through the UI, manuals, help, etc.

User: User’s model - how the user understands how the system works.

58
Q

Thinking behind Don Norman’s framework.

A

Powerful visualisation - makes explicit the relationship between how a system ‘should’ function, how it is ‘presented’ to the users, and how it is ‘understood’ by the users.

Ideally, users should be able to carry out their tasks in the way intended by the designer by interacting with the system image which makes it obvious what to do.

If the system image doesn’t make the designer’s model clear to users, they may misunderstand the system, and so use it ineffectively and make errors.

59
Q

Theories, models and frameworks.

A

These aren’t mutually exclusive, but overlap in their way of conceptualising the problem and design space, varying in rigour, abstraction and purpose:

Theories tend to be comprehensive, ‘explaining’ HCIs;
models tend to ‘simplify’ some aspect of HCI, providing a basis for designing and evaluating systems;
frameworks tend to be ‘prescriptive’, providing designers with concepts, questions and principles to consider when designing for a UX.

60
Q

Paradigms.

A

Particular approach adopted by researchers / designers to carry out their work, in terms of shared assumptions, concepts, values and practices.

61
Q

Paradigms examples.

A

80s - prevailing paradigm in HCI was how to design user-centred apps for desktops.
WIMP (Windows, Icons, Menus, Pointer) - used as a way of characterising the core features of an interface for a single user. Superseded by the GUI.

90s - ‘beyond the desktop’ - VR, multimedia, pen-based interfaces, collaborative interfaces and ubiquitous computing; new challenges, questions and phenomena.

One of the main frames of reference, the single user, was replaced by eg. people, places and context.

62
Q

Ubiquitous computing (UbiComp).

A

Radical change in how computers are thought of, and interacted with.

Part of the environment - embedded in everyday objects, devices and displays.

Tech should be unobtrusive and largely disappear into the background, with people focusing on a device only when needed.

63
Q

Challenges, themes and questions raised by UbiComp. (4)

A

How to enable people to access and interact with info in work, social and everyday lives, using a variety of tech.

How to design UXs for people using interfaces that are part of the environment, but no obvious controlling devices.

How and in what form to provide contextually-relevant info to people at appropriate times and places to support them while on the move.

How to ensure info passed around via interconnected displays, devices and objects is secure and trustworthy.

64
Q

Mobile interfaces.

A

Technologies that support mobile interfaces became ubiquitous in the 2000s.

Physical controls including roller wheel on the phone’s side and rocker dial on the front, for rapid scrolling.

65
Q

Multimodal interfaces.

A

‘More is more’ principle - rich and complex UXs

Multiple modalities - touch, sight, sound, speech.

Interface techniques combined - speech and gesture, eye-gaze and gesture, pen input and speech.

66
Q

Thinking behind multimodal interfaces.

A

Assumption is they can support more flexible, efficient and expressive means of HCI - more akin to the physical world.

Different I/O can be used at once (eg. voice and gesture), or alternately (eg. speech then gesture).

Most common combination - computer speech and vision processing.

67
Q

Attentive environments.

A

Computer attends to user’s needs by anticipating what the user wants to do; response is to expressions and gestures.

Eg. cameras to detect which part of the screen a user is looking at and display accordingly, or turn TV on when looking towards it.

Needs to be very accurate and unobtrusive.

68
Q

Multimodal interfaces - research and design issues.

A

Relies on recognising aspects of a user’s behaviour.
Harder than single-modality systems, which recognise just one aspect of a user’s behaviour.

Most researched modes - speech, gesture and eye tracking.

Key question - what is gained from combinations of different I/Os and if they’re a natural way of interacting with a computer.

69
Q

Shareable interfaces.

A

Designed for more than one person to use; typically they provide multiple inputs.
Eg. SmartBoards - own pen / gestures.
Interactive tabletops - groups interact with the surface using fingers.

70
Q

Advantage of shareable interfaces.

A

Large interactional space that can support flexible group working - groups can create content together at once. (Difficult on a PC.)

71
Q

Shareable interfaces - research and design issues.

A

Early - interactional issues, eg. electronic handwriting, moving objects around display.
Since - development of more fluid and direct interaction styles with large displays - freehand and pen-based.

Key issue - can shared surfaces provide new / enhanced collaboration beyond what is possible when groups use personal devices.
How do size, orientation and shape affect collaboration.

72
Q

Tangible interfaces.

A

Sensor-based interaction - physical objects, eg. bricks, balls, cubes, are coupled with digital representations.

Digital effects can be in a number of media and places, or embedded in the physical object.

73
Q

Examples of tangible interfaces.

A

Flow blocks - depict changing numbers and lights in blocks, depending on how they’re connected. Simulate real-life dynamic behaviour.

Physical model superimposed on a digital desktop, eg. to facilitate urban planning.

74
Q

Advantages of tangible interfaces. (3).

A

Encourage learning, design activities, play and collaboration.

Physical objects and digital representations can be positioned, combined and explored in creative ways - dynamic info presented in different ways.

Physical objects can be held in both hands and combined / manipulated in ways not possible with other interfaces; can see / understand situations differently - more insight and problem-solving.

75
Q

Tangible interfaces - research and design issues.

A

Different from GUIs - alternative conceptual frameworks.

Rather than designing a dialogue between user and system, a notion of coupling between action and effect is often used.

Key design concern - kind of coupling to use.
Where is feedback provided; type and placement of digital media.

76
Q

Augmented and mixed reality interfaces.

A

Virtual representation on physical devices and objects - views of the real world combined with views of a virtual environment.

77
Q

Examples of augmented / mixed reality. (4)

A

Medicine - eg. X-rays / scans overlaid on the patient’s body.

Air traffic control - dynamic info about aircraft overlaid on video of real planes.

Head-up displays (HUDs) - markers to aid landing in poor visibility.

Maps overlaid with images.

78
Q

Augmented / mixed reality - research and design issues.

A

Key concern - what form digital augmentation should take and when and where it should appear in the physical environment.
Info needs to stand out but not distract.

79
Q

Wearable interfaces.

A

First developments - head- and eyewear-mounted cameras to record and access digital info while on the move. Possible to have instant info in front of your eyes while on the move (without pulling out a device).

Also, technology incorporated with jewelry, cups, glasses, shoes, jackets; user can access digital info on the move.

Eg.

  • Tour guides giving info as you move around a site.
  • Tiny screens on glasses, mp3 controls in a ski jacket.
  • Health monitors.
80
Q

Wearable interfaces - research and design issues.

A

Core concern - comfort, and whether the device is a fashion item or can be hidden.

Washing clothes, battery life, how to control embedded devices.

81
Q

Robotic interfaces (examples).

A

Manufacturing.

Remote investigation of hazardous locations.

Console interfaces - live video, joysticks, sensors, maps.

Domestic robots - help with certain activities; ‘pets’ with sensors to respond to certain behaviours.

82
Q

Robotic interfaces - research and design issues.

A

What is special about a robotic interface and how it differs from others.

Typically exhibit behaviours - facial expressions, walking, talking.

Moral issue - should anthropomorphism be encouraged, or should robots look and behave as robots? Should interactions be human-like or more like human-computer interaction?

83
Q

Which interface?

A

Often, the requirements for the UX that have been identified during the design process will determine the interface type appropriate and the features required.

Which is most appropriate, useful, efficient or engaging of the alternatives depends on factors including reliability, social acceptability, privacy, ethical and location concerns.

Considering different types of interface is a key part of conceptualising the interaction you want to design.
Important not to focus too heavily on one type to the exclusion of alternatives.