Block 2 - Unit 2: Establishing an initial set of requirements (Flashcards)

Deck of 145 flashcards.
1
Q

What are we trying to achieve in the requirements activity? (2)

A

Identify needs:
Understand as much as possible about users, their work, and the context of that work, so the system under development can support them in achieving goals.

Establish requirements:
Produce, from needs identified, a set of stable requirements that form a sound basis to move forward into thinking about design.

2
Q

Importance of getting the requirements right.

A

Cost of fixing errors late in the software development cycle is significantly higher than during the requirements activity.

3
Q

Requirements gathering / capture?

A

Implies requirements exist out there and we just need to get hold of them.

4
Q

Requirements elicitation?

A

Implies ‘others’ know the requirements and they just need to tell us.
Even if they ‘have’ them, they may not have articulated them yet, or have explored them in enough detail.

5
Q

Requirements analysis?

A

Investigate and analyse an initial set of requirements that have been gathered, elicited or captured.

Important step, as interpretation of facts (rather than facts themselves) inspires the design.

6
Q

Requirements engineering?

A

Recognises that developing a set of requirements is an iterative process of evolution and negotiation, and needs to be carefully managed and controlled.

7
Q

‘Establishing’ requirements?

A

Chosen to represent the fact that requirements arise from data gathering, analysis and interpretation activities and have been established from a sound understanding of the users’ needs.

Also implies requirements can be justified by, and related back to, the data collected.

8
Q

An aim of the requirements activity.

A

To make requirements as specific, unambiguous and as clear as possible.

9
Q

Functional requirements?

A

Say what the system should do.

Fundamental to understand these for an interactive product.

10
Q

Non-functional requirements?

A

Describe the various constraints there are on the product.

Can be technical (eg. needs to interface with another system);
or non-technical (eg. needs to support a particular type of user).

11
Q

Examples of non-functional requirements. (4)

A

Must be able to run on a variety of platforms.

Target platform is expected to have at least 1 GB of RAM.

Must be delivered in 6 months time (constraint of development activity rather than product).

Interactive products in general - physical size, weight, colour and production feasibility.

12
Q

Data requirements? (6)

A

Type.

Volatility.

Size / amount.

Persistence.

Accuracy.

Value.
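
Illustrative sketch only (not part of the course text): the six aspects recorded per data item. Every field name and the example values below are invented.

  from dataclasses import dataclass

  @dataclass
  class DataRequirement:
      # Illustrative record of the six aspects for one kind of data
      data_type: str     # eg. numeric share price
      volatility: str    # how often it changes
      size: str          # amount / volume expected
      persistence: str   # how long it must be kept
      accuracy: str      # precision required
      value: str         # how valuable / consequences of error

  # Hypothetical example for a share-dealing product
  share_price = DataRequirement(
      data_type='numeric', volatility='changes every few seconds',
      size='thousands of records per day', persistence='retain for 7 years',
      accuracy='4 decimal places', value='high - trades depend on it')
  print(share_price)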

13
Q

Environment requirements?

A

Context of use - circumstances in which interactive product is expected to operate. (4 aspects)

14
Q

4 aspects of environmental requirements.

A
  1. Physical environment.
    Lighting, noise, dust, etc., protective clothing worn?, crowded.
  2. Social environment.
    Collaboration / coordination.
    Eg. data shared? Synchronously or asynchronously?
    Physical location of other team members.
  3. Organisational environment.
    Eg. how good is user support? Facilities / resources for training? How efficient / stable is communication structure?
  4. Technical environment.
    Eg. What technologies will the product run on or need to be compatible with?
    What technological limitations might be relevant?
15
Q

User characteristics?

A

Capture the key attributes of the intended user group - the properties of the users that impact on ID.

16
Q

Some key user characteristics.

A

Abilities and skills:
Novice - step-by-step instructions, prompting, constrained interaction.
Expert - flexible interaction with more wide-ranging powers of control.

Nationality, education, preferences, personal circumstances, physical or mental disabilities.

17
Q

User profile.

A

The collection of attributes of a class of ‘typical user’.

One product may have several different user profiles.

18
Q

Persona?

A

Rich description of typical users of the product under development that designers can focus on and design for.

Precise, credible details help designers see personas as real potential users, and hence as people they can design for.

19
Q

Details included in a persona?

A

Unique set of goals (inc UX).

Skills, attitudes, tasks and environment (detailed and specific).

Name, photo, personal details (leisure activities etc).
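
A minimal sketch (not from the course material) of how these details might be held as structured data; the field names and the example persona are invented.

  from dataclasses import dataclass

  @dataclass
  class Persona:
      # Hypothetical structure for the details listed above
      name: str
      photo: str              # eg. path to a photo, for credibility
      goals: list             # unique set of goals, inc. UX goals
      skills: list
      attitudes: list
      tasks: list
      environment: str
      personal_details: str   # leisure activities etc.

  clara = Persona(
      name='Clara', photo='clara.jpg',
      goals=['book a family holiday quickly', 'feel confident her data is safe'],
      skills=['confident web user'], attitudes=['impatient with jargon'],
      tasks=['compares prices across several sites'],
      environment='busy home, shared laptop',
      personal_details='two children; enjoys hiking')
  print(clara.name, clara.goals)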

20
Q

MoSCoW?

A

Prioritisation - ‘Must have’; ‘Should have’; ‘Could have’; and ‘Won’t have right now’.
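
Illustrative sketch only (the requirement texts and the data structure are invented): attaching MoSCoW labels to requirements and reviewing them in priority order.

  from enum import IntEnum

  class MoSCoW(IntEnum):
      MUST = 1            # 'Must have'
      SHOULD = 2          # 'Should have'
      COULD = 3           # 'Could have'
      WONT_HAVE_NOW = 4   # 'Won't have right now'

  # Hypothetical requirements tagged with a priority
  requirements = [
      ('Export report as PDF', MoSCoW.COULD),
      ('Log in with existing account', MoSCoW.MUST),
      ('Email confirmation of booking', MoSCoW.SHOULD),
  ]

  # Review the highest-priority items first
  for text, priority in sorted(requirements, key=lambda r: r[1]):
      print(f'{priority.name:15} {text}')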

21
Q

4 issues for success of data gathering / recording sessions.

A

Setting goals.

Relationships between collector and provider (of data).

Triangulation.

Pilot studies.

22
Q

Main reason for gathering data?

A

Glean information about something.

Eg. understand how technology fits into family life;
identify which icon best represents ‘send email’.

23
Q

Why set goals for data gathering?

A

Many reasons for gathering data - need to identify specific goals.

Goals will influence the nature of sessions, gathering techniques and the analysis to be performed.

Once goals set, you can concentrate on what data to look for and what to do with it then.

24
Q

Purpose of informed consent forms.

A

Gatherer - wants to know data can be used in analysis, presented to interested parties and be published in reports.

Provider - knows information will not be used for other purposes, or in a context that would be detrimental.
Children (parent signs) - no threatening / inappropriate / embarrassing questions.

25
Q

Triangulation (def, example and reason)

A

Strategy of using more than one data gathering technique to tackle a goal, or using more than one data analysis approach on the same set of data.

Eg. observation to understand context of task performance, interviews to target specific user groups, questionnaires to reach a wider population, and focus groups to build a consensus view.

Provides different perspectives and corroboration of findings across techniques, therefore leading to more rigorous and defensible findings.

26
Q

Pilot studies.

A

Small trial run of main study to make sure method is viable.

Data gathering participants are usually unpredictable, even with careful planning.

Potential problems can be identified and rectified.
Eg. equipment, instructions, questions.

27
Q

Comments on pilot studies. (2)

A

Can use colleagues / peers for pilot study if difficult to find participants - quick and cheap.

Anyone involved in the pilot study can’t be involved in the main study - they will know more about it, which can distort the results.

28
Q

Most common data recording methods. (4)

A

Taking notes.

Audio.

Photos.

Video.

29
Q

Other data recording methods.

A

Questionnaires and diary notes are ‘self-documenting’ (the participant completes them, so no further recording is needed).

Interaction logs usually generated automatically.

30
Q

Factors affecting choice of data recording method.

A

Context, time available and sensitivity of situation - choice impacts on how intrusive data gathering is.

Most settings - audio, photos and notes sufficient.

Sometimes video is essential to record in detail intricacies of the activity and its context.

31
Q

Notes plus still camera.

A

Notes - can be difficult / tiring to write while listening / observing; easy to lose concentration and for bias to occur - can help to have someone else do it.

Handwritten notes need to be transcribed - can be first step in data analysis.

Digital images can easily be collected (with permission).

Photos / sketches can be used to capture other images and docs needed.

32
Q

Audio plus still camera.

A

Interviewer can pay more attention to interviewee, and is less intrusive than video.

May only need to transcribe sections - many studies don’t need high detail.

Audio can be supplemented with photos of artifacts / events / environments.

33
Q

Video.

A

Can be intrusive.

Attention is focused on what is seen through the lens - miss things out of view.

Sound can often be muffled - air con, wind, etc.

34
Q

4 main types of interview.

A

“Conversation with purpose.”

Open-ended / unstructured.

Structured.

Semi-structured.

Group interviews.

(First 3 named according to how much control the interviewer imposes by following predetermined questions).

35
Q

Choice of interview type (and examples)

A

Depends on purpose, questions addressed and stage in lifecycle.

Eg. 1st impressions on new design - open-ended.

Feedback on particular design feature - structured or questionnaire. (More specific).

36
Q

Unstructured interviews.

A

Exploratory and more like conversations, often going into considerable depth.

Open questions:

  • no particular expectation on format / context of answers.
  • used to explore range of opinions

Interviewee free to answer as fully or briefly as they want and both parties can steer the interview.

37
Q

Some further points on unstructured interviews. (2)

A

Advisable to have a plan of main topics to be covered. (Having an agenda is not the same as being closed to new information).

Balance between making sure answers to relevant questions obtained, and following new lines of enquiry not anticipated.

38
Q

Benefits and cost of unstructured interviews.

A

Generates rich data - gives deep understanding of topic, and often interrelated and complex.

May mention issues not considered.

Cost:

A lot of unstructured data takes time to analyse.

Can’t replicate process as each interview takes on its own format.

39
Q

ID and unstructured interviews.

A

Usually no attempt to analyse each interview in detail. Instead, use notes / audio to later go back to find main issues of interest.

40
Q

Structured interviews.

A

Predetermined questions similar to a questionnaire.

Useful when goals are clearly understood and specific questions can be identified.

Questions typically closed - set of predetermined answers - and best if short and clearly worded.

Standardised - same wording and order with each participant.

41
Q

Semi-structured interviews.

A

Combines features of (un)structured interviews, using open and closed questions.

Consistent - basic script for guidance, so same topics covered.

Starts with preplanned questions, then probes to say more until no new relevant information emerges.

42
Q

Some points for conducting a semi-structured interview. (5)

A

Shouldn’t pre-empt an answer, eg. “You seemed to like…” makes an assumption and encourages an answer that doesn’t conflict. (Especially true for children).

Body language can influence whether someone will agree with a question.

Need to allow time to speak.

Probes used to get more information, especially neutral probes, eg. “Do you want to say anything else?”; or can prompt to help along if they’ve forgotten something.

Intended to be broadly replicable, so probing / prompting should aim to help along without introducing bias.

43
Q

Focus groups.

A

Usually 3 - 10 (representative) people led by a trained facilitator.

In requirements activities it’s quite common to hold a focus group in order to identify conflicts in terminology or expectations from different sections in one department or organisation.

44
Q

Benefit of focus groups.

A

Allows diverse or sensitive issues to be raised that might otherwise be missed. Assumes people develop opinions within a social context by talking to others.

45
Q

Some points about conducting focus groups.

A

Question may seem simple, but the aim is for people to put forward their own opinions in a supportive environment.

Agenda developed to guide, but flexibility to follow unanticipated issues.

Facilitator guides / prompts, tries to get quiet ones to participate and prevent others from dominating.

Usually recorded for later analysis and people asked to explain comments further.

46
Q

Planning and conducting interviews. (5 sections).

A

Developing questions / topics.

Collating documents to give interviewee (consent form, project description, etc).

Check / test equipment.

Work out structure.

Organise time / place.

47
Q

Developing open / closed questions.

A

Open - goal of session exploratory.

Closed - need to know possible answers in advance; ideally the ‘other’ option should rarely be needed.

48
Q

Robson (2002) - guidelines for developing interview questions. (3)

A

Compound questions are confusing - split into separate ones.
Easier to respond and record.

People may not understand jargon or technical language and might not admit it, so explain in layman’s terms.

Try to ask neutral questions. Eg. not “why do you like…” - assumes they do like it and will discourage genuine answer.

49
Q

Before the interview.

A

Make sure aims of the interview have been communicated to, and understood by, the interviewee, and that they feel comfortable.

Eg. finding about their world so you can dress, act and speak in a familiar manner.

50
Q

Robson (2002) - Running the interview.

A

Better to listen more than to talk, respond with sympathy but without bias.

  1. Intro - explain purpose, reassure about ethical issues, ask if ok to record. Same for each.
  2. Warm-up session - easy, non-threatening questions first.
  3. Main session - logical sequence, more probing questions at the end.
    Semi-structured - order may vary according to natural course.
  4. Cool-off period - a few easy questions (defuse tension).
  5. Closing session - thanks and switch off equipment to mark end.
51
Q

Other forms of interview. (2)

A

Telephone - if you can’t meet up. Like face-to-face but without the body language.

Online - either asynchronous (eg. email) or synchronous (eg. instant messaging).
May be preferable for sensitive issues to be anonymous.

52
Q

Enriching the interview experience.

A

As interviews are often held in a neutral environment (away from the desk) and are an artificial situation (away from normal tasks), it can be difficult for interviewees to give full answers.

To help - props, eg. prototypes or work artifacts, or descriptions of common tasks.
These can provide a context for interviewees and help to ground data in a real setting.

Eg. keep a diary which questions will be based around.

53
Q

Groupthink?

A

A phenomenon in which individual opinions become subsumed into that of the group;
a dominant member unduly influences the group.

54
Q

Questionnaires. (3 points)

A

Clearly worded questions important when no researcher is present to encourage and resolve ambiguities.

Well-designed questionnaires are good at getting answers to specific questions from a large group of people, especially if spread across a wide area so infeasible to visit all.

Can combine with other methods, eg. information from a small number of interviews might be corroborated by sending questionnaires to a wider group.

55
Q

Choice between questionnaires and structured interviews.

A

Similar questions - choice depends on motivation of respondent to answer:

  • If high enough to answer without anyone else present, a questionnaire is cheaper and easier.
  • If they need some persuasion, better to use face-to-face.

Eg. structured interview is easier and quicker where people won’t stop to complete a questionnaire, eg. train station, walking to a meeting.

Telephone interview lies between the two.

56
Q

Developing questions for a questionnaire.

A

Can be harder than for interviews - no-one available to explain ambiguities.

Important questions are specific; where possible use closed questions with a range of answers (inc. ‘no opinion’, ‘none of these’).

Negative questions can be confusing and can lead to false information, but sometimes a mix of -ve and +ve questions can help check users’ intentions.

57
Q

Questionnaire structure.

A

Many start with basic demographic information and details of relevant experience. Useful for putting responses into context, eg. web experience - different perspective may be due to experience level.
(Only relevant contextual info needed).

Specific questions that contribute to data gathering goals usually follow.

If long, may be subdivided into related topics so easier and more logical to complete.

58
Q

General advice checklist for questionnaire design. (4)

A

Think about question order - the impact of a question can be influenced by question order.

Consider if different versions are needed for different populations.

Provide clear instructions on how to complete. Eg. say if you want a check put in one box.

Balance between white space and keeping compact. Long questionnaires cost more and deter participation / completion.

59
Q

Question and response formats.

A

Check boxes and ranges.

Rating scales:

  • Likert scales
  • Semantic differential scales
60
Q

Check boxes and ranges. (3 points)

A

Range of answers for demographic questions is predictable.

People may not like giving exact age, so ranges often given. (Avoid overlaps).

Intervals don’t need to be equal - depends on what you want to know. Eg. life insurance - most of target population working age 21 - 65.

61
Q

Use of rating scales. (3)

A

Purpose is to elicit a range of responses to a question that can be compared across respondents.

Good for getting people to make judgements about things, eg. how easy or usable etc. - important for usability studies.

Likert most common as identifying suitable statements that respondents will understand is easier than identifying semantic pairs that people interpret as intended.

62
Q

Likert scale.

A

Identify a set of statements representing a range of possible answers. eg 1 - 5, strongly agree, etc.

Used for measuring opinions, attitudes and beliefs, and consequently they are widely used for evaluating user satisfaction with products.

63
Q

Design steps for Likert scale. (3)

A
  1. Gather a pool of short statements about the subject, eg. brainstorming with peers.
  2. Decide on the scale:
    - How many points
    - Discrete or continuous
    - How to represent scale.
  3. Select items for final questionnaire and reword to make clear.
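
A minimal sketch of what steps 2 and 3 might produce, assuming a discrete 5-point agree/disagree scale; the statements and the responses are invented for illustration.

  from statistics import mean

  # Assumed scale: 5 = strongly agree ... 1 = strongly disagree (discrete boxes)
  SCALE = {5: 'strongly agree', 4: 'agree', 3: 'neutral',
           2: 'disagree', 1: 'strongly disagree'}

  # Hypothetical statements selected and reworded for clarity
  statements = ['The menu labels are easy to understand.',
                'I could find the information I needed quickly.']

  # Invented ratings: one list of responses per statement
  responses = {statements[0]: [5, 4, 4, 3, 5],
               statements[1]: [2, 3, 3, 4, 2]}

  for s in statements:
      print(f'{mean(responses[s]):.1f}  {s}')
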
64
Q

Guidelines for Likert.

A

Number of points on scale?
Small (3) if possibilities limited (Y / N / Don’t know)
Medium (5) for judgements with like/dislike, agree/disagree.
Longer (7) for subtle judgements like UX dimensions, eg. level of appeal.
Odd - clear central point; Even - no fence sitting.

Discrete or continuous?
Boxes for discrete choices, scales for finer judgements.

Order?
Positive (Strongly agree) first - matches the way people think about scoring.

65
Q

Semantic differential scales.

A

Explore bipolar attitudes about a particular item - each pair of attitudes represented by a pair of adjectives, with a (single) cross placed between them. (Clear —– Confusing).

Score for evaluation found by summing scores for each bipolar pair. Scores can then be computed across groups of participants.

Poles can be mixed so that good and bad features appear on both the right and the left.
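
A small sketch of the scoring described above, assuming a 7-point scale counted from the left-hand adjective; ‘reversed’ pairs are ones where the positive pole is on the right (mixed poles). The adjective pairs and the marks are invented.

  # Each bipolar pair: (left adjective, right adjective, positive pole on the right?)
  pairs = [('Clear', 'Confusing', False),
           ('Dull', 'Exciting', True),
           ('Helpful', 'Unhelpful', False)]

  # One participant's crosses, positions 1..7 counted from the left-hand adjective
  marks = [2, 6, 1]

  # Flip the scale where needed so a high score always means 'positive',
  # then sum across pairs for the participant's overall evaluation score
  score = sum(m if positive_on_right else (8 - m)
              for (_, _, positive_on_right), m in zip(pairs, marks))
  print('Evaluation score:', score)   # can then be compared across participants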

66
Q

2 important issues for administering questionnaires.

A

Reaching a representative sample of participants

Ensuring a reasonable response rate.

67
Q

Comment on questionnaire population / sample size.

A

Large surveys - potential respondents need to be selected using a sampling technique.

In ID small numbers of participants are common.

68
Q

Online questionnaires (general).

A

Becoming more common - can reach large numbers quickly and cheaply.

2 types - email and web-based.

69
Q

Email questionnaires.

A

Can target specific users, but likely to be simply an editable electronic version of a paper-based one - loses the advantages of web-based (unless it points to a web-based one).

70
Q

Web-based questionnaires advantages. (3)

A

Can be interactive - check boxes, menus, help screens, graphics, etc.

Can provide immediate data validation - enforce rules, eg. ‘select only one’, numerical answer only, etc.

Faster response rates and auto-transfer of responses into a database for analysis.
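
A minimal sketch of the kind of immediate validation rules mentioned above; the question names and rules are invented, and a real web questionnaire would enforce them in whatever form framework is being used.

  def validate(answers: dict) -> list:
      """Return a list of problems found in one (hypothetical) set of answers."""
      errors = []
      # 'Select only one' rule for a single-choice question
      if len(answers.get('preferred_icon', [])) != 1:
          errors.append('Q3: select exactly one icon.')
      # Numerical answer only
      if not str(answers.get('years_experience', '')).isdigit():
          errors.append('Q5: years of experience must be a whole number.')
      return errors

  # Example response that breaks both rules
  print(validate({'preferred_icon': ['envelope', 'paper plane'],
                  'years_experience': 'five'}))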

71
Q

Web-based questionnaire disadvantage.

A

Problem obtaining a random sample of respondents - can’t identify size and demography of full population being surveyed, and traditional sampling methods can’t be used.

Means respondents are self-selecting, so results can’t be generalised to offline populations.

72
Q

Convenience sampling?

A

Includes those who were available rather than those selected using scientific sampling.

73
Q

Steps for designing web-based questionnaires. (6)

A
  1. Devise as if paper-based.
  2. Develop strategies for reaching target population.
  3. Produce an error-free interactive electronic version from the original paper-based one.
  4. Make accessible from all common browsers and readable from different sized monitors and different network locations.
  5. Ensure info to identify each respondent can be captured and stored confidentially.
  6. Thoroughly pilot test:
    - review by knowledgeable analysts
    - typical user with think-aloud
    - small version attempted
    - final check for small errors
74
Q

Possible effects of online instead of paper-based. (2)

A

People may be more revealing and consistent, eg. drinking habits, rating a teacher; may feel less social pressure when more anonymous.

More potential for manipulating structure (eg. use of drop-downs), which can influence answers.

75
Q

Response rate bias.

A

Low response rate, and potential accompanying bias, can be an important issue.

If a small proportion respond to an unsolicited survey, it might be a certain type of person, eg. proactive, who answer in a biased way.

76
Q

Stratified sampling.

A

Technique in which the structure of the sample reflects the structure of the population.
Eg. 60/40 % men/women - sample: 6 men, 4 women.
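
A short sketch of the 60/40 example on this card: allocating sample slots in proportion to the population strata (the sample size of 10 matches the 6 men / 4 women example).

  population = {'men': 0.6, 'women': 0.4}   # structure of the population
  sample_size = 10

  # Allocate participants in proportion to each stratum
  # (with awkward proportions the rounded counts may need adjusting to hit the total)
  allocation = {group: round(share * sample_size)
                for group, share in population.items()}
  print(allocation)   # {'men': 6, 'women': 4}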

77
Q

When are observations used?

A

Useful at any stage of development.

Early - helps understand users’ context, tasks and goals.

Later - (eg. evaluation) - may use to investigate how well the developing prototype supports these tasks and goals.

78
Q

How observations take place.

A

User may be observed directly by an investigator as they perform activities, or indirectly by records read afterwards.

May occur in the field, or in a controlled environment:
Field - individuals observed doing day-to-day tasks in a natural setting.
Controlled - specific tasks in eg. a lab.

79
Q

Benefits of direct observation in the field.

A

Hard to explain details of tasks etc. in an interview / questionnaire; observation in the field helps fill in the details and nuances that are missed.

Provides a context for tasks; contextualising users and the product provides important info about why activities happen the way they do.

80
Q

Danger of observation in the field.

A

Can be complicated and result in a lot of data that’s not very relevant, if not planned and carried out carefully.

81
Q

Important points for doing observation in the field.

A

Particularly important to have a clearly stated goal as there is so much going on.

Respond to changing circumstances, eg. unexpected meeting relevant to goal.
Need a balance of being guided by goals and being open to modifying, shaping or refocusing study as you learn about the situation.

82
Q

When to stop observing?

A

Schedule may dictate; otherwise stop when no longer learning new things - similar behaviour patterns, understand all main stakeholders’ perspectives.

83
Q

Simple framework for observation in the field.

A

Look for:

The person - who is using the technology at any particular time?

The place - where are they using it?

The thing - what are they doing with it?

84
Q

Degree of participation - observation in the field.

A

Where a study falls on the spectrum depends on goals and practical / ethical issues.

Passive observer (outsider) - takes no part in the study environment.

Participant observer (insider) - attempts to become full member of the group.

85
Q

Passive observer (comment)

A

Difficult to be truly passive in the field, as you can’t avoid interacting with activities around you.

86
Q

Participant observer. (comments)

A

Can be difficult as observing requires a certain level of detachment, while full participation means adopting a different role.

Important to keep the two roles separate, so notes are objective, while participation is also maintained.

May not have skills to be fully involved, the group may not want you to, or there may not be time.

87
Q

Other decisions for observations. (5)

A

Level of participation to adopt.

How to record data.

How to gain acceptance in the group.

How to handle sensitive issues, eg. cultural, access to private space.

How to ensure study uses different perspectives (people, activities, job roles, etc.).

88
Q

Ethnography.

A

Traditionally - the researcher immersed themselves in the environment long-term.

ID uses an ethnographic approach through workplace site visits, and ethnographic techniques like (in)direct observation of users while engaged in work activities.

89
Q

Distinguishing feature of ethnographic studies.

A

Aims to observe situation without imposing any ‘a priori’ structure or framework, and to view everything as ‘strange’.

90
Q

Ethnography and ID.

A

Popular - if products are to be used in a wide variety of environments, we need to know the context and ecology of those environments.

Can uncover real desires, insight into lives, stories and interests - discover how products can fit intuitively into people’s lives.

91
Q

Role of observer in ethnographic studies.

A

Adopts a participant observer (insider) role as much as possible.

Technique may be used along with informants from the community, interviews with members and study of community artifacts.

92
Q

Gathering ethnographic data.

A

Not hard - gather what is available, what is ‘ordinary’, what people do, say and how they work.

Data collected has many forms: documents, own notes, pictures, room layout sketches.

Data gathering is opportunistic - often interesting phenomena only emerge later on.

93
Q

Things that may be collected / recorded in ethnographic studies. (5)

A

Activity or job descriptions.

Rules and procedures said to govern particular activities.

Talk between parties and informal interviews about observed activities.

Photos / videos of artifacts and descriptions.

Workflow diagrams.

94
Q

Direct observation in controlled environments.

A

Most common in a lab, and during the evaluation stage.

More formal (than in field), and user may be apprehensive.

Good to prepare script (like with interviews).

95
Q

Emphasis of field and lab studies.

A

Same basic data recording techniques (notes, photo, video, etc), but used differently.

Lab - emphasis on details of what people do;
Field - context is important and focus is on people’s interactions, technology and environment.

96
Q

Think-aloud technique.

A

Participant asked to verbalise their thought process during data gathering.

Silence is a problem - you don’t know what they’re thinking or looking at.

2 people working together can help, as it is more natural and revealing - they help each other along.

97
Q

Indirect observation methods. (2)

A

Diaries.

Interaction logs.

98
Q

Diaries?

A

Regular record of activities - what they did, when, how hard, reactions to the situation, etc.

99
Q

Advantages of diaries. (4)

A

Don’t take up many resources.

No special equipment or expertise.

Suitable for long-term studies.

Templates can be created online to standardise entry format and enable data to go straight to a database.

100
Q

Disadvantages of diaries. (2)

A

Need people to be reliable and remember to complete them - may need incentives, and process should be easy.

Memories often exaggerated - better / worse, more / less time.

101
Q

Interaction logs.

A

Software records users’ activity in a log to examine later.
Eg. mouse / key presses, time spent in help and task flow.

Logging number of website visitors common - can help justify maintenance and upgrades.
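
A minimal sketch of automatic interaction logging, assuming the surrounding application reports events to it; the event names, file name and record format are invented.

  import json, time

  LOG_PATH = 'interaction_log.jsonl'   # hypothetical log file, one JSON record per line

  def log_event(event: str, detail: str = '') -> None:
      """Append a timestamped record of a user action (key press, help opened, etc.)."""
      record = {'t': time.time(), 'event': event, 'detail': detail}
      with open(LOG_PATH, 'a') as f:
          f.write(json.dumps(record) + '\n')

  # Example events the application might report
  log_event('key_press', 'Ctrl+S')
  log_event('help_opened', 'search: export to PDF')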

102
Q

Advantages of interaction logs. (2)

A

Unobtrusive (but ethical issues over tracking without knowledge).

Large volumes of data can be logged automatically (but powerful tools needed to analyse).

103
Q

Dilemma of interaction logs.

A

Should we tell people we’re watching them? - users may object or change their behaviour.

Depends on personal info collected and how used.

104
Q

Data types that may be collected, depending on study goal (gathering technique should be compatible). (4)

A

Implicit knowledge or explicit, observable behaviour.

Opinion or facts.

Formal documented rules or informal work-arounds and heuristics.

Publicly accessible info or confidential.

105
Q

The task being investigated will have dimensions that influence the choice of techniques. Tasks can be categorised along 3 dimensions:

A

Sequential steps or rapid overlapping series of subtasks?

Involves a lot of info and complex displays, or little info and simple representations?

Performed by a layman or trained professional?

106
Q

Factors affecting choice of data-gathering techniques to combine. (4)

A

The focus of study.

Participants involved.

Nature of technique.

Available resources.

107
Q

How participants involved affect choice of data-gathering techniques.

A

Characteristics of target user group - children / adults; in a hurry; if their job involves interaction.

Location and accessibility of participants - focus group may be impractical.

Time needed for full attention - interview requires high level of engagement, observation allows normal activity.

108
Q

How nature of technique affects choice?

A

Special equipment or training needed?

Do available investigators have appropriate knowledge and experience, eg. at ethnographic studies, or handling video data.

109
Q

How do available resources affect choice of data-gathering technique.

A

Eg. nationwide questionnaire - time, money and people needed to produce a good design, pilot it, issue it, and collate / analyse the results.

110
Q

Interviews for data gathering for requirements.

A

Good at getting people to explore issues, and semi/un-structured interviews often used early on to elicit scenarios.

For requirements, equally important for development team members to meet stakeholders and for users to feel involved - motivation for interviews.

111
Q

Focus groups and data gathering for requirements.

A

Good at gaining a consensus view and highlighting areas of conflict and disagreement during the requirements activity.

Social level - helps for stakeholders to meet designers and each other, and to express views in public.
One set of stakeholders may not be aware their understanding of an issue / process is different, even if they’re in the same organisation.

112
Q

Focus groups and workshops.

A

Focus groups tailored for requirements activity, and requirements workshops now popular.

Workshops have been developed to have significant structure (some evolving from JAD).
Attendees carefully chosen; specific deliverables are produced.

113
Q

Questionnaires and data gathering for requirements.

A

Initial responses can be analysed to choose people to interview, or to get a wider perspective on particular issues that have arisen elsewhere.

Eg. questions on impression / opinions on current product; or opinions / views about specific suggestions for features that would be most appreciated.

114
Q

Direct observation and data gathering for requirements.

A

Observation in natural setting to understand nature of tasks and the context in which they’re performed.

Observation may be by a trained observer, or sometimes by or with a member of the design team.

115
Q

Indirect observation and data gathering for requirements.

A

Diaries and interaction logging used less often for requirements.

116
Q

Studying documentation and data gathering for requirements.

A

Good source for steps involved in an activity and any regulations governing a task.

Shouldn’t be the only source - everyday practice may augment documented procedures, and work-arounds may have been devised to make procedures work in practice.

User-centred - interested in everyday practices, rather than an idealised account.

Good for understanding legislation and getting background information on work, without taking up stakeholder time.

117
Q

Researching similar products and data gathering for requirements.

A

Helps prompt requirements.

Eg. image editor for mobile device - looked at PC image software to understand features and interaction offered.

118
Q

Contextual inquiry.

A

Popular approach to establishing requirements which emphasises the importance of context.

Apprenticeship model - designer works as an apprentice to the user.

Most typical format: contextual interview - a combination of observation, discussion and reconstruction of past events.

119
Q

4 main principles of contextual inquiry.

A

Context principle - important to go to workplace to see what happens.

Partnership principle - Developer and user should collaborate in understanding the work.

Interpretation principle (not included in UB) - observations must be interpreted in order to be used in design (in cooperation with user and developer).

Focus principle - data gathering focused on goals.

120
Q

Data gathering guidelines for requirements. (4)

A

Focus on identifying the stakeholders’ needs.

Involve all stakeholder groups.

One representative from each stakeholder group is not enough, especially if the group is large.

Support sessions with props, eg. task descriptions and prototypes.

121
Q

Benefits of group interviews. (2)

A

More complete picture of requirements - others supplement missed parts.

Quicker rationalisation of requirements - multiple people can help resolve many initial errors and inconsistencies while together.

122
Q

Contextual inquiry (UB description).

A

A structured field interviewing method, based on a few core principles that differentiate it from plain, journalistic interviewing.

More a discovery process than an evaluation process; more like learning than testing.

123
Q

How is contextual inquiry different from field observations and interviews.

A

Partnership creates dialogue - interviewer can determine not just user’s opinions and experiences, but also their motivations and context.

Interviewer needs to be part of the user’s world (often long-term) to be effective - users will become more at ease and revealing.

124
Q

When to use contextual inquiry? (3 points)

A

When you really need to understand the users’ work context - the environment can really influence how people use a product.

Also good for finding out about work practices in domains you know nothing about.

Best used early in development stages, since a lot of info gained is subjective - how people feel about their jobs, how work or info flows through the organisation, etc.

125
Q

4 techniques that have a user-centred focus and are used to understand users’ goals and tasks.

A

Scenarios.

Use cases.

Essential use cases.

Task analysis.

All may be produced as a result of data gathering sessions, and their output used as props in subsequent data gathering sessions.

126
Q

Brainstorming in ID, inc 2 key success factors.

A

Used to generate, refine and develop ideas.

Widely used in ID specifically for generating alternative designs or for suggesting new and better ideas for supporting users.

Key factors for success:

  • Participants should know the users’ goals that the product is to support.
  • No ideas should be criticised or debated.
127
Q

Task description.

A

More recent emphasis on involving users earlier in development - task descriptions are used throughout development.

3 common ones: scenarios, use cases and essential use cases - can be used to describe either existing tasks or envisioned tasks with a new product.

Often used in combination to capture different perspectives or to document different stages during development.

128
Q

Scenarios.

A

Informal and richly contextual narrative descriptions of either current or envisioned use of the interactive product or of a particular activity.

Doesn’t explicitly describe use of software / technology to achieve tasks.

Using language of users means stakeholders can understand and participate fully.

129
Q

Why are scenarios useful?

A

Telling stories is a natural way to explain what you’re doing or how to achieve something, and so the focus is likely to be what users are trying to achieve - their goals.

Understanding why people do things as they do and what they’re trying to achieve allows us to concentrate on the human activity rather than interaction with technology.

Understanding what people do now is a good starting point for exploring constraints, contexts, irritations, etc., under which they operate.

Repeated reference to a particular form, book, behaviour or location indicates it’s somehow central to the activity and we need to understand it and its role.

130
Q

Detail level in scenarios.

A

Depends on where in development they are used.

Requirements - good to emphasise:

  • context
  • usability and UX goals
  • tasks the user is performing
131
Q

Use cases.

A

Describe a sequence of actions / activities comprising the interaction between a user and the interactive product.

Aims to capture the associated actor’s goal in using the system.

132
Q

Use cases compared to scenarios.

A

Also focus on user goals, but the emphasis is on a user-system interaction rather than the user’s task itself.

Stress is still very much on the user’s perspective, not the system’s.
‘Scenario’ in this context represents one path through the use case - one particular set of conditions.

Layout is more formal, and as focused on user-system interaction rather than users’ activities, a use case presupposes technology is being used.

133
Q

When are use cases useful?

A

The kind of detail is more useful at conceptual design than in requirements or data gathering, but they have been found to help some stakeholders express their views on how existing systems are used, and how a new system might work.

134
Q

2 basic steps to develop a use case.

A

Identify actors - people or systems that will interact with the new system.

Examine these actors and identify their goal(s) in using the system - each will be a use case.

135
Q

Essential use cases?

A

Describe the interaction between a user and the system in terms of user intentions and system responsibilities.

Aim to overcome the ‘limitations’ of scenarios and use cases.

136
Q

What are the ‘limitations’ of scenarios and use cases that essential use cases aim to overcome.

A

Scenarios - concrete stories that concentrate on realistic and specific activities. Therefore, they can obscure the broader issues concerned with a wider, organisational view.

Traditional use cases - contain assumptions, eg. that there is a piece of technology to interact with; and also assumptions about the UI and the kind of interaction to be designed.

137
Q

Essential use cases compared to scenarios and use cases.

A

Essential use cases represent abstractions from scenarios - a more general case than a scenario embodies, and try to avoid the assumptions.

Steps are more generalised than a use case and more structured than a scenario.
Eg. ‘supply required info’ - nothing about choosing options or system prompts - just a simple statement. Could be achieved in a variety of ways.

Instead of actors - user roles. A user role is not a particular person or system, but a role that a number of different people may play when using the system.

138
Q

3 parts that make up the structured narrative of an essential use case?

A

A name that expresses the overall user intention. Eg. ‘retrieveVisa’.

A stepped description of user actions. Eg. ‘find visa requirements’.

A stepped description of system responsibility.
Eg. ‘request destination and nationality’.
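
A small sketch of the three-part structured narrative as data, reusing the retrieveVisa steps from this card; the second intention/responsibility pair and the pairing of the two lists are assumptions made for illustration.

  from dataclasses import dataclass

  @dataclass
  class EssentialUseCase:
      name: str                      # expresses the overall user intention
      user_intentions: list          # stepped description of user actions
      system_responsibilities: list  # stepped description of system responsibilities

  retrieve_visa = EssentialUseCase(
      name='retrieveVisa',
      user_intentions=['find visa requirements',
                       'supply required information'],          # deliberately abstract
      system_responsibilities=['request destination and nationality',
                               'obtain and present visa requirements'])  # assumed step

  for user, system in zip(retrieve_visa.user_intentions,
                          retrieve_visa.system_responsibilities):
      print(f'USER: {user:30} SYSTEM: {system}')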

139
Q

Task analysis?

A

Umbrella term that covers techniques for investigating cognitive processes and physical actions, at high abstraction level and in minute detail.

140
Q

Task analysis use.

A

Mainly to investigate an existing situation, not to envision new products.

Used to analyse underlying rationale and purpose of what people are doing - what they’re trying to achieve, why they’re trying to achieve it, and how they’re going about it.

Info gleaned establishes a foundation of existing practices on which to build new requirements or to design new tasks.

141
Q

Hierarchical task analysis (HTA)

A

Involves breaking a task into subtasks, and again, etc.

These are grouped together as plans that specify how the tasks might be performed in an actual situation.

Focuses on physical and observable actions that are performed, and includes looking at actions that are not related to software or an interactive product at all.

Starting point is a user goal - this is examined and the main tasks associated with achieving that goal are identified. These may be divided into subtasks.
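
A minimal sketch of an HTA as nested subtasks with plans, using an invented ‘borrow a book from the library’ goal purely for illustration (not necessarily the course's example).

  # Goal 0 broken into numbered subtasks; each plan says when/how its subtasks run
  hta = {
      'goal': '0. Borrow a book from the library',
      'plan': 'Plan 0: do 1-2-3; if the book is not found, do 4 then 3.',
      'subtasks': [
          {'goal': '1. Go to the library'},
          {'goal': '2. Find the required book',
           'plan': 'Plan 2: do 2.1; if author/title known do 2.2, else 2.3.',
           'subtasks': [{'goal': '2.1 Access the catalogue'},
                        {'goal': '2.2 Search by title or author'},
                        {'goal': '2.3 Browse the shelves'}]},
          {'goal': '3. Take the book to the counter'},
          {'goal': '4. Ask a librarian for help'},
      ],
  }

  def show(node, depth=0):
      # Print the hierarchy with indentation reflecting the level of decomposition
      print('  ' * depth + node['goal'])
      for sub in node.get('subtasks', []):
          show(sub, depth + 1)

  show(hta)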

142
Q

Affinity diagrams purpose.

A

An effective way of organising customer data.

Good for making sense of a lot of data from customer research and encourages full team participation in the development of a customer centred product definition.

143
Q

Affinity diagram description.

A

10 to 100 statements connected to the product are identified by the team from each customer interview.

Each statement can be interpreted to identify underlying need.

Affinity diagram - simple tool to organise these customer needs hierarchically, based on clustering and group discussion.

Most importantly - helps build team affinity with customer requirements.

144
Q

Method for affinity diagrams.

A

Assemble the team.

Write all the customer statements on individual post-it notes.

Group the statements.

Name each group.

Cluster the groups.
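
An illustrative sketch of the outcome of the grouping steps, with invented customer statements; in practice this is done with post-it notes and team discussion, and the code only shows the resulting structure.

  # Hypothetical statements written on individual post-it notes
  statements = ["I can never find last month's report",
                'Exporting takes too many clicks',
                'I want to share results with my team',
                'Search ignores the abbreviations we use']

  # Groups named by the team, each clustering related statements (underlying needs)
  affinity_diagram = {
      'Finding information': [statements[0], statements[3]],
      'Getting data out':    [statements[1], statements[2]],
  }

  for group, items in affinity_diagram.items():
      print(group)
      for s in items:
          print('  -', s)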

145
Q

What should a designer have at the end of the requirements phase to inform their understanding?

A

Understanding of the user characteristics, eg. from personas.

Set of scenarios of users’ existing behaviour, and envisioned behaviour using the product.

Set of use cases describing users’ interactions with the envisioned product. (May be derived from scenarios).

Set of essential use cases describing user intentions and system responsibilities (may be derived from use cases).

(Possibly) an analysis of small tasks into subtasks.

Set of requirements, perhaps using Volere shell. Might be functional or non-functional, the latter being informed by personas and scenarios.

Set of specific usability and UX goals.
Usability goals should be accompanied by usability criteria (exact nature of which determined by asking / watching potential users).
UX goals might also be accompanied by quantified criteria.