Quantitative Research Methods Flashcards

(73 cards)

1
Q

What is Quantitative Research?

A

Quantitative research in UX focuses on collecting numerical data to measure what is happening, how often, and to what extent. It is highly structured and often used to validate patterns across large sample sizes for statistical reliability.

The study of what can be measured and observed.

The results will be consistent and generally agreed on by all parties involved.

2
Q

What are some examples of quantifiable and qualitative human traits?

A

Height and weight are quantifiable measurements, as they can be counted and measured against a standard scale.

Personality traits like “nice” or “funny” are subjective and therefore qualitative, as they cannot be measured against a standard scale.

3
Q

What are some examples of quantitative measurements?

A
  • Bounce rates
  • Time on task
  • Conversion rates
  • Order size (number of items or their value)
  • Number of visitors to a site (physical or digital)
  • Average group size
4
Q

What are the most common summary statistics in quantitative research, and what do they help with?

A

Mean (average value), median (middle value), mode (most common value), and range (difference between the highest and lowest values)

They are helpful for a high-level understanding of averages and trends.
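
For concreteness, here is a minimal sketch of these four values computed with Python's standard library; the task-time numbers are hypothetical.

```python
# Minimal sketch: mean, median, mode, and range over a hypothetical
# sample of task-completion times (in seconds).
from statistics import mean, median, mode

task_times = [32, 41, 41, 47, 52, 58, 64, 75, 90]  # hypothetical data

print("mean:  ", mean(task_times))                   # average value
print("median:", median(task_times))                 # middle value
print("mode:  ", mode(task_times))                   # most common value
print("range: ", max(task_times) - min(task_times))  # highest minus lowest
```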

5
Q

What is a major limitation of quantitative research?

A

It doesn’t tell us how to fix things or why things happen, and it doesn’t surface information that isn’t specifically asked for.

Quantitative UX research alone does not explain the why behind issues—it only shows the what and how often.

It lacks human context and does not capture intent, emotion, or environment.

6
Q

Despite its limitations, how can quantitative research be useful?

A

It can act as a benchmark for future studies and for qualitative research.

7
Q

What does quantitative research (e.g., analytics, usage metrics) tell you?

A

It tells you what is happening.

8
Q

What kinds of insights does quantitative research show?

A

It shows behavior patterns, frequency of events, drop-offs, etc.

9
Q

What can happen if you rely only on quantitative metrics?

A

Metrics can be misleading without understanding how and where a product is being used.

Fixes based solely on numbers may be superficial or entirely misdirected.

10
Q

Why should quantitative data always be complemented with qualitative methods?

A

To understand the root cause and context of user problems.

11
Q

What are the three main focuses of research that inform product design work?

A

Insight-driven, evaluative, and generative research.

More commonly called planning, discovery/exploration, and testing/validation.

12
Q

How do these research focuses impact product design beyond initial ideation?

A

They influence maintenance and support goals for ongoing projects.

13
Q

Is design and iteration finished once a product ships?

A

No, design and iteration continue after launch, especially with the ease of pushing updates in digital products.

14
Q

What mindset should product teams maintain after shipping a product?

A

Always be looking to measure and improve the products delivered.

15
Q

What does insight-driven research seek to understand?

A

It seeks to understand what the problem space is, why the problem exists, and where opportunities lie.

It’s not about measuring how a solution performs; it’s about discovering what problem to solve and what success could look like.

16
Q

When is insight-driven research typically conducted?

A

In the early stages of projects.

When it’s used:
- Before solutions are designed
- When you’re still learning about users, behaviors, pain points, or market gaps
- To identify opportunities and unmet needs

17
Q

What is a simple example of insight-driven research?

A

Looking at the rate at which users succeed when attempting to accomplish their own goals.

18
Q

How does insight-driven research appear in quantitative research?

A

As benchmarks and initial KPIs (e.g., success rates, task completion, drop-off points)

19
Q

What are some insight-driven research methods?

A
  • Qualitative interviews
  • Diary studies
  • Contextual inquiries
  • Behavioral analytics (early signal exploration)
20
Q

What is the purpose of evaluative research?

A

To assess how well a design or solution performs against the goals, benchmarks, and KPIs that were identified.

21
Q

When is evaluative research conducted?

A
  • During and after design implementation
  • To validate if solutions are usable, useful, and effective
  • To refine and iterate on designs
22
Q

What is the quantitative benefit of Evaluative Research?

A

Can show improvements at scale (e.g., % reduction in errors, time on task)

23
Q

What does evaluative research help determine about a user flow?

A

How effective the flow is, both before and after proposed changes.

24
Q

Can evaluative research use both quantitative and qualitative methods?

A

Yes, it may include both types of measurements.

25
What advantage does quantitative evaluative research provide?
It can pinpoint opportunities for improvement on a larger scale than qualitative research.
26
What is the relationship between Insight-driven research and Evaluative research?
- Insight-driven -> uncovers what to solve
- Evaluative -> measures how well you solved it
One informs and supports the other. They are complementary, not competing.
27
What is Generative Research?
Generative research (also called data-driven design) helps create and inspire new ideas, concepts, and design directions through research. It is used to understand users deeply, uncovering their needs, goals, behaviors, motivations, and pain points, so that teams can generate solutions grounded in real needs. Though primarily qualitative, generative research can be informed by quantitative data to identify areas of opportunity. It focuses on creating the right solution, not just testing an existing one.
28
At what stage of the design process is generative research conducted?
In the early phase, before defining solutions.
29
What Are System Analytics?
System analytics (or site analytics for web experiences) are one of the most common forms of quantitative UX research. These tools provide passive, ongoing data collection about user behavior, system usage, and interaction flows.
30
Why are System Analytics valuable?
- Low cost of entry: Many platforms like Google Analytics, Mixpanel, or Hotjar offer free or low-cost plans.
- Passive data: Once properly implemented (i.e., correct event tagging and tracking setup), they run in the background, collecting data at scale.
- Breadth of data: They provide large-scale insights that are difficult to gather manually.
31
How does System Analytics support Insight-driven research?
- Highlighting where users struggle
- Revealing usage patterns and drop-off points
- Surfacing unexpected behaviors at scale
- Informing hypotheses for further qualitative investigation
32
What are user flows?
Paths users take through your product.
33
How are user flows used in the context of system analytics?
To identify navigation patterns, drop-offs, or shortcuts.
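As an illustration, here is a small sketch of how drop-off points in a flow might be computed from step counts; the funnel steps and numbers are hypothetical.
```python
# Hypothetical funnel: how many users reached each step of a checkout flow.
funnel = [
    ("View cart",       1000),
    ("Start checkout",   640),
    ("Enter shipping",   480),
    ("Enter payment",    330),
    ("Order confirmed",  260),
]

# The drop-off rate between consecutive steps shows where users leave the flow.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")
```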
34
What are Entry Points?
How users access your experience (homepage, search, direct link, etc.)
35
How are Entry Points used in the context of system analytics?
Understand product discovery and funnel efficiency
36
What are Search Logs?
Terms users input in your search field
37
How are Search Logs used in the context of system analytics?
Reveal natural mental models, unmet needs, and poor navigation/taxonomy
38
What are Demographics?
Age, gender, location, device, etc. (depending on consent and data availability)
39
How are Demographics used in the context of system analytics?
Ensure product aligns with target user segments
40
What are Surveys?
Surveys are proactive research tools used to gather direct input from users. Unlike analytics, which passively collect data based on user behavior, surveys allow you to ask users directly about their intentions, needs, experiences, and satisfaction.
41
Why are surveys versatile?
Surveys are uniquely positioned because they can serve both purposes:
1. Insight-driven: Learn about user expectations, goals, intent, and unmet needs.
2. Evaluative: Assess satisfaction, usability, NPS, post-task impressions, CSAT, etc.
42
What are common Survey Goals?
1. Understand user intent: "What brought you here today?"
2. Evaluate usability: "How easy was it to complete this task?"
3. Measure satisfaction: "How satisfied are you with your experience?"
4. Assess expectations vs. reality: "Did this experience meet your expectations?"
5. Capture Net Promoter Score (NPS): "How likely are you to recommend us to a friend or colleague?"
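Since NPS comes up above, here is a minimal sketch of the standard NPS calculation (promoters rate 9-10, detractors 0-6); the responses are hypothetical.
```python
# Hypothetical 0-10 answers to "How likely are you to recommend us?"
scores = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3, 9, 8]

promoters = sum(1 for s in scores if s >= 9)    # ratings of 9 or 10
detractors = sum(1 for s in scores if s <= 6)   # ratings of 0 through 6

# NPS = % promoters minus % detractors, reported on a -100 to 100 scale.
nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:.0f}")
```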
43
What are the key strengths of Surveys?
1. Direct user voice: You're not guessing based on behavior; you're hearing it firsthand.
2. Scalable: Can be deployed broadly for statistical significance.
3. Targeted: Can be triggered by specific actions or events (e.g., cart abandonment, feature usage).
44
Let's say analytics show 60% of users abandon the checkout page. Analytics tell you: There's a problem. What can a post-abandonment survey reveal?
It can uncover the reason why users abandoned the checkout—such as unexpected costs, confusing forms, or lack of trust—providing context that analytics alone cannot show. Together, these methods give you both the “what” and the “why.”
45
What Is Tree Testing? (aka Tree Jacking or Reverse Card Sorting)
Tree testing is primarily an evaluative UX method used to measure how users navigate and understand a proposed information architecture or taxonomy. Designers input a hierarchy into a testing tool, and users are asked to complete tasks by selecting where they'd look for information. The data collected—such as success rates and click paths—helps identify where the structure aligns with or deviates from user expectations. While not inherently generative, tree testing can inform new navigation structures through repeated iteration.
46
How does Tree Testing work?
1. A designer or researcher inputs the proposed hierarchical structure (i.e., site map or menu) into a tree testing tool.
2. Users are given realistic tasks (e.g., “Where would you go to change your billing address?”).
3. Users complete the tasks by navigating through the text-based tree structure—no visual design or UI distractions.
4. The system tracks:
   - Success rates
   - Directness of path
   - Time to complete
   - Drop-offs or wrong turns
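To make the tracked metrics concrete, here is a small sketch that computes success rate and path directness from hypothetical task results; the record format is an assumption, not any particular tool's export.
```python
# Hypothetical tree-test results: one record per participant attempt.
# "clicks" is the number of nodes visited; "optimal" is the shortest possible path.
results = [
    {"success": True,  "clicks": 3, "optimal": 3},
    {"success": True,  "clicks": 5, "optimal": 3},
    {"success": False, "clicks": 7, "optimal": 3},
    {"success": True,  "clicks": 3, "optimal": 3},
]

success_rate = sum(r["success"] for r in results) / len(results)

# Directness: share of successful attempts that followed the optimal path.
successes = [r for r in results if r["success"]]
directness = sum(r["clicks"] == r["optimal"] for r in successes) / len(successes)

print(f"Success rate: {success_rate:.0%}")  # 75%
print(f"Directness:   {directness:.0%}")    # 67%
```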
47
Why is tree testing considered evaluative in nature?
Because it measures how well users understand and navigate an existing or proposed structure.
48
How can tree testing also inform generative decisions?
When combined with other insights, it can guide how to redesign a taxonomy.
49
What Is Eye Tracking?
Eye tracking is a method of measuring where and how users look at a screen while interacting with a digital interface. It uses specialized cameras and software to track a participant’s eye movements in real time, capturing:
1. Fixations (where the eyes stop)
2. Saccades (quick jumps between fixations)
3. Scan paths (the visual journey across the interface)
4. Heatmaps (aggregated visual attention across users)
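As a rough illustration of how a heatmap aggregates attention, the sketch below bins hypothetical fixation coordinates into coarse screen cells; real eye-tracking software does this (with smoothing and duration weighting) for you.
```python
from collections import Counter

# Hypothetical fixation points (x, y) in pixels, pooled across several users.
fixations = [(120, 80), (130, 90), (540, 60), (125, 85), (800, 400), (135, 95)]

CELL = 100  # aggregate attention into 100x100-pixel grid cells

heat = Counter((x // CELL, y // CELL) for x, y in fixations)

# Cells with the most fixations correspond to the "hottest" areas of a heatmap.
for cell, count in heat.most_common(3):
    print(f"grid cell {cell}: {count} fixations")
```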
50
For what types of screen-based digital products is eye tracking especially valuable?
E-commerce websites, landing pages, product detail pages, navigation-heavy platforms, and interactive dashboards or tools.
51
What does eye tracking reveal in UX research?
Visual hierarchy effectiveness, CTA visibility, label clarity, distracting or ignored elements, and the difference between user attention and the intended design path.
52
What Is A/B Testing?
A/B testing is a controlled experiment in which two or more versions of a webpage, screen, or element are shown to users at random. The goal is to compare performance and identify which version performs better based on predefined KPIs. It falls under site/system analytics as a quantitative evaluative method.
53
How does A/B testing work?
1. Create a hypothesis
2. Design two versions
   - Version A: Original (Control)
   - Version B: Modified (Variant)
3. Split users randomly
4. Measure KPIs
5. Analyze statistical significance (use tools like Google Optimize, Optimizely, or VWO to determine if differences are meaningful)
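As a sketch of step 5, here is a hand-rolled two-proportion z-test comparing conversion rates for A and B; the counts are hypothetical, and in practice the tools named above report significance for you.
```python
import math

# Hypothetical results: conversions out of visitors for each version.
conv_a, n_a = 480, 10_000   # Version A (control)
conv_b, n_b = 540, 10_000   # Version B (variant)

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)

# Two-proportion z-test: is the difference larger than chance would explain?
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
```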
54
What are the benefits of A/B testing?
- Data-driven decision making
- Low-risk experimentation
- Micro-optimization of UI/UX
- Easy to implement with tools
55
What are limitations of A/B testing?
- Retrospective: A/B testing only tells you what worked, not why it worked.
- No user intent captured: It lacks insight into user motivations, emotions, or unmet needs.
- Requires high traffic: Meaningful results need large samples, especially for small changes.
- Single-variable focus: Complex UX problems often involve multiple layers that A/B testing can't isolate.
- Can lead to local optima: You may optimize a small part of the experience without improving the whole.
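To illustrate the traffic requirement, here is a rough per-variant sample-size estimate using the standard normal-approximation formula (95% confidence, 80% power); the baseline and lift figures are hypothetical.
```python
import math

# Hypothetical scenario: 5% baseline conversion, hoping to detect a lift to 6%.
p1, p2 = 0.05, 0.06
z_alpha = 1.96  # 95% confidence (two-sided)
z_beta = 0.84   # 80% power

# Approximate visitors needed per variant to reliably detect this difference.
n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(f"~{math.ceil(n):,} visitors per variant")  # roughly 8,000+
```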
56
What is A/B testing ideal for?
Testing micro-interactions like CTA wording or image choice.
57
What must guide effective A/B testing?
A clear hypothesis and alignment with established KPIs.
58
What limitation does A/B testing share with other system analytics?
It provides a backward-looking view of user behavior and does not capture user intent or subjective experience.
59
What Is Card Sorting?
Card sorting is a generative UX research method used to understand how users categorize information. Participants are asked to group topics, labels, or features into logical categories based on their own understanding.
60
What does card sorting help designers and researchers do?
Create or refine information architecture, define navigation menus, improve labeling (taxonomy), and uncover user mental models.
61
What are the three types of card sorting?
1. Open card sorting: Participants create their own categories and group items freely.
2. Closed card sorting: Participants sort cards into predefined categories provided by the researcher.
3. Hybrid card sorting: Combines both; users can place cards into existing categories or create new ones.
62
What does Card Sorting reveal?
1. Natural language use
2. User expectations
3. Preferred groupings
4. Inconsistencies in content structure or labels
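One common way to quantify open card-sort results is a co-occurrence count: how often two cards land in the same group across participants. A minimal sketch with hypothetical groupings:
```python
from collections import Counter
from itertools import combinations

# Hypothetical open card sort: each participant's groupings of card labels.
participants = [
    [{"Returns", "Shipping"}, {"Billing", "Invoices"}],
    [{"Returns", "Shipping", "Invoices"}, {"Billing"}],
    [{"Returns", "Shipping"}, {"Billing", "Invoices"}],
]

# Count how often each pair of cards was placed in the same group.
pair_counts = Counter()
for groups in participants:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs that frequently co-occur suggest categories users expect to see together.
for pair, count in pair_counts.most_common():
    print(f"{pair}: {count}/{len(participants)} participants")
```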
63
What is moderated product testing?
Testing conducted in person or live online with a facilitator, which enables deeper probing into decisions.
64
What is unmoderated product testing?
Testing in which a researcher sets up tasks and questions for customers to respond to on their own time, without a facilitator present. It is mostly done remotely using digital tools.
65
What is Intercept Testing?
A method of randomly requesting feedback from customers as they engage with a product.
66
When are quantitative research methods most effective?
When a large number of participants or customers are assessed for statistically significant outcomes.
67
Can quantitative methods be useful in smaller studies?
Yes, methods like eye tracking and card sorting can show value with only a handful of participants.
68
When are quantitative methods best used?
When the research question has a tangible, measurable outcome:
- You need statistically significant insights.
- You’re evaluating behavior at scale.
- You're trying to measure performance, trends, or changes over time.
69
Example Research Questions for Quantitative Methods:
“What percentage of users abandon the checkout process?”
“How many users click on the ‘Learn More’ button?”
“What is the average time spent completing a task?”
“How does Version A of a landing page perform compared to Version B?”
These types of questions are ideal because they are objective, countable, and can be tied directly to UX KPIs (e.g., conversion rate, completion time, error rate).
70
What are some key shortcomings of quantitative methods?
They may not effectively capture user motivation or task comprehension.
1. Lacks context and motivation: It tells you what happened, but not why. It doesn't capture user emotions, intent, or decision-making processes.
2. Requires sufficient data: Tools like analytics and A/B testing rely on large sample sizes for statistical significance. With limited users, results can be misleading or inconclusive.
3. Not helpful in early-stage innovation: If you're developing a product for a new market, there may be no behavioral data to analyze yet. In these cases, generative or qualitative methods are more effective.
4. Doesn't reveal comprehension or usability pain: If users drop off during a process, analytics might show where they dropped off, but not whether it was due to confusion, frustration, or unmet expectations.
5. Hard to explore unknowns: Quantitative methods measure what you already know to look for. They don't uncover new problems, unmet needs, or hidden opportunities the way open-ended methods do.
71
When are qualitative methods more effective than quantitative methods?
When trying to understand user motivations, intentions, and how users comprehend a task.
72
When should quantitative methods be complemented by qualitative research?
1. When you need to explore user motivations or thought processes.
2. When you're building something new with little existing user data.
3. When you have limited access to users or small sample sizes.
4. When you're exploring early concepts, designs, or task comprehension.