Small is beautiful. User research and sample sizes

At the Co-op, we use both qualitative and quantitative approaches to make decisions about products. This post is about doing qualitative research with a small-ish number of users – and why that’s useful.

As a rule of thumb, qualitative and quantitative approaches are useful for different things:

  • If you want to understand the scale of something (such as how many users do X or Y, or how much of something is being used), use quantitative methods, like surveys.
  • If you want to understand why people do something and how they do it, qualitative methods such as interviews, or observing how users behave with a given task (user tests), are better.

User research isn’t a one-off event. It’s a process. By researching with a handful of users at a time, iteratively, and supported by data on user behaviour, we build better digital products and services.

How many users are enough?

We don’t need to observe many users doing something to identify why they’re behaving a certain way. Jakob Nielsen, a usability expert, found through research with Tom Landauer that 5 users are sufficient. Beyond 5, your learning diminishes rapidly: “after the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new”. Here’s Nielsen’s graph of these diminishing returns:

[Graph: percentage of usability problems found (y axis) against number of test users (x axis). It shows that we find close to 100% of usability problems with a relatively small number of test users.]

Source: Jakob Nielsen
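
As Nielsen explains, the curve comes from a simple model: if one test user uncovers, on average, a proportion L of the usability problems (around 31% in Nielsen and Landauer’s data), then n users uncover 1 − (1 − L)^n of them. A quick sketch of those diminishing returns in Python:

```python
# Nielsen & Landauer's discovery model: n test users find a
# proportion 1 - (1 - lam)^n of the usability problems, where
# lam is the share a single user finds (about 0.31 on average).
LAM = 0.31

def proportion_found(n, lam=LAM):
    return 1 - (1 - lam) ** n

for n in (1, 2, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems found")
```

Five users already surface around 84% of the problems; each extra user after that adds very little.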

Analysing user data and doing user research are complementary in developing digital products and services. Data can help identify issues to then test with users, but it can also run the other way: at the Co-op, we’ll often see things while doing user research which we’ll then investigate with data. It works both ways.

[Diagram: the cycle of user research at the Co-op]

There’s cumulative value in cycles of research

The cycle of user research shown in the diagram is how product teams work at the Co-op. We typically iterate in weekly or fortnightly cycles.

For example, the Membership team has a rhythm of fortnightly cycles, often focused on discrete aspects of Membership. These research cycles accumulate learning over time and create an understanding of Membership and of users’ needs. Cumulatively, this gives clarity to the whole user journey.

During the last 10 months, the Membership team has surveyed 674 users and interviewed 218. The value of this research accrues over time. The team has learnt as they’ve developed the service and iterated on the findings, getting to know far more than if they’d done the research in one block of work.

That’s why observing relatively few users doing a task, or speaking to a handful of users about something they’ve done, is enough to give us the confidence to iterate a product and continue to the next test. This is especially true when user research is used together with data on user behaviour, and even more so when it’s done regularly to iterate the product.

Error-prone humans are there in quantitative research too

It’s not uncommon for people to give more weight to quantitative data when they’re making decisions. Data is seen as being more factual and objective than qualitative research: “you only spoke to 10 people, but we have data on thousands…!”

Data hides the error-prone human because humans are invisible in a spreadsheet or database. But even though they’re hidden, the humans are there: from the collection of the data itself and the design of that collection, to the assumptions brought to the interpretation of the data and the analysis of it.

Not all data is the same

Survey data, based on responses from users, is distinct from data collected on behaviour through Google Analytics or Mixpanel. Poor survey design produces misleading insights.

Getting useful behavioural data from a user journey depends on setting up the right flows and knowing what to track using analytics software. Understanding what constitutes ‘good’ data and how to apply it is something we’re working on as a community of user researchers at the Co-op.
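
To illustrate what ‘knowing what to track’ can look like in practice, here’s a minimal sketch using Mixpanel’s Python library; the project token, user id and event names are hypothetical placeholders, not our actual setup:

```python
# A minimal sketch of event tracking with the Mixpanel Python
# library (pip install mixpanel). The token, user id and event
# names below are hypothetical placeholders.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

# Each step of a journey becomes a named event, with properties
# that let you segment the funnel later (e.g. by device).
mp.track("user-123", "Viewed intro page", {"device": "mobile"})
mp.track("user-123", "Started form", {"step": 1})
mp.track("user-123", "Completed form", {"step": 4})
```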

Research is a process, not a one-off

Digital product teams usually have a user researcher embedded. They can also draw on the conversion and optimisation team, with its quantitative and statistical skills and tools. The user researcher gets the whole product team involved in user research. By doing this, the team gains greater empathy for and understanding of their users (and potential users).

This diagram shows some of the methods we use to help us make good product decisions and build the right thing to support users:

[Diagram: methods we use to make good product decisions and build the right thing to support users]

As user researchers, our craft is working out how and when to deploy these different methods.

Part of the craft is choosing the right tool

Let’s take an example from a recent project I was involved in, Co-op wills, where we used both quantitative and qualitative research.

We had customer data from the online part of the service and analysed this using a tool called Mixpanel. Here’s part of the journey, with each page view shown as a bar with the corresponding number of visitors:

[Screenshot: Mixpanel funnel for part of the wills journey, showing visitor numbers per page view]

From this, we could determine how many users were getting to a certain page view of the wills service, and where they were dropping out.
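
As a concrete illustration, here’s the kind of drop-off calculation a funnel like this supports; the page names and visitor counts are made up, not the wills service’s real figures:

```python
# Hypothetical funnel: (page, visitors) pairs for a user journey.
funnel = [
    ("Landing page", 1000),
    ("About you", 620),
    ("Your estate", 410),
    ("Review and pay", 230),
]

# Drop-off between consecutive steps shows where users leave.
for (page, n), (next_page, next_n) in zip(funnel, funnel[1:]):
    print(f"{page} -> {next_page}: {1 - next_n / n:.0%} drop out")
```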

The data let us see the issue and the scale of what was happening, but it didn’t give us a sense of how to resolve it.

What we didn’t know was why people were dropping out at different parts of the journey. Was it because they couldn’t use the service, or didn’t understand it, or because they needed to get more information before they could complete it?

To help us understand why people were dropping out, we used user data to create hypotheses. One of our hypotheses was that “users will be more likely to complete the journey if we start capturing their intent before their name and email address”, i.e. show them the service before asking them to commit.
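
Once a change ships, a hypothesis like this can also be checked quantitatively, for instance with the kind of statistical tools the conversion and optimisation team uses. Here’s a sketch comparing completion rates before and after a change, using statsmodels and invented numbers:

```python
# Hypothetical before/after comparison of journey completion rates
# using a two-proportion z-test. All counts are invented.
from statsmodels.stats.proportion import proportions_ztest

completed = [118, 172]    # journeys completed, before vs after
visitors = [1000, 1050]   # journeys started, before vs after

z_stat, p_value = proportions_ztest(count=completed, nobs=visitors)
print(f"before: {completed[0] / visitors[0]:.1%}, "
      f"after: {completed[1] / visitors[1]:.1%}, p = {p_value:.4f}")
```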

Through user research with small numbers of users, we found a series of different reasons why people were behaving in the ways Mixpanel had shown us: from confusion over mirror wills, to uncertainty about what the service involved, to requiring more information.

We only got this insight by speaking to and observing users, and it allowed us to design ways to fix the problems.

It’s not an exact science – and that’s OK

Research is not an exact science. Combined with user data, user research is a process of understanding the world through the eyes, hands and ears of your users. That’s why it’s central to the way we’re building things at the Co-op.

James Boardwell
User researcher

9 thoughts on “Small is beautiful. User research and sample sizes”

  1. Sian Thomas March 9, 2017 / 5:33 pm

    Interesting. Wondering if you also have a mix of cross-sectional and longitudinal, as the membership model you work with lends itself to that approach.

    And presumably you use quant to identify participants with characteristics useful for qual.

    Thanks for sharing.

  2. James Boardwell March 10, 2017 / 8:41 am

    Thanks for the questions, Sian. The user researchers in product teams are iterating quickly and as such don’t do longitudinal qual work (things change fast around discrete behaviours and journeys), but they do track key events linked to behaviour over time (via analytics). However, other areas of the business do run classic longitudinal market research, e.g. tracking of brand awareness.

    And yes, quant does influence the recruitment quota, although it’s mainly at the level of member, lapsed member, customer non-member (where there tend to be more significant behavioural differences) and then demographic data informed by our business insight teams.

  3. plcsystems March 10, 2017 / 10:18 pm

    By quant, are you referring to the person who does the quantitative analysis or the quantitative analysis itself?

    • James Boardwell March 12, 2017 / 7:48 am

      Hello, throughout the post when I say quantitative I’ve mostly been referring to the analysis itself. Is there a specific mention you are querying?

  4. Lisa Duddington April 19, 2017 / 10:40 am

    Very interesting read, James, and totally agree with your approach. Too many people rely solely on data and guesswork/assumptions without ever speaking with end users. It’s reactive rather than proactive and huge issues can remain undiscovered for years!
