How the Web team used the ‘top tasks’ approach to prioritise

The Web team wanted to find out why people were coming to coop.co.uk, which is the start page for many other Co-op products and services within Co-op Funeralcare, Food, Legal Services and Insurance. Not knowing why people were coming to the homepage made prioritising our work difficult. To help with this, we recently did a piece of research into which tasks people want to complete when they visit the site. 

At this point, I was new to both user research and the Web team so this was a brilliant introduction and overview of my new team’s scope. 

Our colleagues in Insurance and Funeralcare suggested we use the ‘top tasks’ approach by Gerry McGovern which aims to help teams find out where they can add the most value to users based on which tasks are in the highest demand. The idea is to: 

  1. Identify the tasks users want to complete most often – these are the ‘top tasks’.  
  2. Measure how the top tasks are performing by looking at – amongst other things – how long they take to complete in comparison to a baseline time, whether users completed the task, and whether they followed the online journey we’d expected.  
  3. Identify where improvements could be made. Make them. 
  4. Repeat. 

How we identified the top tasks  

Through analytics, a ‘what are you looking to do today?’ feedback form on the homepage, plus having a dig around the domain during desk research, we compiled a list of around 170 tasks that are possible to complete on coop.co.uk.  

To make sure the work that followed was free from bias, and therefore meaningful, it was important to compile a comprehensive list of every single task. A complete list meant we had a better chance of finding out what the customer wants to do rather than what the organisation wants the customer to do. 

First, we shared the list with the rest of the Web team to sense check it. Then, because we knew that customers would skim-read the list when we put it in front of them, we asked the Content design community to check we’d written each task in a way that customers would understand quickly, using the language we’ve observed them using.   

After finessing, we shared the list with product teams in Co-op Digital and stakeholders in the wider business to make sure we hadn’t missed any tasks.  

Collaborating helped us whittle the list of 170 tasks down to 50 – a much more manageable number to present to customers on the homepage. 

And the 6 top tasks are…

We listed the top 50 tasks in an online survey on the homepage and asked users to vote on the top 5 reasons they come to the website.  

At around 3,000 responses we took the survey down. The results showed that the most common reasons people visit coop.co.uk are to: 

  1. Select personalised offers for members. 
  2. Look for food deals available to all customers (£5 freezer fillers, fresh 3). 
  3. Check your Co-op Membership balance. 
  4. Find what deals are available in a store near you. 
  5. Choose a local cause. 
  6. Add points from a receipt onto a Co-op Membership account. 

There were no surprises here then. 

Measuring the performance of the top tasks

In the majority of cases, we found that users succeeded in completing the tasks. This doesn’t come as a surprise because each individual product team knows why their group of users most frequently use their product or service, and they already have product and research priorities in place. 

However, this piece of work did flag up that there could be room for improvement in the following tasks: 

  1. Sign into a membership account. 
  2. Change your local cause. 
  3. Add points from a receipt onto a Co-op Membership account.  

Graph showing the average time users took to complete each top task compared with the time we expected

The image above shows how long it took on average for users to see their membership balance, choose an offer, choose a local cause and add a receipt. The orange line shows how long we’d expect it to take. The graph shows that checking a balance is quicker than we’d expected but the remaining 3 take slightly longer.

Graph showing direct success, indirect success and failure rates for each top task

The image above shows how ‘successful’ users were at seeing their membership balance, choosing an offer, choosing a local cause and adding points to their membership from a receipt. A ‘direct success’ (shown in green, the bottom band of colour) is when the user completes the task in the way we’d expect. An ‘indirect success’ (shown in orange, the top band) is when a user completes a task in a way we didn’t expect. 20% of people failed to choose an offer (shown in red at the top of the second column).

 

Graph showing the average overall score for each top task

The image above shows an ‘average overall score’ (where 10 is excellent and 1 is poor), which is worked out by combining the ‘success score’ (a scale of 1 to 3 indicating a direct success, indirect success or failure) with a ‘difficulty score’ (a scale of 1 to 7 showing how difficult the user found the task to complete). 

The idea came from Measuring and quantifying user experience, a post from UX Collective. 
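
As a rough illustration of the arithmetic, a score like this could be calculated along the lines below. The exact weighting, and which end of the difficulty scale means ‘easy’, aren’t spelled out in this post, so treat this as an assumption-heavy sketch rather than the formula we used.

```python
# Illustrative only: combines a success score (1-3, where 3 is a direct
# success) and a difficulty score (1-7, assumed here to mean 7 = very easy)
# into a single 1-10 'overall score'. The weighting is an assumption.

def overall_score(success: int, difficulty: int) -> float:
    success_part = (success - 1) / 2         # normalise 1-3 to 0-1
    difficulty_part = (difficulty - 1) / 6   # normalise 1-7 to 0-1
    return round(1 + 9 * (success_part + difficulty_part) / 2, 1)

# Example: an indirect success (2) that felt middling to complete (4) scores 5.5
print(overall_score(success=2, difficulty=4))  # 5.5
```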

What we learnt 

The big takeaways were: 

  1. There are a couple of tasks we didn’t think were important, but users did.  
  2. The work also helped us optimise our search. The feedback form on the homepage which asked what customers wanted to do had a significant number of responses looking for our gluten-free (GF) fishcakes. This was a result of Coeliac UK including them in a roundup of GF products. But they weren’t on our site. And when people searched for them, the search would return GF recipes. The Web team worked with the Optimisation and search team and now the GF products appear before recipes. Since then, there’s been a 70% increase in GF searches, and more pages are being looked at. People coming for GF products are now spending 2 minutes on the site – an increase of 30 seconds. 
  3. However, the top tasks approach may be more useful for teams with transactional services, where measuring a baseline and then measuring improvements would be easier – the Web team itself doesn’t have any transactional services. 

Top tasks approach: how useful?

Overall, the top tasks approach has been useful because it gave us data that is helping the Web team prioritise, and it set out my research priorities.  

The priorities list will keep us focussed and it’ll be useful to point to if there’s a request to work on something that we’ve found to have little value. Now we have data to help us push back against requests that don’t have customer and member needs at the centre. 

Now and next 

The Web team has created a task performance indicator for some of the top tasks identified so that as we make improvements to different areas of the website, we have something to measure against. 
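
For illustration, here’s one simple way a per-task indicator could be rolled up from individual attempts. The real Task Performance Indicator method has its own scoring rules, so this sketch – with made-up attempt data and a made-up target time – only shows the general shape of the measurement.

```python
# Illustrative only: rolls per-attempt results up into a single per-task
# number that can be tracked over time. Not the actual TPI formula.

def task_indicator(attempts: list[dict], target_seconds: float) -> float:
    """Percentage of attempts that were a direct success within the target time."""
    on_target = [a for a in attempts
                 if a["outcome"] == "direct" and a["seconds"] <= target_seconds]
    return 100 * len(on_target) / len(attempts)

baseline = task_indicator(
    [{"outcome": "direct", "seconds": 42},
     {"outcome": "indirect", "seconds": 95},
     {"outcome": "direct", "seconds": 61},
     {"outcome": "fail", "seconds": 120}],
    target_seconds=60,
)
print(f"{baseline:.0f}% of attempts met the target")  # 25% of attempts met the target
```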

If you’ve used the top tasks approach, let us know why and how useful you found it in the comments. 

Kaaleekaa Agravat
User researcher  

12 things we learnt about creating effective surveys

At Co-op Digital we sometimes use surveys to get (mostly) quantitative feedback from users. They’re quick, cheap and they’re a useful research technique to capture data at scale.

But they can also be a waste of time and effort if we do not ask the right questions in a way that will give us meaningful answers.

We’ve compiled a list of things we keep in mind when we’re creating surveys.

Strive for meaningful data

1. Be clear on the purpose of the survey

We consider what we want to be able to do as a result of the survey. For example, when we’ve collated the responses, we want to be able to make a decision about doing (and sometimes *not* doing) something. To create an effective survey, we must know how we’ll act on each of the things we learn from it.

We give our survey a goal and consider what we want to know, and who we need to ask, then draft questions that help us achieve that goal.

2. Make sure each question supports the survey’s purpose

We keep in mind what we want to do with the survey responses, and make sure each question we ask is relevant and presented in a way that will return meaningful data from participants. If we can’t explain how the data we’ll get back will help us, we don’t ask that question.

We know that the more questions we ask, the more time we ask of our users, and the more likely they will be to drop out.

3. Check a survey is the most appropriate format

It can be tempting to cram in as many questions as possible because it’ll mean we get lots of data back. But quantity doesn’t translate to quality. Consider the times you’ve rushed through a long survey and justified giving inaccurate or meaningless answers just to get through it quickly. When we find ourselves wanting to ask lots of questions – especially ones with free text boxes – a survey isn’t the most appropriate format to gather feedback. An interview might be.

Consider what we’re asking and how we’re asking for it

4. Use free text boxes carefully

Free text boxes which allow people to type a response in their own words can be invaluable in helping us learn about the language that users naturally use around our subject matter.

But they can also be intimidating. The lack of structure means people can get worried about their grammar and how to compose what they’re saying, especially if they have low literacy or certain cognitive conditions. For many people they can be time-consuming, and so can make drop out more likely.

If we use free text boxes, we make them optional where possible. It can also increase completion rates if they’re positioned at the end of the survey – participants may be more invested and so more likely to complete them if they know they’re near the end.

5. One question at a time

Be considerate when you ask questions. To reduce the cognitive load on participants, reduce distraction and ask one question at a time. Ask it once. Clearly. And collect the answer.

Questions like ‘How do you use X, what’s good about it, what’s bad?’ are overwhelming. Giving a single box to collect all 3 answers often ends up collecting incomplete answers – a participant will quite often answer only one of those 3 questions.

6. Ask questions that relate to current behaviour

People are not a good judge of their future actions so we don’t ask how they will behave in future. It’s easy for a participant to have good intentions about how they will, or would, behave or react to something but their answer may be an unintentionally inaccurate representation. Instead, we ask about how people have behaved, because it gives us more accurate, useful and actionable insights. “The best predictor of future behaviour is past behaviour,” as the saying goes.

7. If we ask for personal information, we explain why

Participants are often asked for their gender, sex, age or location in surveys but often nothing will be done with that data. If there’s no reason to ask for it, we don’t.

When there is a valid reason to ask for personal information, we explain why we’re asking.

For example, in the Co-op Health app we ask for an email address so that we can send essential service updates. Without explaining why we were asking for it, many people were reluctant to give their email because they thought they were going to get spam. By explaining the reason we were asking, and how the information would be used, users were able to decide whether they wanted to proceed.

Explaining why we’re asking for personal information is essential in creating transparent and open dialogue. It gives users the context they need to make informed decisions.

8. Avoid bias. Show the full picture

Give participants context. For example, an online survey might reveal a snippet of text for a limited amount of time in order to find out how well participants retained the information. If the survey results say that 90% of people retained the information, that’s great, but it doesn’t necessarily mean that was conclusively the best way to present the information – that’s only one of the possible ways of presenting the text. In these cases it’s better to do a multivariate test and use multiple examples to really validate our choices.
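
As a sketch of what that might look like in practice, the snippet below randomly assigns each participant one of several ways of presenting the same text, then compares retention rates afterwards. The variant names and the simulated outcomes are invented for illustration – they’re not real survey data.

```python
import random
from collections import defaultdict

# Illustrative sketch only: randomly assign each participant one of several
# ways of presenting the same snippet of text, then compare retention rates.
VARIANTS = ["short_paragraph", "bullet_list", "step_by_step"]

def simulated_retained(variant: str) -> bool:
    # Stand-in for 'did this participant answer the retention question correctly?'
    return random.random() < {"short_paragraph": 0.75,
                              "bullet_list": 0.85,
                              "step_by_step": 0.90}[variant]

results = defaultdict(list)
for participant in range(300):
    variant = random.choice(VARIANTS)  # random assignment avoids selection bias
    results[variant].append(simulated_retained(variant))

for variant, outcomes in results.items():
    print(f"{variant}: {sum(outcomes) / len(outcomes):.0%} retained ({len(outcomes)} answers)")
```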

Be inclusive, be considerate

9. Avoid time estimates

Many surveys give an indication of how long the survey will take to complete. Setting expectations seems helpful but it’s often not for those with poor vision, dyslexia or English as a second language. It also rarely takes into account people who are stressed, distracted or emotionally affected by the subject matter. Instead, we tend to be more objective when setting expectations and say how many questions there are.

10. Don’t tell participants how to feel

A team working on a service will often unthinkingly describe their service as being ‘quick’, ‘easy’, ‘convenient’ or similar. However, these terms are subjective and may not be how our users experience the service. We should be aware of our bias when we draft survey questions. So not, ‘how easy was it to use this service?’, which suggests that the service was easy to begin with, but ‘tell us about your experience using this service’.

11. Consider what people might be going through

Often, seemingly straightforward questions can have emotional triggers.

Asking questions about family members, relationships or personal circumstances can be difficult if the user is in a complex or non-traditional situation. If someone is recently separated, bereaved or going through hardship, these questions could also be distressing.

If we have to ask for personal information, we consider circumstances and struggles that could make answering this difficult for people. We try to include the context that these people need to answer the question as easily as possible.

12. Give participants a choice about following up

Sometimes survey answers will be particularly interesting and we may not get all the information we want. At the end of the survey, we ask participants if they’d be happy to talk to us in the future.

We also give people a choice about how we follow up with them. Some people may be uncomfortable using a phone, some may struggle to meet you face to face, some may not be confident using certain technologies. Ask the user how they want us to contact them – it’s respectful, inclusive and is more likely to encourage a positive response.

When choosing who to follow up with, avoid participants that were either extremely positive or negative – they can skew your data.

Time is precious – keep that in mind

At the end of the day, when people fill out a survey, they feel something about your brand, organisation or cause. They may like you or they may just want their complaint heard. Sometimes, they’re filling out your survey because they’re being compensated. Whatever the reason, view it as them doing you a favour and be respectful of their circumstance and time.

If you’ve got any tips to share, leave a comment.

Joanne Schofield, Lead content designer
Tom Walker, Lead user researcher

How contextual research helped us redesign the replenishing process in our Food stores

Every day, in every Co-op Food store, a colleague does a ‘gap scan’. They walk around the store, they spot gaps on the shelves, and they scan the shelf label with a hand-held terminal. This generates a ‘gap report’ which tells the colleague which products need replenishing. It also flags other tasks, such as which items need taking off the shelves because they should no longer be sold.

This is an essential stock management process in our stores. It ensures:

  • stock we’re low on is ordered automatically
  • customers can get the products they need
  • our stock data is accurate

However, the process is complicated. There’s an 18-page user manual explaining how to do it and on average, gap reports are 25 pages long. 

Making the essential less arduous

In the Operational Innovation Store team, we aim to simplify laborious processes in stores. Product owner and former store manager Ross Milner began thinking about how we might tackle ‘gap’, as store colleagues call it. 

He started by asking some questions:

  • How might we design a process so intuitive our store colleagues don’t need a manual? 
  • How might we help colleagues complete all the priority actions from the report immediately? 
  • How might we save 25 pieces of paper per store, per day – in other words, 22 million sheets per year? 

Learning from users

I’m a user researcher and this is the point where I joined the project. My first research objective was to discover how store colleagues go about the process at the moment, and what they find good and bad about it. To do this, I visited 5 stores. I interviewed the managers about their process – as it’s a task which usually falls to them due to its current complexity – but most importantly, I observed how they use the gap reports.

Adapting what they had to meet their needs

Being there in person in the back offices in stores gave me a far deeper insight than I would have got had I done phone interviews, or even just spoken to colleagues on the shop floor. 

Being there gave me access to reams of old gap reports stashed in the back office. It was invaluable to see how colleagues had adapted them to better meet their needs. Some of the things I saw included:

  • dividing the stack of pages into easily-managed sections
  • highlighting the information that requires action
  • ignoring all the non-actionable information on the report – some users didn’t even know what the information meant
  • changing printer settings to save paper
  • ticking off products as they complete the actions against them 

Photograph of one page of a gap report. Several numbers are highlighted. Not particularly easy to understand.

Seeing the physical artefact in its context revealed a lot of needs we might have otherwise missed, because colleagues are doing these things subconsciously and most likely wouldn’t have thought to mention them to us.

Learning from prototypes

Our contextual research has helped us identify several unmet needs. Delivery manager Lee Connolly built a basic prototype in Sketch and we mocked up a digitised gap reporting process. The design clearly separated and prioritised anything that needed store colleagues to take action. We arranged those tasks in a list so they could be ‘ticked off’ in the moment, on the shop floor.

Screenshot of an early prototype used for scanning labels on shelves

This was intended as a talking point in user interviews and the feedback was positive. The store managers were fascinated, asking when they’d be able to use it, and – unprompted – listing all the benefits we were hoping to achieve, and more.

Developing ‘Replen’: an alpha

We’d validated some assumptions and with increased confidence in the idea, we expanded our team to include a designer and developer so we could build an alpha version of the app. We call this app ‘Replen’ because its aim is to help colleagues replenish products when needed.

Interaction designer Charles Burdett began rapid prototyping and usability testing to fail fast, learn quickly and improve confidence in the interface. It was important to do this in the store alongside colleagues, on the devices they normally use. We wanted to make it feel as realistic as possible so users could imagine how it would work as a whole process and we could elicit a natural response from them. 

Photograph of a possible interface on a phone in front of Co-op Food store shelves

Profiling stores so we know where we’re starting from

Before we could give them the app, we needed to understand each trial store’s current situation, so that we’ll be able to understand how much of a difference Replen has made. We visited all the stores we’re including in our trial. Again, being physically there, in context, was vital. 

The following things have an effect on the current gap process and may also affect how useful Replen is for colleagues. We noted:

  • the store layout and the size of their warehouse
  • whether the store tends to print double-sided
  • where managers had created their own posters and guides to help colleagues follow the gap process
  • any workarounds the stores are doing to save time and effort

What’s next for Replen?

We’ve just launched the Replen alpha in our 12 trial stores.

The aim of an alpha is to learn. We’re excited to see whether it meets user needs, and validate some of the benefits we’ve been talking about. We’re also keen to see whether stores continue using any workarounds, and whether cognitive load is reduced.

We will, of course, be learning this by visiting the stores in person, observing our product being used in real life, and speaking to our users face to face. When redesigning a process, user research in context is everything. 

Rachel Hand
User researcher

Field research: designing pre-paid plans with Funeralcare

This week, the design team held a show and tell to discuss 2 questions:

  1. What is design?
  2. Why should you care?

If you couldn’t make it, we’re writing up some of the examples from different areas of design that we talked about and we’re posting them on the blog this week. They’re aimed at Co-op colleagues whose expertise is in something other than a digital discipline.

Today we’re looking at how we used field research when we were designing a digital form with Funeralcare colleagues who arrange pre-paid funeral plans in our branches. (You can also make a pre-paid funeral plan online).

Buying a pre-paid funeral plan: how the paper forms process works

Here’s how the process tends to work:

  • a client rings a local branch to make an appointment
  • the client goes into a branch
  • a colleague and the client fill out a lengthy paper form together
  • the client pays at least a deposit to their local branch
  • 3 copies of the paper form are made – one for the client, one is kept in branch and the other is sent by post to head office, which often takes 7 days
  • a colleague at head office manually copies the information from the paper form into a customer relationship management system
  • the form is dug out on the request of the client’s family after their loved one has died

The process is expensive, time-consuming and as with all human processes, there is room for error.

What we wanted to achieve

We wanted to create a more efficient, easy-to-use service. We wanted to connect the computer systems that are already being used in Co-op Funeralcare branches and integrate them directly with the customer relationship management system colleagues use in head office.

Where to start?

What we knew was limited. We had an idea of what the start of the process was like for clients and colleagues because we knew what the paper form looked like. We also had sales data from the very end of the process. But in order to improve efficiency and ease of use, we needed to know a lot more about how things are working in between these 2 points.

For both colleagues and clients we wanted to get a clearer picture of:

  • what a plan-making appointment was like (both practically and emotionally)
  • the paper form filling process
  • whether there were frustrations with the process and where they were

We arranged some site visits for our ‘field research’.

Learning from field research

We visited Co-op Funeralcare branches.

Green image with white copy that says: The approach. Get out of the office to learn and test

Why? Because when people feel at ease they’re more likely to open up and speak honestly. For this reason we spoke to our funeral arranger colleagues in a context they’re familiar with – in the rooms where they regularly create plans with clients. Talking to them here helped them relax, and because they weren’t in a place where their seniors might overhear, they were less guarded than they might be if we brought them into head office.

Seeing mistakes happen, figuring out why they happen

Talking to them was good but seeing colleagues fill out the paper plans was invaluable because we could observe:

  • the order they approached the questions
  • whether they made mistakes and where
  • if and where they used any common work-arounds where the form didn’t meet their needs

All of this helps us see where we can improve the design.

Feeding observations into the design

When we were talking through the paper forms with arrangers, they told us they often found there wasn’t enough space to capture a client’s personal requests. Because they’d come up with a reasonable work-around, it might not have been something they would have mentioned to us if we hadn’t been there, in their office, looking at the forms together. Being there helped us make sure we didn’t miss this. They showed us examples of when they had worked around a lack of space by attaching an extra sheet to the paper form they were submitting.

In the example below the client has requested to be dressed in ‘Everton blue gown with frill’ and they’ve been very particular about the music before, during and after the service.

Every funeral is different – just like every life it commemorates – and the paper form didn’t accommodate the level of detail needed. The work-around they’d come up with wasn’t hugely painful, but good design makes processes pain-free. We fed our observations back to the digital team and designed a form that allowed for individuality. It has bigger open text boxes to record more detail, as well as drop downs and free text boxes for music on the day.

Paper versus digital forms

The benefits of moving across to digital forms include:

  1. Having easier access to more data, for example, numbers on couples buying together and numbers on people buying for someone else. This is useful because we can direct our efforts into improving the experience where the most people need it. 
  2. Saving time for colleagues who manually copy paper plans to the head office system. Digital plans are sent directly to the system and are instantly visible to colleagues in head office.
  3. Reducing the number of errors in paper plans. Common mistakes include allowing people over 80 to spread their payment over instalments, and the client’s choice of cremation or burial not being recorded. The design of the digital form doesn’t allow arrangers to progress while there are mistakes like these – there’s a rough sketch of this kind of check after this list.
  4. A significant yearly saving on stamps used to send paper forms from a branch to head office.
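
As a rough sketch of the check described in point 3, a digital form might refuse to continue while rules like these are broken. The field names and structure below are illustrative assumptions, not the actual Funeralcare form implementation.

```python
# Illustrative sketch only: field names and structure are assumptions.
MAX_AGE_FOR_INSTALMENTS = 80

def plan_errors(plan: dict) -> list[str]:
    """Return the problems that must be fixed before the arranger can progress."""
    errors = []
    if plan.get("payment_method") == "instalments" and plan.get("client_age", 0) > MAX_AGE_FOR_INSTALMENTS:
        errors.append("Clients over 80 cannot spread their payment over instalments.")
    if plan.get("committal_choice") not in ("cremation", "burial"):
        errors.append("Record the client's choice of cremation or burial.")
    return errors

# Example: this plan would block progression until both issues are resolved.
print(plan_errors({"client_age": 83, "payment_method": "instalments"}))
```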

Field research helped get us to this point

We’re now testing the new digital forms in 15 branches. We’ll be rolling them out to more branches over time but we’re starting small so we can iron out any problems first.

So far, the feedback from colleagues is positive. But without observing colleagues in context, we’d be relying on assumptions about the way they work. Field research is part of what makes the pre-paid funeral plan service design-led.

If everyone shares an understanding of the benefits of being design-led, it’ll be easier for experts from around the business to work together to deliver value to Co-op customers, colleagues and the Co-op as a business. If you didn’t make the show and tell but would like to find out more, email Katherine Wastell, Head of Design.

Gillian MacDonald
User researcher

How we turn research into actionable insights

One of the main challenges for us as researchers is making our findings more actionable for the rest of the team, particularly during the discovery phases when we’re conducting exploratory research.

At least initially, early-stage research can bring more ambiguity than clarity, throw up more questions than answers, and leave us with challenges and problems that are too broad to solve.

As researchers, it’s our responsibility to use research methods that will facilitate good design and product decisions. It’s not enough to just do the research, we need to help translate what we’ve learnt for the rest of the team so that it’s useful.  

How we did it

We’re working on a commercial service. Our team’s remit was to find out what would make our service different because, in theory, if we can solve unmet customer needs, we can compete in a saturated market. A successful product or service is one that is viable, feasible and desirable.

This post covers 3 techniques we’ve recently tried. Each one helped us reduce ambiguity, achieve a clearer product direction and get a better understanding of our users, their behaviours and motivations.

1. Learning from extremes

When we’re testing for usability or we’re seeing how well a functional journey works, we usually show users a single, high-fidelity prototype. However, earlier on in the design process, we put very different ideas in front of users so we can elicit a stronger reaction from them. If we only showed one idea at that point, their reaction would likely be lukewarm. It’s when we elicit joy, hatred or confusion, for example, that we learn a lot more about what they need from a product.

In this instance, we wanted to uncover insight that would help us define what might make a more compelling product.

We identified the following problem areas:

  1. Time – people don’t have much of it.
  2. Choice – there is so much.
  3. Inspiration – people struggle with it.

Instead of prototyping something that would attempt to improve all 3 of these problem areas as we would do when testing usability, we mocked up 3 very different prototypes – each one addressed just one of the problems.

The extreme prototypes helped users better articulate what meets their needs and what might work in different contexts. It wasn’t a case of figuring out which version was ‘best’. We used this technique to test each idea so we could find out which elements work and therefore include them in the next iteration. It also started to inform the features that the experience would be made up of.

Overall though, it helped us reach a clear product direction which gave us a steer in our next stage of research.

2. Doing a diary study

A diary study is a great way to understand motivations and uncover patterns of behaviour over a period of time. We recently invited a bunch of urban shoppers to keep a diary of how they were deciding what to eat at home.

We asked them to use WhatsApp, partly because it was something they already used regularly but also because its quick, instant messages reflect the relatively short time it takes for someone to decide what to eat. The decision is not like choosing which house to buy, where you might think about and record decisions carefully in spreadsheets, so it would be difficult for people to reflect on their ‘what to eat’ decisions retrospectively in interviews. WhatsApp was a way to get closer to how choices are made so we could better understand the context, the behaviour and the decision itself.

The engagement was much higher than we expected. We captured lots of rich data including diary entries in text, video and photo format. We didn’t ask for or expect the visuals but they were very useful in bringing the contexts to life for our stakeholders.

When we looked for patterns in the data, we found that nobody behaved in the same way every day, or over time. However, we were able to identify ways people make choices. We called them ‘decision making modes’. We looked at the context in which people made decisions and the behaviour we’ve observed. Each mode highlighted different pain points, for example, they may have leftovers to use up. This enables us to prioritise certain modes over others, get alignment as a team on who we’re solving problems for, and think about features to help address some of the pain points for users.

3. Using sacrificial concepts

‘Sacrificial concepts’, a method developed by design company IDEO, allow us to gain insight into users’ beliefs and behaviour. We start with reframing our research insights as ‘How might we…?’ questions that help us find opportunities for the next stage of the design process.

For example, we found that buying groceries online feels like a big effort and a chore for shoppers because of the number of decisions involved. So we asked: “How might we reduce the number of decisions that people need to make when they shop online?”

We did this as a team and then created low-fidelity sketches, or ‘concepts’, that we were willing to sacrifice and could put in front of users.

Just like when we test extremes, the purpose of testing those ideas wasn’t to find a ‘winning version’ – it was to provoke conversation and have a less rigid interview.

Sacrificial concepts are a fast and cheap way to test ideas. No-one is too invested in them and they allow us to get users’ reaction to the gist of the idea as opposed to the interface. They give us a clearer direction on how to address a problem that users are facing and they are a good way to make research findings more usable in the design process.

What’s worked for you?

Those are the 3 main ways we’ve approached research in the early phase of one particular commercial Co-op service. We’d like to hear how other researchers and digital teams do it, and about their experiences with the techniques we’ve talked about.

Eva Petrova
Principal user researcher

We’ve added user research guides to the design system

We recently added 4 user research guides to our Co-op design system. The guides cover:

  • how to plan and prepare for research as a team
  • how to choose the most appropriate research method, and how to use it
  • how to analyse your findings, turn them into something actionable and how to share with the rest of the team
  • a list of useful research tools

We’re committed to user-centred design. We start small, we test for user value and we work iteratively – research and reacting to feedback are vitally important to us.

But it’s not easy to do good research and by ‘good’ we mean using the appropriate method and ensuring the way we do it is planned, thorough and unbiased.

You need skilled researchers.

Helping teams help themselves

We have a superb small team of researchers at Co-op Digital. We have varying backgrounds, skills and strengths, which means asking for advice on how to tackle something is always interesting and useful. But we can’t cover all our projects, at all product phases, all the time. There aren’t enough of us.

So in a few cases, we set the direction and encourage teams to do their own research, with us there as support.

Sharing the knowledge

The idea came while I was writing a research strategy for a team working on a particular scope of work. I realised the strategy could be adapted into more of a ‘how to do research at the Co-op’ guide. For years, in an unofficial, internal-channels-only type way, several researchers had been writing guides on things like ‘how to recruit users / gather informed consent / write a survey’. It made sense to pull this useful work together and make it open and available in our design system.

Presenting guidance in this way means that instead of individual researchers writing a strategy for a team now and then, we can give more general advice. We want to make sure people are doing good, useful research in the right way, and we can now add value to any digital team by giving them a ‘best practice’ resource.

We’re working on it

As always, the plan is to iterate and add more guidance as we go. We’ve been looking towards the GDS service manual as an excellent, detailed resource for planning research.

As we come across a method that we don’t have a guide for, we’ll write one up. For example, the next time one of our researchers needs to conduct a diary study they’ll write that up.

We know we need to improve how we help people choose the appropriate method so that people don’t just fall back on conducting usability testing in a lab or face-to-face interviews. As Vicki Riley says in her post, matching our research approach to the project is really important.

We’d like your feedback on it too so if you have any, leave a comment.

Simon Hurst
Lead user researcher

From digital design manual to design system

In January 2017 we released our digital design manual. Now, 18 months later, the design manual has evolved into a design system.

Although it’s been live for months, it’s still (and always will be) a work in progress. We’re sharing it now in line with one of our design principles: ‘we design in the open’.

You can see the Co-op Digital design system at coop.co.uk/designsystem

Evolution of the design manual

The aim of the design manual was to help teams release things faster so they could focus on user needs rather than on making basic design decisions. We iterated and added new pages as and when there was a need, for example, we added guidance on forms, guidance on tables and our secondary colour palette.

But a year after its release, we were at a point where more of our digital services were going live, so we revisited the design manual and asked if it could be more useful.

What we learnt from our users

We asked our design, content design and user research community how well they felt the guidance in the design manual was serving its purpose. Feedback was mixed but most people felt that it didn’t quite cover enough.

A workshop made it clear that users wanted:

  • example-driven patterns
  • guidance on when to use specific design and content patterns
  • examples of ‘experimental’ patterns
  • all guidance in one place

Afterwards, we dedicated time to making some major changes to the content as well as the navigation and layout.

Design system – nice for what?

We found lots of excellent examples of design systems in our research but good, solid design systems are good and solid because they’re unique to the organisation or business they belong to – they meet the needs of designers, content designers and researchers who work there.

The Co-op Digital design system includes our:

  • pattern library
  • content style guide
  • guidance on our design thinking
  • design, user research and content design principles
  • tools (front-end and prototyping kits)
  • resources (Sketch files and brand assets)

Most importantly it’s a living document. Like all good design systems, ours will never really be ‘finished’ but it’ll evolve as our teams and services do. Over the past 6 months we’ve established processes that allow our team members to contribute to the system.

We audited our existing design work and looked for similarities and opportunities to create familiarity. We’ve also spent a lot of time building the foundations for a stronger and more collaborative team through workshops, design crits and making sure we design in the open.

Familiarity over consistency

The Co-op is an organisation with very distinct businesses which all need to communicate with Co-op members, customers and users in an appropriate and relevant way. For example, the way we communicate with a customer in a food store is likely to be very different to how we speak to a customer in a funeral home.

So it’s likely that our services might feel different. And that’s ok, as long as they feel familiar.

A design system lets us create this familiarity. It should lead to a much more unified experience when people interact with different Co-op services.

Pattern library

We’ve started creating a library of design patterns – this is the most significant addition to our previous guidance. It doesn’t replace our design guidelines, it just pulls out the useful stuff we learnt designers look for when they’re designing a service. 

Each pattern will have:

  • an example, ie, a visual example of the pattern
  • an associated user need
  • design guidance, ie, how you use it
  • accessibility guidance

Our colour palette pattern is a good example.

The library will be the de facto standard for how we display certain types of information.

Anyone at Co-op can contribute by submitting their pattern to the design community. They can do this by filling in a form justifying why users outside their service might benefit from this pattern, or why what they have created is an improvement on a current one.

Evolution of the design system

We want to continuously improve the guidance designers are looking for. To help us do this we’ll speak to more of the external teams that work with us and invite our colleagues in the Brand and Marketing teams to contribute their own guidance. We’ll also put the system to the test with teams as they build more Co-op services.

Watch this space.

Jack Sheppard
Matt Tyas