How our users influenced our new forms guidance

The Experience Foundations team recently updated our guidance on forms in our Experience Library.

Diagram of a web form with markers showing where different form elements are placed and what they do

Originally, this piece of work was about making sure we included all the components we knew our community needed. But as we got further into the research, we found our community needed guidance on aspects we hadn’t considered.

In the Co-op Customer Products team, we value having the autonomy to be flexible and divert from a plan when we need to. So, with the aim of meeting newly-discovered user needs, we pivoted our work.

A recap: the importance of familiarity in design

Co-op has many business areas and many products and services within them. In most, there’ll be at least one form that, for example, asks a customer for personal details to register for something, or asks for a customer’s payment details so they can buy something. Although our business areas are diverse, it’s important that all of them use a common design language to create familiarity. This means that interactions work in the same way in each service and each one feels like it belongs to Co-op. This helps us build trust with our users.

Starting with research

As always, we started with research. This involved one-to-one conversations with colleagues from a wide range of teams and disciplines to better understand their needs. The conversations helped shape our focus and we ended up with a list of form components that our community needed. Our goal was to design, build and release these components into the Experience Library.

New information = new direction

However, during the conversations, a new theme emerged around the structure and layout of forms.

Although our original research didn’t highlight this as an area of need, feedback from newer members of the community made it clear that this was an important area, but one surrounded by ambiguity.

Some of the questions they asked included:

  • What spacing should I use between field sets, labels and buttons?
  • Is it better to use single or double columns for laying out forms?
  • Where should I position buttons?
  • How should I show optional or required fields?

We realised our community needed more than form components and guidance on when and how to use forms – it needed guidance on designing single or multi-page forms from the ground up.

Getting a deeper understanding of the problem

The outcome we were aiming for was for all design colleagues to be comfortable and confident setting up forms for the products and services they look after. So we needed to understand the practices that already existed, and also what change was needed.

Here are 4 things we did to deepen our understanding.

1. Carried out user research

We facilitated conversations with newer members of the design community. We asked questions like:

  • When designing a form, what did you feel unsure about?
  • What guidance did you expect to find in the Experience Library for designing a form?
  • Is there anything else you feel would have helped you in designing a form?

These open questions helped us understand which areas needed clear guidance.

2. Reviewed Co-op forms

When we started the forms work, we reviewed forms across Co-op products and services. We went back to that analysis, but this time we focused on layout and structure – and therefore usability – rather than on individual components.

This helped identify variations in form design across Co-op.

3. Analysed other design systems

We looked at the guidance other design systems had on form design. An important take-away was how some design systems used visuals to explain guidance.

4. Revisited best practice

We revisited forms specialists Caroline Jarrett and Adam Silver’s work on forms and considered how it applies to our form design at Co-op.

Designing the ‘Form design’ page

Content designers and interaction designers worked together to define the topics our guidance should cover. We had some difficult conversations to help us understand different takes on the same topic, and we often challenged each other’s views. Referring back to the insights allowed the team to have those difficult conversations. We reflected on different perspectives and continually iterated on the content. Through this process we were able to define our stance on things like button positioning. Once we were aligned, we added detail and referenced the insights we’d found in the research.

We also found the need to visualise some of our guidance. For this, we defined a visual language that can be used on diagrams in the future.

Diagram showing how a form in one column is easier to use than a form in 2 columns

We shared early versions of the page with people from the Design, Product and Engineering communities to review. We value different perspectives, and want others to contribute to our work. By designing in the open, our community sees our approach, which helps build trust. Showing them the depth of our process encourages buy-in and the early feedback in the reviews was positive.

A ‘people-first’ design system

Our new Form design page wouldn’t exist without the feedback from our community. We designed it for them, based on conversations we had with them. Delivering guidance that meets their needs shows that we’re listening, we’re collaborative and this builds trust with our colleagues. Our work is less about a page in a design system, and more about the people that use it. We’ll keep listening and iterate when we need to. Like the rest of the Experience Library, this page will evolve with our community’s needs.

Imran Afzal, Lead Designer

Simulating in-store experiences with physical prototyping

The Customer Experience (CX) team has been working with our Co-op Food colleagues to look at how we can improve customer service in our stores. When the CX team help the wider Co-op business solve problems, our process usually involves prototyping. Because we often work in the digital space, our prototypes are often on a screen too.  

This challenge however focuses on in-person experiences in our stores. So, for this piece of work, testing in a physical space and in a more tangible way felt more appropriate. 

Before trialling in a store, we wanted to test our ideas in a low-risk environment where we wouldn’t be in the way of day-to-day store life but where we could still involve colleagues who bring other expert knowledge.  

We used a ‘desktop walkthrough’ method to simulate the in-store experiences. 

We are writing this post to share: 

  • why we chose the desktop walkthrough method as a prototyping tool 
  • how we used it to get a better understanding of our trial logistics 
  • what we learnt about using a less familiar method 

Exploring the problem with a team of experts 

To discover how we can improve customer service in store, we needed to understand the current customer experience and identify pain points.  

We formed a small team of colleagues across Food Operations, Insight and Research, and store managers. Each discipline has its own perspective, and involving the right people meant we were more likely to focus on the right things.

Defining the problem and prioritising 1 concept to tackle

Based on our research, we identified 3 areas we could explore that would help our customers receive (and our colleagues to be able to provide) better service. They were: 

  1. Technology – how might we use new and existing technology to make improvements across different parts of the customer journey? 
  2. People – how might we help our colleagues to prioritise service through training and recognition? 
  3. Insight – how might we make better use of the insight we have on our customers, colleagues and stores to make improvements to customer service? 

We chose to explore the ideas focused on people because we identified the most value, opportunity and feasibility there. We specifically wanted to look at how we might recognise colleagues who were great ‘customer service advocates’ in stores.  

We defined our hypothesis and used it to develop a plan for our trial in a real store. We established the basics of good customer service, and we defined the role of a customer service advocate.  

Choosing an inclusive and lightweight way to test  

To choose the right prototyping method for the scenario, we revisited what we wanted to learn. Our learning objectives were to: 

  • get a shared understanding about the end-to-end customer experience 
  • understand the important interactions between colleague and customer journeys 
  • identify other problem areas so we can address them 

We decided to try a desktop walkthrough because: 

  1. It brings experts from different areas together, in one room, without distraction, so we could explain why we had arranged the walkthrough and what we planned to do afterwards in real stores. Each person has a unique perspective and can raise challenges the rest of the group wouldn’t necessarily consider. 
  2. We could figure out our next steps without getting in the way of, or taking time away from, in-store colleagues. 
  3. We had a hunch it might help us notice things about the physical space that we wouldn’t have with a different method. For example, shelving and fixtures tend to be tall and make it difficult for colleagues to see each other providing good service.  

The set-up 

As the name implies, the walkthrough takes place at a desk. The Format team shared a generic store floor plan which we printed out and laid on the desk. Then we added 3D card shelving, tills and self-checkouts on top of the paper layout to recreate a mini-scale, realistic-as-possible store. We used figurines to represent colleagues and customers. 

photograph shows 3D card shelving, tills and self-checkouts on top of the paper floor plan
We added cardboard tills, self-checkouts and shelving on the floor plan.

Walking through scenarios 

We chose to walk through common scenarios for store colleagues. For example: 

  • opening the store  
  • navigating around the store at the times when there are fewer colleagues on the shop floor  
  • operational tasks such as unloading deliveries or scanning gaps on the shelves – times where a colleague is less available to directly help customers  
  • customer interaction trade-off scenarios like helping a customer to find an item while being asked over headset to pack a Deliveroo order 
image shows the full floor plan and has figurines at either side that represent customers and colleagues
We grouped customer and colleague figurines around the floor plan as we walked through scenarios.

We also took note of real colleagues’ shifts, lunch breaks and task lists so we could get an idea of how busy the space would be. Weaving this into our walkthrough brought an additional layer of understanding for the people in the room. 

A desktop walkthrough meant we got a bird’s eye view of colleagues moving through our model store for the duration of their shift. It also helped us see where, when or why colleagues interact with customers. 

image shows the back of a colleague figurine facing the store floor plan and other figurines in the distance
Customer team member number 3 is in the stockroom dealing with a delivery here

Building value for our CX team and the wider community 

Our desktop walkthrough was a quick, cheap way to prepare for an in-store trial. Bringing our ideas to life in this way meant we picked up on things that might not work in stores and we could adapt our concepts without wasting time or money. A lot of this was down to 2 ex-store managers who joined us for the walkthrough – their input was invaluable. Their first-hand experience of working in – and running – stores meant they could sense-check our assumptions which made the scenarios we walked through far more realistic. We made changes to our experiment plan based on their insight and we believe this contributed to the success of our first store trial. 

Since our desktop prototype, we have progressed to trialling our customer service advocate concept in stores and we continue to learn and adapt. 

Steph Clubb, Lead CX visual designer  

Hannah McDonald, CX strategist 

Managing queues inside and outside Co-op stores during the pandemic 

When the UK Government announced lockdown on 23 March this year, many of our teams pivoted quickly to respond to a new set of priorities. A new team was formed to work out how Co-op Food stores could safely and efficiently manage customer traffic inside stores, and queues outside.  

The team came from different areas of the business so our different expertise meant we had varying priorities and different ways of working.  

Together, we moved quickly to learn about the problem and get something in place as soon as possible. It felt like an essential piece of work that we needed to get right for the sake of our colleagues’ and customers’ safety, as well as the need to keep communities well-fed.  

Here’s how we approached the research and what we learnt during the process.    

Store colleagues self-organised  

We quickly learnt that brilliant colleagues across our 2,000 stores were managing queues themselves – which sometimes might not have been great for their safety, and meant fewer colleagues were on hand on the shop floor to help customers. Colleagues were also trying to manage customers’ panicked behaviour.  

We held an ideation session to align ourselves 

Early on in the session, it was clear that all ideas had limitations, but moving quickly and getting value to our colleagues and customers as soon as possible was the most important thing. The team adopted a ‘test and learn’ mindset – even those who were unfamiliar with digital ways of working. By the end of the session, everyone understood how we’d approach testing these ideas.   

Testing signs from head office in stores 

Stores had already been sent the signs below to display with rules and advice on them.

Unsurprisingly, using signs was one of the ideas that came out of the ideation session, but the signs’ size, shape and positioning hadn’t been considered with this specific message in mind. 

We visited different types of stores, at different times, to understand how customers interacted with the signs, to learn what was and wasn’t working. 

Customers didn’t notice the signs   

As they were, the signs weren’t enough because: 

  • they didn’t stand out amongst marketing  
  • they were positioned differently at each store  
  • customers were preoccupied by their stress and/or anxiety and were therefore less observant  
  • we were assuming customers would be actively looking for the information themselves 

Through regular playbacks, we shared photos, quotes and talked through examples with our newly-formed team. Working openly like this meant there was no resistance to trying a different solution.  

Attempt 2: testing digital solutions from external providers   

In a matter of days, we’d written a set of requirements based on what we’d learnt from our observations of the signs from head office. The business came back with 3 types of digital solutions, from 3 different suppliers, that we could now trial in stores.  

These were:  

  • a digital screen attached to the ceiling 
  • a freestanding digital screen 
  • a traffic light system  

 Each of them had:  

  • a sensor/camera to detect customers  
  • a counting system to count customers entering and leaving the store 
  • technology to allow colleagues to manage and override capacity if they needed to
  • a visual prompt to tell customers when to stop and when to enter (the sketch after this list shows the shared logic)   
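
Stripped back, all 3 solutions share the same underlying logic: count customers in and out against a capacity limit, let colleagues override the count, and show a stop or enter prompt at the door. Here’s a minimal sketch of that shared idea – our illustration only, not any supplier’s actual implementation:

```python
# Minimal sketch of the logic shared by all 3 solutions.
# Illustration only - not any supplier's actual implementation.
from typing import Optional


class EntryController:
    def __init__(self, capacity: int):
        self.capacity = capacity  # maximum customers allowed in store
        self.inside = 0           # running count from the sensor/camera
        self.override: Optional[str] = None  # colleague-forced state

    def customer_entered(self) -> None:
        self.inside += 1

    def customer_left(self) -> None:
        self.inside = max(0, self.inside - 1)  # never count below zero

    def reset(self, count: int = 0) -> None:
        # colleagues can correct the count if the sensor drifts
        self.inside = count

    def set_override(self, state: Optional[str]) -> None:
        # "ENTER", "STOP", or None to return to automatic counting
        self.override = state

    @property
    def prompt(self) -> str:
        """The visual prompt shown to customers at the door."""
        if self.override is not None:
            return self.override
        return "ENTER" if self.inside < self.capacity else "STOP"
```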

Agreeing success criteria  

Together, our newly-formed team agreed that a good solution:  

  • would count customers accurately  
  • would help store colleagues feel confident and safe 
  • would not significantly interrupt the customer journey 

So we focussed on customer behaviour during our store visits and used surveys to ask colleagues how they felt about each system. The data, and the number of resets, helped us learn about each system’s accuracy.  

Best of the bunch

The traffic light system best met customer and colleague needs because: 

  • it’s a system that is instantly recognisable 
  • it could be positioned at eye level 
  • there was no information to read 
  • the sounds and lights worked together 

 And we moved forward with it.  

Testing 2 versions of the traffic light system   

So far, we’d only quickly validated the concept and tested 2 variations of the system in trial stores:   

  1. A freestanding traffic light. 
  2. A traffic light that was fixed to the shop doors and controlled their opening and closing.  

Observations told us the fixed door traffic light was most effective. It took the responsibility of deciding whether to enter the store away from customers, and it helped colleagues enforce rules on numbers in store by automatically stopping too many customers from entering.  

Traffic light testing - fixed

Rolling it out to seasonal stores  

Based on our research and new government advice, Co-op execs gave the go-ahead to install a fixed traffic light in stores where demand would be highest over the summer. The next focus is to work out all the practicalities of getting it into stores, how it works with current systems and the time it will take to roll out. 

Another team will continue to observe and learn from its performance and take it forward. 

Pandemic. Safety. Urgency.

This piece of work was challenging – we were working remotely most of the time, and visiting stores for contextual research during lockdown was sometimes nervy. It was exhausting – we were working so quickly, and having to communicate well so our new team understood each step and nobody felt excluded. But it also felt very worthwhile. At the beginning of lockdown, many teams around Co-op sprang into action, pivoting from roadmaps, hell-bent on supporting colleagues on the frontline. This was one of those projects. Our team added value and learnt a lot. 

Cassie Keith
Content designer

Remote research: Funeralcare’s ‘start an arrangement online’ form

The Co-op Funeralcare homepage includes a section dedicated to arranging a funeral. Within it there’s the option to start an arrangement online. If someone chooses to do this, we ask them to fill in a form to give us details about the person who has died and their relationship to them. They will then get a call back as soon as possible. 

The ‘start an arrangement online’ service is just the first step in the process. The form went live around 2 years ago and iterating it hasn’t been a high priority, despite a lot of other changes being made to how Co-op Funeralcare offers its services. Analytics have shown us that the drop-out rate on the form is very high – though it has been difficult to track where people drop out, or whether they ring the number on the page instead. 

But, instead of removing the form that appears to be ‘failing’ according to dropout rates, we carried out research because we know: 

  1. There is a small subset of people who use this form who are dealing with unexpected or tragic deaths, and it is vital we provide an alternative to phone calls or in-person meetings for them to start an arrangement. While the number is small, their needs are especially important. 
  2. In principle, taking information online rather than in a conversation means colleagues have more time to speak with people who are further into the process and who need personalised care and understanding from a human. 

So, how might we make the digital experience as simple and pain-free as possible to avoid any unnecessary stress? 

How we recruited for the research sessions 

Our 10 participants were people who had been the primary funeral arranger for funerals that took place between 6 and 12 months ago. We wanted to avoid causing more distress for anyone who had lost someone within the last 6 months – this seemed too soon. That said, we were carrying out the research during the coronavirus crisis, which increased the risk of having recruited someone who had experienced loss recently. 

We recruited a mix of ages, genders and levels of tech literacy (we always aim for this), and we also made sure participants had used a range of funeral providers. 

What we tested

We tested the live version because it had undergone minimal iteration since it went live. But we also tested 2 variants which we designed based on what we know now, 2 years on. We wanted the form to feel like something someone could do as much or as little of as they wanted during an emotional time, and we would take care of the rest. A staged prototype was better at conveying the idea that it’s ok for someone to move around the form and stop when they want. 

However, the more interesting findings for us were the small improvements we can make to the current form that we feel will make a big impact. 

3 things we learnt  

1. The visual design needs to feel different

We heard that the live form looks ‘cold’ and one participant likened it to a tax return. We’ve been thinking about how we might change it so that it feels warmer and more personal, rather than an intimidating, tick-box-style chore. We also need to balance that with simplicity to reduce the risk of overwhelm. 

2. The content needs to reduce cognitive load for users  

We observed several instances where participants weren’t sure how to answer a question. For example, the form asks “Were they known by another name?” and one participant’s response during the research was “different people called him different things. He had at least 3 nicknames.” They felt the form should indicate why it was asking this and what it might be used for.  

Similarly, there was a feeling that the form needed to prompt people about what information to give in a box that says “Is there anything else you’d like to tell us or think we should know?” One participant pointed out that “you’re not really thinking straight when you fill this in” but they were still keen to add an answer.  

We’ve been thinking about how we might reduce the cognitive load for people by including prompts and examples, and being clear why we’re asking for certain information. It’ll help people give the information we need and – importantly – help them feel reassured and confident they’re doing it ‘right’. The form also asks for information that the Funeralcare team don’t necessarily need at this stage so removing those questions would mean we’re not adding to the load.  

3. The form needs to fit within the service as a whole

We found that participants are often very concerned with ‘getting it right’ and we heard people say “I want it to be perfect”. This understandably comes up a lot because giving someone a ‘good sendoff’ is seen as a way to honour the dead and show respect. We’ve been thinking about how we might reassure people that they’ve completed the form correctly, and make sure they’re confident they know what happens next – in this case, they wait for one of our Funeralcare colleagues to get in touch. One participant said: “I didn’t know what to do… I spent so much time organising that I didn’t have time to grieve.”   

We looked at the form in isolation, but making changes to it will help us collect data and see how it fits within the wider service – so we can work out how to simplify or communicate what happens throughout the entire process. 

Remote research: tread carefully, be sensitive (even more than usual)

Death is an emotional subject and research around funerals must always be carried out with acute sensitivity. However, carrying out research around funerals in the midst of a pandemic has been particularly challenging – doing it remotely makes it harder to pick up on non-verbal communication so we’ve had to tread even more carefully than usual. It’s important to remember that it’s impossible to know what participants are going through, but we worked with an awareness that mortality is at the forefront of many people’s minds. 

To try and put participants at ease we sent instructions for downloading the remote clients and tips for video calls. We also sent over photos of us to introduce ourselves a little better and spent longer than we usually would chatting and easing into the start of each session. 

Over the 2 days of research, we contended with:  

  • technical difficulties – we had to factor in extra time for things going wrong  
  • emotional stress likely brought on by lockdown – for example, pets and children in the same room as the participant 
  • reading emotional signals from participants and knowing how best to guide the session 
  • participants realising they’re not prepared for the discussion. We tried to make sure they felt supported and heard despite the session not going the way we had intended.

Next up?

We have written up and played back our findings to various stakeholders and are now looking at how we can make measurable improvements to the experience. 

Jamie Kane
User researcher

We’re using ‘behaviour modes’ to keep users at the centre of decisions

Over the past 18 months, the Digital team has been working with the Food business on an online delivery and collection service. During the crisis and lockdown, there has been an increased demand from shoppers wanting to avoid spending time in physical stores. This means our online growth has accelerated significantly in the last few weeks.

Prioritising user needs, always

As designers and researchers, we know we need to keep the needs of users at the centre of the development process. But it’s particularly important right now – at a time when we’re probably working a bit more quickly and iterating faster than normal – to remind ourselves and stakeholders to keep coming back to user needs.

But how do we keep user needs at the forefront of everyone’s minds? We’ve been looking at ‘behavioural modes’. And although we started this work way before lockdown, it feels very relevant to talk about it now.

Personas = ok(ish)  

One technique design teams use to make sure services are user-centred is to create persona documents. These typically describe people segmented by demographics.

For example, Jane is 70 and shops mid-week. Since Jane is an older person, we need to design something that doesn’t rely on technology.

But beware! Demographics become problematic when designing for needs as they introduce biases and assumptions.

At Co-op we have the Insights team and others dedicated to understanding shopper segments – all of which have their own specific purpose and value.

Introducing behavioural modes 

Focusing on behaviours rather than personas helps us:

  • think about the context that people are in
  • think about how decisions are made
  • understand that behaviours can be exhibited by anyone at any given time
  • design the user experience around different behaviours we’ve seen in research

Researcher Indi Young writes in her 2016 post Describing personas that to understand a user’s world and better meet their needs, we must be able to empathise with them. And that creating personas for users is not as conducive to this as insights into their behaviours. She writes:

Cognitive empathy requires not a face, not preferences and demographics, but the underlying reasoning, reactions, and guiding principles. Without these you cannot develop empathy. And if you cannot develop empathy, you cannot wield it — you cannot walk in someone’s shoes.

Early on in the project our researcher Eva Petrova helped the team to develop behavioural modes. They were based on what we learned from a diary study looking at how people make choices and decisions around buying food, planning meals and what influences them. (You can read more detail on the diary study in Eva’s post). We’ve subsequently validated the modes with further research to make sure they still hold true.

Here’s our first attempt at creating posters out of our behavioural modes.

photograph of 5 printed-out A4 attempts at the behavioural modes work

How to communicate behavioural modes

Communicating ideas with everyone who’s involved in the service is challenging at an organisation as big as Co-op.

However, this article by Spotify’s design team describes how they approached a similar challenge to ours:

Representing personas poses a tricky challenge: we want them to be relatable, but they’re not 1:1 matches with real people. Believable human traits and flaws help create empathy with problems and needs. But we don’t want groups to be wrongly excluded based on the characteristics we’ve picked. So finding a balance is a crucial step if we’re to create useful and believable archetypes.

To communicate them, they created an interactive website, shared across Spotify offices through announcements and posters because they “wanted to create fun, playful ways for the teams to incorporate them into their workflows.”

This inspired us. We agreed that to communicate the importance of behavioural modes to the wider teams, we needed to create something that:

  • explained what behavioural modes are
  • avoided making the behaviours seem prescriptive (or like the personas some stakeholders may be familiar with)
  • invited collaboration through wider teams’ insights
  • was distinct, memorable and visually engaging
  • didn’t stray too far from the Co-op style

We brought them to life

We’re lucky to have user researcher Maisie Platts working with us who – as part of her MA studies – had been investigating different ways of visualising personas as part of a design process.

Together with interaction designer Mehul Patel we set out to bring our behavioural modes to life.

Our first route used robots to personify each mode. Robots can convey expression while avoiding any associations with specific demographic characteristics such as age or gender. Here are Maisie’s initial behavioural mode robot sketches.

white paper with 5 blue pen hand-drawn robots

In the initial sketches the robots have screens which display something relevant to each behavioural mode.

However, we decided the message was lost because the screens were relatively small. So, we tried combining simplified versions of the robots with playful typography to communicate each behaviour instead. For example, ‘Strive for a balance’ takes the form of weighing scales. Here’s how we explored robots and playful typography.

5 posters with bright colours and playful typography; robot doodles are incorporated but are not the focus.

It’s OK to push it too far, you can always pull it back

We felt this first visual route had the potential to alienate the very people we were hoping to share the behavioural modes with. The danger comes from introducing something different: unfamiliarity often creates uncertainty and an initial reluctance to accept it. That’s only natural, and especially so in the case of radical robots!

Next, we tried an illustrative style that was more of an extension of Co-op’s own internal illustrations. This time we featured only hands, keeping our distance from demographic characteristics while preserving a connection to ‘real people’.

After feedback and further iteration, we realised that the illustrations felt flat – especially given that we’re trying to communicate behaviours, that is to say, an action, a ‘doing’. We introduced a sense of movement to help bring them to life. For example, the fingers pulling the ribbon and the stack of foods toppling over.

So here’s where we are now – a much more familiar style.

the latest version of the posters in Co-op style

Before lockdown, we’d started to print them out as posters and put them up all over the place to get wider feedback. There’s no physical place for that while everyone is remote, and everyone’s focus has understandably been on reacting to the pandemic.

But here we are, posting them in this digital space.

Each poster has a different behaviour and asks:

  1. Are there any opportunities for your team to target these behaviours?
  2. Do you have any insights or data to share that support these behaviours?

Here’s a close up of the reverse.

close up of the reverse of the poster

I extend these questions to you! Speak to ‘Digital design team in Food’ on Teams or post your comment here. We’d like your feedback.

James Rice
Lead designer

How the Web team used the ‘top tasks’ approach to prioritise

The Web team wanted to find out why people were coming to coop.co.uk, which is the start page for many Co-op products and services within Co-op Funeralcare, Food, Legal Services and Insurance. Not knowing why people were coming to the homepage made prioritising our work difficult. To help with this, we recently did a piece of research into which tasks people want to complete when they visit the site. 

At this point, I was new to both user research and the Web team so this was a brilliant introduction and overview of my new team’s scope. 

Our colleagues in Insurance and Funeralcare suggested we use the ‘top tasks’ approach by Gerry McGovern which aims to help teams find out where they can add the most value to users based on which tasks are in the highest demand. The idea is to: 

  1. Identify the tasks users want to complete most often – these are the ‘top tasks’.  
  2. Measure how the top tasks are performing by looking at – amongst other things – how long they take to complete in comparison to a baseline time, whether users completed the task and whether they followed the online journey we’d expected.  
  3. Identify where improvements could be made. Make them. 
  4. Repeat. 

How we identified the top tasks  


Through analytics, a ‘what are you looking to do today?’ feedback form on the homepage, plus having a dig around the domain during desk research, we compiled a list of around 170 tasks that are possible to complete on coop.co.uk.  

To make sure the work that followed was free from bias and therefore meaningful, it was important to compile a comprehensive list of every single task. A complete list meant we had a better chance of finding out what the customer wants to do rather than what the organisation wants the customer to do. 

First, we shared the list with the rest of the Web team to sense-check it. Then, because we knew that customers would skim-read the list when we put it in front of them, we asked the Content design community to check we’d written each task in a way that customers would understand quickly, using the language we’ve observed them using.   

After finessing, we shared the list with product teams in Co-op Digital and stakeholders in the wider business to make sure we hadn’t missed any tasks off.  

Collaborating helped us whittle the list of 170 tasks down to 50 – a much more manageable number to present to customers on the homepage. 

And the 6 top tasks are…

We listed the top 50 tasks in an online survey on the homepage and asked users to vote on the top 5 reasons they come to the website.  

At around 3,000 responses we took the survey down. The results showed that the most common reasons people visit coop.co.uk are to: 

  1. Select personalised offers for members. 
  2. Look for food deals available to all customers (£5 freezer fillers, fresh 3). 
  3. Check your Co-op Membership balance. 
  4. Find what deals are available in a store near you. 
  5. Choose a local cause. 
  6. Add points from a receipt onto a Co-op Membership account. 

There were no surprises here then. 

Measuring the performance of the top tasks

In the majority of cases, we found that users succeeded in completing the tasks. This doesn’t come as a surprise because each individual product team knows why their group of users most frequently use their product or service, and they already have product and research priorities in place.  

However, this piece of work did flag up that there could be room for improvement in the following tasks: 

  1. Sign into a membership account. 
  2. Change your local cause. 
  3. Add points from a receipt onto a Co-op Membership account.  

image (1)

The image above shows how long it took on average for users to see their membership balance, choose an offer, choose a local cause and add a receipt. The orange line shows how long we’d expect each task to take. The graph shows that checking a balance is quicker than we’d expected but the remaining 3 tasks take slightly longer.

image (2) 

The image above shows how ‘successful’ users were at seeing their membership balance, choosing an offer, choosing a local cause and adding points to their membership from a receipt. A ‘direct success’ (shown in green, the bottom band of colour) is when the user completes the task in the way we’d expect. An ‘indirect success’ (shown in orange, the top band) is when a user completes a task in a way we didn’t expect. 20% of people failed to choose an offer (shown in red at the top of the second column).

 

image (3)

The image above shows an ‘average overall score’ (where 10 is excellent and 1 is poor). It’s worked out by combining the ‘success score’ (a scale of 1-3 indicating a direct success, indirect success or failure) with a ‘difficulty score’ (a scale of 1-7 indicating how difficult the user found the task to complete). 

The idea came from Measuring and quantifying user experience, a post from UX Collective. 
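
To make the arithmetic concrete, here’s a rough sketch of how such a combined score could be calculated. The post doesn’t spell out the exact sum (adding a 1-3 scale to a 1-7 scale gives a range of 2-10, not 1-10), so the function below – including the assumption that a higher difficulty score means the task felt easier – is illustrative rather than the Web team’s actual calculation.

```python
# Illustrative sketch only - not the Web team's actual calculation.
# Assumes: success score 3 = direct success, 2 = indirect success,
# 1 = failure; difficulty score runs 1 (very difficult) to 7 (very easy).

def overall_score(success: int, difficulty: int) -> int:
    """Combine the 2 scores into one overall figure (higher is better)."""
    if success not in (1, 2, 3):
        raise ValueError("success score must be 1, 2 or 3")
    if not 1 <= difficulty <= 7:
        raise ValueError("difficulty score must be between 1 and 7")
    return success + difficulty  # assumption: a simple sum

# Example: a direct success (3) that felt fairly easy (6) scores 9;
# a failure (1) that also felt very difficult (1) scores 2.
print(overall_score(3, 6))  # 9
print(overall_score(1, 1))  # 2
```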

What we learnt 

The big takeaways were: 

  1. There are a couple of tasks we didn’t think were important, but users did.  
  2. The work also helped us optimise our search. The feedback form on the homepage which asked what customers wanted to do had a significant number of responses looking for our gluten-free (GF) fishcakes. This was a result of Coeliac UK including them in a roundup of GF products. But they weren’t on our site. And when people were searching for them, the search would return GF recipes. The Web team worked with the Optimisation and search team and now the GF products appear before recipes. Since then, there’s been a 70% increase in GF searches, and more pages are being looked at. People coming for GF products are now spending 2 minutes on the site – an increase of 30 seconds. 
  3. However, the top tasks approach may be more useful for teams with transactional services, where measuring a baseline and any improvements would be easier – the Web team itself doesn’t have any transactional services. 

Top tasks approach: how useful?

Overall, the top tasks approach was useful: it gave us data that is helping the Web team prioritise, and it set out my research priorities.  

The priorities list will keep us focussed and it’ll be useful to point to if there’s a request to work on something that we’ve found to have little value. Now we have data to help us push back against requests that don’t have customer and member needs at the centre. 

Now and next 

The Web team has created a task performance indicator for some of the top tasks identified so that as we make improvements to different areas of the website, we have something to measure against. 

If you’ve used the top tasks approach, let us know why and how useful you found it in the comments. 

Kaaleekaa Agravat
User researcher  

12 things we learnt about creating effective surveys

At Co-op Digital we sometimes use surveys to get (mostly) quantitative feedback from users. They’re quick, cheap and they’re a useful research technique to capture data at scale.

But they can also be a waste of time and effort if we do not ask the right questions in a way that will give us meaningful answers.

We’ve compiled a list of things we keep in mind when we’re creating surveys.

Strive for meaningful data

1. Be clear on the purpose of the survey

We consider what we want to be able to do as a result of the survey. For example, when we’ve collated the responses, we want to be able to make a decision about doing (and sometimes *not* doing) something. To create an effective survey, we must know how we’ll act on each of the things we learn from it.

We give our survey a goal and consider what we want to know, and who we need to ask, then draft questions that help us achieve that goal.

2. Make sure each question supports the survey’s purpose

We keep in mind what we want to do with the survey responses, and make sure each question we ask is relevant and presented in a way that will return meaningful data from participants. If we can’t explain how the data we’ll get back will help us, we don’t ask that question.

We know that the more questions we ask, the more time we ask of our users, and the more likely they will be to drop out.

3. Check a survey is the most appropriate format

It can be tempting to cram in as many questions as possible because it’ll mean we get lots of data back. But quantity doesn’t translate to quality. Consider the times you’ve rushed through a long survey and justified giving inaccurate or meaningless answers just to get through it quickly. When we find ourselves wanting to ask lots of questions – especially ones with free text boxes – a survey isn’t the most appropriate format to gather feedback. An interview might be.

Consider what we’re asking and how we’re asking for it

4. Use free text boxes carefully

Free text boxes which allow people to type a response in their own words can be invaluable in helping us learn about the language that users naturally use around our subject matter.

But they can also be intimidating. The lack of structure means people can get worried about their grammar and how to compose what they’re saying, especially if they have low literacy or certain cognitive conditions. For many people they can be time-consuming, and so can make drop out more likely.

If we use free text boxes, we make them optional where possible. It can also increase completion rates if they’re positioned at the end of the survey – participants may be more invested and so more likely to complete them if they know they’re near the end.

5. One question at a time

Be considerate when you ask questions. To reduce the cognitive load on participants, reduce distraction and ask one question at a time. Ask it once. Clearly. And collect the answer.

Questions like ‘How do you use X, what’s good about it, what’s bad?’ are overwhelming. Giving a single box to collect all 3 answers often ends up collecting incomplete ones – a participant will quite often answer only one of those 3 questions.

6. Ask questions that relate to current behaviour

People are not a good judge of their future actions so we don’t ask how they will behave in future. It’s easy for a participant to have good intentions about how they will, or would, behave or react to something but their answer may be an unintentionally inaccurate representation. Instead, we ask about how people have behaved, because it gives us more accurate, useful and actionable insights. “The best predictor of future behaviour is past behaviour,” as the saying goes.

7. If we ask for personal information, we explain why

Participants are often asked for their gender, sex, age or location in surveys but often nothing will be done with that data. If there’s no reason to ask for it, we don’t.

When there is a valid reason to ask for personal information, we explain why we’re asking.

For example, in the Co-op Health app we ask for an email address so that we can send essential service updates. Without explaining why we were asking for it, many people were reluctant to give their email because they thought they were going to get spam. By explaining the reason we were asking, and how the information will be used, the user was able to decide whether they wanted to proceed.

Explaining why we’re asking for personal information is essential in creating transparent and open dialogue. It gives users the context they need to make informed decisions.

8. Avoid bias. Show the full picture

Give participants context. For example, an online survey might reveal a snippet of text for a limited amount of time in order to find out how well participants retained the information. If the survey results say that 90% of people retained the information, that’s great, but it doesn’t necessarily mean that was conclusively the best way to present the information – it’s only one of the possible ways of presenting the text. In these cases it’s better to do a multivariate test and use multiple examples to really validate our choices.

Be inclusive, be considerate

9. Avoid time estimates

Many surveys give an indication of how long the survey will take to complete. Setting expectations seems helpful but it’s often not for those with poor vision, dyslexia or English as a second language. It also rarely takes into account people who are stressed, distracted or emotionally affected by the subject matter. Instead, we tend to be more objective when setting expectations and say how many questions there are.

10. Don’t tell participants how to feel

A team working on a service will often unthinkingly describe their service as being ‘quick’, ‘easy’, ‘convenient’ or similar. However, these terms are subjective and may not be how our users experience the service. We should be aware of our bias when we draft survey questions. So not, ‘how easy was it to use this service?’, which suggests that the service was easy to begin with, but ‘tell us about your experience using this service’.

11. Consider what people might be going through

Often, seemingly straightforward questions can have emotional triggers.

Asking questions about family members, relationships or personal circumstances can be difficult if the user is in a complex or non-traditional situation. If someone is recently separated, bereaved or going through hardship, these questions could also be distressing.

If we have to ask for personal information, we consider circumstances and struggles that could make answering this difficult for people. We try to include the context that these people need to answer the question as easily as possible.

12. Give participants a choice about following up

Sometimes survey answers will be particularly interesting and we may not get all the information we want. At the end of the survey, we ask participants if they’d be happy to talk to us in the future.

We also give people a choice about how we follow up with them. Some people may be uncomfortable using a phone, some may struggle to meet you face to face, some may not be confident using certain technologies. Ask the user how they want us to contact them – it’s respectful, inclusive and is more likely to encourage a positive response.

When choosing who to follow up with, avoid participants who were either extremely positive or negative – they can skew your data.

Time is precious – keep that in mind

At the end of the day, when people fill out a survey, they feel something about your brand, organisation or cause. They may like you or they may just want their complaint heard. Sometimes, they’re filling out your survey because they’re being compensated. Whatever the reason, view it as them doing you a favour and be respectful of their circumstance and time.

If you’ve got any tips to share, leave a comment.

Joanne Schofield, Lead content designer
Tom Walker, Lead user researcher

How contextual research helped us redesign the replenishing process in our Food stores

Every day, in every Co-op Food store, a colleague does a ‘gap scan’. They walk around the store, they spot gaps on the shelves, and they scan the shelf label with a hand-held terminal. This generates a ‘gap report’ which tells the colleague which products need replenishing. It also flags other tasks, such as which items need taking off the shelves because they should no longer be sold.

This is an essential stock management process in our stores. It ensures:

  • stock we’re low on is ordered automatically
  • customers can get the products they need
  • our stock data is accurate

However, the process is complicated. There’s an 18-page user manual explaining how to do it and on average, gap reports are 25 pages long. 

Making the essential less arduous

In the Operational Innovation Store team, we aim to simplify laborious processes in stores. Product owner and former store manager Ross Milner began thinking about how we might tackle ‘gap’, as store colleagues call it. 

He started by asking some questions:

  • How might we design a process so intuitive our store colleagues don’t need a manual? 
  • How might we help colleagues complete all the priority actions from the report immediately? 
  • How might we save 25 pieces of paper per store, per day – in other words, 22 million sheets per year? 

Learning from users

I’m a user researcher and this is the point where I joined the project. My first research objective was to discover how store colleagues go about the process at the moment, and what they find good and bad about it. To do this, I visited 5 stores. I interviewed the managers about their process – as it’s a task which usually falls to them due to its current complexity – but most importantly, I observed how they use the gap reports.

Adapting what they had to meet their needs

Being there in person in the back offices in stores gave me a far deeper insight than I would have got had I done phone interviews, or even just spoken to colleagues on the shop floor. 

Being there gave me access to reams of old gap reports stashed in the back office. It was invaluable to see how colleagues had adapted them to better meet their needs. Some of the things I saw included:

  • dividing the stack of pages into easily-managed sections
  • highlighting the information that requires action
  • ignoring all the non-actionable information on the report – some users didn’t even know what the information meant
  • changing printer settings to save paper
  • ticking off products as they complete the actions against them 

Photograph of one page of a gap report. Several numbers are highlighted. Not particularly easy to understand.

Seeing the physical artefact in its context revealed a lot of needs we might have otherwise missed, because colleagues are doing these things subconsciously and most likely wouldn’t have thought to mention them to us.

Learning from prototypes

Our contextual research has helped us identify several unmet needs. Delivery manager Lee Connolly built a basic prototype in Sketch and we mocked up a digitised gap reporting process. The design clearly separated and prioritised anything that needed store colleagues to take action. We arranged those tasks in a list so they could be ‘ticked off’ in the moment, on the shop floor.

Screenshot of an early prototype used for scanning labels on shelves

This was intended as a talking point in user interviews and the feedback was positive. The store managers were fascinated, asking when they’d be able to use it, and – unprompted – listing all the benefits we were hoping to achieve, and more.

Developing ‘Replen’: an alpha

We’d validated some assumptions and with increased confidence in the idea, we expanded our team to include a designer and developer so we could build an alpha version of the app. We call this app ‘Replen’ because its aim is to help colleagues replenish products when needed.

Interaction designer Charles Burdett began rapid prototyping and usability testing to fail fast, learn quickly and improve confidence in the interface. It was important to do this in the store alongside colleagues, on the devices they normally use. We wanted to make it feel as realistic as possible so users could imagine how it would work as a whole process and we could elicit a natural response from them. 

photograph of possible interface on a phone in front of co-op food store shelves

Profiling stores so we know where we’re starting from

Before we could give them the app, we needed to understand each trial store’s current situation, so that we’ll be able to understand how much of a difference Replen has made. We visited all the stores we’re including in our trial. Again, being physically there, in context, was vital. 

The following things have an effect on the current gap process and may also affect how useful Replen is for colleagues. We noted:

  • the store layout and the size of their warehouse
  • whether the store tends to print double-sided
  • where managers had created their own posters and guides to help colleagues follow the gap process
  • any workarounds the stores are doing to save time and effort


What’s next for Replen?

We’ve just launched the Replen alpha in our 12 trial stores.

The aim of an alpha is to learn. We’re excited to see whether it meets user needs, and validate some of the benefits we’ve been talking about. We’re also keen to see whether stores continue using any workarounds, and whether cognitive load is reduced.

We will, of course, be learning this by visiting the stores in person, observing our product being used in real life, and speaking to our users face to face. When redesigning a process, user research in context is everything. 

Rachel Hand
User researcher

Field research: designing pre-paid plans with Funeralcare

This week, the design team held a show and tell to discuss 2 questions:

  1. What is design?
  2. Why should you care?

If you couldn’t make it, we’re writing up some of the examples from different areas of design that we talked about and posting them on the blog this week. They’re aimed at Co-op colleagues whose expertise is in something other than a digital discipline.

Today we’re looking at how we used field research when we were designing a digital form with Funeralcare colleagues who arrange pre-paid funeral plans in our branches. (You can also make a pre-paid funeral plan online).

Buying a pre-paid funeral plan: how the paper forms process works

Here’s how the process tends to work:

  • a client rings a local branch to make an appointment
  • the client goes into a branch
  • a colleague and the client fill out a lengthy paper form together
  • the client pays at least a deposit to their local branch
  • 3 copies of the paper form are made – one for the client, one is kept in branch and the other is sent by post to head office which often takes 7 days
  • a colleague at head office manually copies the information from the paper form into a customer relationship management system
  • the form is dug out on the request of the client’s family after their loved one has died

The process is expensive, time-consuming and as with all human processes, there is room for error.

What we wanted to achieve

We wanted to create a more efficient, easy-to-use service. We wanted to connect the computer systems that are already being used in Co-op Funeralcare branches and integrate them directly with the customer relationship management system colleagues use in head office.

Where to start?

What we knew was limited. We had an idea what the start of the process was for clients and colleagues because we knew what the paper form looked like. We also had sales data from the very end of the process. But in order to improve efficiency and ease of use, we needed to know a lot more about how things are working in between these 2 points.

For both colleagues and clients we wanted to get a clearer picture of:

  • what a plan-making appointment was like (both practically and emotionally)
  • the paper form filling process
  • whether there were frustrations with the process and where they were

We arranged some site visits for our ‘field research’.

Learning from field research

We visited Co-op Funeralcare branches.

Green image with white copy that says: The approach. Get out of the office to learn and test

Why? Because when people feel at ease they’re more likely to open up and speak honestly. For this reason we spoke to our funeral arranger colleagues in a context they’re familiar with – in the rooms where they regularly create plans with clients. Talking to them here helped them relax, and because they weren’t in a place where their seniors might overhear, they were less guarded than they might be if we brought them into head office.

Seeing mistakes happen, figuring out why they happen

Talking to them was good but seeing colleagues fill out the paper plans was invaluable because we could observe:

  • the order they approached the questions
  • whether they made mistakes and where
  • if and where they used any common work-arounds where the form didn’t meet their needs

All of this helps us see where we can improve the design.

Feeding observations into the design

When we were talking through the paper forms with arrangers, they told us they often found there wasn’t enough space to capture a client’s personal requests. Because they’d come up with a reasonable work-around, it might not have been something they would have mentioned to us if we hadn’t been there, in their office, looking at the forms together. Being there helped us make sure we didn’t miss this. They showed us examples of when they had worked around a lack of space by attaching an extra sheet to the paper form they were submitting.

In the example below the client has requested to be dressed in ‘Everton blue gown with frill’ and they’ve been very particular about the music before, during and after the service.

Every funeral is different – just like every life it commemorates – and the paper form didn’t accommodate the level of detail needed. The work-around they’d come up with wasn’t hugely painful, but good design makes processes pain free. We fed our observations back to the digital team and designed a form that allows for individuality. It has bigger open text boxes to record more detail, as well as drop-downs and free text boxes for music on the day.
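
To make that concrete, here’s a minimal sketch of how a more flexible ‘personal requests’ section might be modelled. It’s purely illustrative – the interface and field names are assumptions, not the team’s actual data model.

```typescript
// Hypothetical shape of the 'personal requests' section of the digital plan.
interface PersonalRequests {
  // Bigger open text boxes, so detail like 'Everton blue gown with frill' fits
  clothing: string;
  otherRequests: string;
  // One entry per moment in the service: a drop-down for the moment,
  // free text for the piece of music
  music: {
    moment: 'before' | 'during' | 'after';
    piece: string;
  }[];
}
```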

Paper versus digital forms

The benefits of moving across to digital forms include:

  1. Having easier access to more data, for example, numbers on couples buying together and on people buying for someone else. This is useful because we can direct our efforts into improving the experience where most people need it.
  2. Saving time for colleagues who manually copy paper plans into the head office system. Digital plans are sent directly to the system and are instantly visible to colleagues in head office.
  3. Reducing the number of errors in paper plans. Common mistakes include allowing people over 80 to spread their payment over instalments, and the client’s choice of cremation or burial not being recorded. The design of the digital form doesn’t allow arrangers to progress while mistakes like these are present (there’s a sketch of how rules like this might work after this list).
  4. A significant yearly saving on stamps used to send paper forms from a branch to head office.
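
Here’s a minimal sketch of what ‘don’t allow arrangers to progress’ could look like in practice. It’s an illustration under assumptions: the `PlanForm` type, field names and `validatePlan` function are hypothetical, not the actual rules behind the digital form.

```typescript
// Hypothetical model of the fields the validation rules below care about.
interface PlanForm {
  dateOfBirth: Date;
  paymentMethod: 'single' | 'instalments';
  committalType?: 'cremation' | 'burial'; // undefined until recorded
}

// Returns the errors that would stop an arranger progressing.
function validatePlan(form: PlanForm, today: Date = new Date()): string[] {
  const errors: string[] = [];

  // Work out the client's age in whole years.
  let age = today.getFullYear() - form.dateOfBirth.getFullYear();
  const hadBirthdayThisYear =
    today.getMonth() > form.dateOfBirth.getMonth() ||
    (today.getMonth() === form.dateOfBirth.getMonth() &&
      today.getDate() >= form.dateOfBirth.getDate());
  if (!hadBirthdayThisYear) {
    age -= 1;
  }

  // Rule: people over 80 can't spread their payment over instalments.
  if (age > 80 && form.paymentMethod === 'instalments') {
    errors.push('Clients over 80 must pay with a single payment.');
  }

  // Rule: the choice of cremation or burial must be recorded.
  if (form.committalType === undefined) {
    errors.push("Record the client's choice of cremation or burial.");
  }

  return errors;
}
```

An arranger would only be able to move on when `validatePlan` returns an empty list.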

Field research helped get us to this point

We’re now testing the new digital forms in 15 branches. We’ll be rolling them out to more branches over time, but we’re starting small so we can iron out any problems.

So far, the feedback from colleagues is positive. But without observing colleagues in context, we’d be relying on assumptions about the way they work. Field research is part of what makes the pre-paid funeral plan service design-led.

If everyone shares an understanding of the benefits of being design-led, it’ll be easier for experts from around the business to work together to deliver value to Co-op customers, colleagues and the Co-op as a business. If you didn’t make the show and tell but would like to find out more, email Katherine Wastell, Head of Design.

Gillian MacDonald
User researcher

How we turn research into actionable insights

One of the main challenges for us as researchers is making our findings more actionable for the rest of the team, particularly during the discovery phases when we’re conducting exploratory research.

At least initially, early-stage research can bring more ambiguity than clarity and throw up more questions than answers, and we often end up with challenges and problems that are too broad to solve.

As researchers, it’s our responsibility to use research methods that will facilitate good design and product decisions. It’s not enough to just do the research; we need to translate what we’ve learnt for the rest of the team so that it’s useful.

How we did it

We’re working on a commercial service. Our team’s remit was to find out what would make our service different because, in theory, if we can meet unmet customer needs, we can compete in a saturated market. A successful product or service is one that is viable, feasible and desirable.

This post covers 3 techniques we’ve recently tried. Each one helped us reduce ambiguity, achieve a clearer product direction and get a better understanding of our users, their behaviours and motivations.

1. Learning from extremes

When we’re testing for usability or seeing how well a functional journey works, we usually show users a single, high fidelity prototype. However, earlier in the design process, we put very different ideas in front of users to elicit a stronger reaction from them. If we only showed one idea at that point, their reaction would likely be lukewarm. It’s when we elicit joy, hatred or confusion, for example, that we learn a lot more about what people need from a product.

In this instance, we wanted to uncover insight that would help us define what might make a more compelling product.

We identified the following problem areas:

  1. Time – people don’t have much of it.
  2. Choice – there is so much.
  3. Inspiration – people struggle with it.

Instead of prototyping something that would attempt to improve all 3 of these problem areas as we would do when testing usability, we mocked up 3 very different prototypes – each one addressed just one of the problems.

The extreme prototypes helped users better articulate what meets their needs and what might work in different contexts. It wasn’t a case of figuring out which version was ‘best’. We used this technique to test each idea so we could find out which elements worked, and include those in the next iteration. It also started to inform the features that would make up the experience.

Overall, though, it helped us reach a clear product direction, which gave us a steer for our next stage of research.

2. Doing a diary study

A diary study is a great way to understand motivations and uncover patterns of behaviour over a period of time. We recently invited a group of urban shoppers to keep a diary of how they were deciding what to eat at home.

We asked them to use WhatsApp, partly because it was something they already used regularly, but also because its quick, instant messages reflect how quickly people decide what to eat. The decision is not like choosing which house to buy, where you might weigh up and record options carefully in spreadsheets, so it would have been difficult for people to reflect on their ‘what to eat’ decisions retrospectively in interviews. WhatsApp was a way to get closer to the moment choices are made so we could better understand the context, the behaviour and the decision itself.

The engagement was much higher than we expected. We captured lots of rich data including diary entries in text, video and photo format. We didn’t ask for or expect the visuals but they were very useful in bringing the contexts to life for our stakeholders.

When we looked for patterns in the data, we found that nobody behaved in the same way every day, or over time. However, we were able to identify distinct ways people make choices, which we called ‘decision making modes’. For each mode, we looked at the context in which people made decisions and the behaviour we observed. Each mode highlighted different pain points – for example, having leftovers to use up. This enabled us to prioritise certain modes over others, get alignment as a team on who we’re solving problems for, and think about features to help address some of the pain points for users.

3. Using sacrificial concepts

‘Sacrificial concepts’, a method developed by design company IDEO, allow us to gain insight into users’ beliefs and behaviour. We start by reframing our research insights as ‘How might we…?’ questions that help us find opportunities for the next stage of the design process.

For example, we found that buying groceries online feels like a big effort and a chore for shoppers because of the number of decisions involved. So we asked: “How might we reduce the number of decisions that people need to make when they shop online?”

We did this as a team, then created low fidelity sketches, or ‘concepts’, that we were willing to sacrifice and could put in front of users.

Just like when we tested extremes, the purpose of testing these ideas wasn’t to find a ‘winning version’ – it was to provoke conversation and make the interview less rigid.

Sacrificial concepts are a fast and cheap way to test ideas. No-one is too invested in them and they allow us to get users’ reaction to the gist of the idea as opposed to the interface. They give us a clearer direction on how to address a problem that users are facing and they are a good way to make research findings more usable in the design process.

What’s worked for you?

Those are the 3 main ways we’ve approached research in the early phase of one particular commercial Co-op service. We’d like to hear how other research and digital teams do it, and about their experiences with the techniques we’ve talked about.

Eva Petrova
Principal user researcher