How a voice user interface could help our Funeralcare colleagues

Sometimes in organisations – and especially in digital teams – we start a piece of work but, for various reasons, we don’t roll it out. The work we’re talking about in this post is an example of this: although it looked very much like it had the potential to meet our colleagues’ needs, we’re taking a break from it. The work helped us learn what a complex area we were dealing with and how important it would be to get this absolutely right.

We may revisit the work in the future. For now, we’re sharing the valuable insights we got from it. 

Co-op Guardian uses Amazon Web Services (AWS) and in August 2019, as part of Amazon’s consultancy package, we decided to explore voice interfaces. We wanted to find out if Amazon Alexa – their virtual assistant AI (artificial intelligence) – could help us solve a problem on one of our projects. We worked together to see how we could use AI to help our Funeralcare colleagues who embalm the deceased.

This post is about what we did and what we learnt, as well as the problems a voice user interface might fix, and the problems over-reliance on – or careless use of – one might create.

About the embalming process

Some of our Co-op Funeralcare colleagues ‘embalm’ the deceased. Embalming is the process of preparing the deceased by using chemicals to prevent decomposition as well as making sure they look suitable for the funeral or a visit at the funeral home. Many friends and family members feel that seeing their loved one looking restful and dignified brings them peace and helps with the grieving process.

What’s not so great right now

At the moment, our embalmers have tablets with their notes and instructions about how to present the deceased. They refer to them throughout the process. But colleagues tell us there are problems with this, for example:

  1. Tablet screens are small and not easy to see from a distance.
  2. Although they’re portable, positioning tablets conveniently close to the embalming table is tricky, and the charging points aren’t always close by.
  3. Wifi can be spotty because embalming suites sometimes have thick walls and ceilings, plus extra insulation to help with careful temperature control.

Perhaps the biggest problem however comes when colleagues need to double check instructions or details and the tablet has timed out. They need to remove their gloves, sign back into the tablet, find the information and replace their gloves. Recent government guidance, plus an internal review, suggests hands-free devices are a good way to avoid unnecessary contact.

Could Alexa help? We had a hunch that she could. Here’s what we did.

Captured possible conversations and created a script

As a starting point, we used what we’d already seen happen in embalming suites during our work on Guardian. We thought about what an embalmer’s thought process might be – what questions they may need to ask and in which order. Based on that, we drafted a script for the sorts of information Alexa might need to be able to give.

[Photograph: post-its on a wall depicting what Alexa and the embalmer might say]

But language is complex. There are many nuances. And an understanding of users’ natural language is important if Alexa is to win users’ confidence, accurately identify (‘listen to’) their questions, and respond.
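To make this concrete, here’s a minimal, entirely hypothetical sketch – not the team’s actual script – of how several natural phrasings might map onto one intent, so the assistant responds consistently however a question is worded:

```python
# Hypothetical sketch: map several natural phrasings onto one intent so the
# assistant answers consistently however the embalmer words the question.
SAMPLE_UTTERANCES = {
    "start an embalming": "start_embalming",
    "begin a new case": "start_embalming",
    "review a case": "review_case",
    "what are the special instructions": "special_instructions",
    "should the deceased be clean-shaven": "special_instructions",
}

RESPONSES = {
    "start_embalming": "OK. Which case would you like to start?",
    "review_case": "OK. Tell me the case number.",
    "special_instructions": "Here are the special instructions for this case.",
}

def respond(utterance: str) -> str:
    """Look up a scripted response; fall back to a gentle reprompt."""
    intent = SAMPLE_UTTERANCES.get(utterance.strip().lower().rstrip("?"))
    return RESPONSES.get(
        intent,
        "Sorry, I didn't catch that. You can 'Start an embalming', "
        "'Review a case', or 'Ask me what I can do'.",
    )

print(respond("Begin a new case"))  # OK. Which case would you like to start?
```

Even this toy version shows the problem: every phrasing we fail to anticipate falls through to the reprompt, which is exactly why we wanted to learn users’ real language first.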

Turning written words into spoken ones

We pre-loaded questions and responses we knew were integral to an embalming onto an HTML soundboard using Amazon Polly, which can recreate an Alexa-like voice. At this early stage of testing it was better to use the soundboard than to spend time and energy programming Alexa.
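As an illustration, clips like these could be generated with boto3, AWS’s Python SDK, which covers Polly. This is a sketch under our own assumptions – the phrases, file names and choice of the British English ‘Amy’ voice are ours, not details from the project:

```python
import boto3

# Hypothetical sketch: pre-render scripted responses as MP3 clips with
# Amazon Polly, ready to wire up to buttons on an HTML soundboard.
polly = boto3.client("polly", region_name="eu-west-1")

CLIPS = {
    "welcome.mp3": "Hello. I can help with today's cases.",
    "case_number.mp3": "OK. Tell me the case number.",
    "special_instructions.mp3": "Here are the special instructions for this case.",
}

for filename, text in CLIPS.items():
    # Polly pauses at full stops and commas, which is what the deliberately
    # choppy punctuation described below takes advantage of.
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Amy",
    )
    with open(filename, "wb") as f:
        f.write(response["AudioStream"].read())
```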

[Photograph: laptop, Alexa and an embalmer]

We:

  1. Wrote the content peppered with over-enthusiastic punctuation, which we knew would prompt Polly to emphasise and give space to important information. For example: “We’re ready to go. From here, you can. ‘Start an embalming’. ‘Review a case’. Or. ‘Ask me what I can do’.”
  2. Connected our laptop to an Echo speaker using Bluetooth.
  3. Turned the mic off on the Alexa. Told participants that she was in dev mode and asked them to speak as they normally would.
  4. Responded to what they said to Alexa by playing a relevant clip from Polly.

This was a great way of learning because it allowed us to go off script and meant we didn’t have to anticipate every interaction.
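For flavour, here’s what that Wizard-of-Oz setup might look like in code terms – a hypothetical, keyboard-driven stand-in for our HTML soundboard (pygame is our substitution here), playing the clips from the earlier sketch through the laptop’s audio output and, via Bluetooth, the Echo:

```python
import pygame

# Hypothetical Wizard-of-Oz soundboard: a researcher presses a key and the
# matching pre-rendered Polly clip plays through the Bluetooth-paired Echo.
# (The real prototype used an HTML soundboard; pygame is a stand-in.)
KEYMAP = {
    "1": "welcome.mp3",
    "2": "case_number.mp3",
    "3": "special_instructions.mp3",
}

pygame.mixer.init()

while True:
    key = input("Clip to play (1-3, q to quit): ").strip()
    if key == "q":
        break
    clip = KEYMAP.get(key)
    if clip:
        pygame.mixer.music.load(clip)
        pygame.mixer.music.play()  # plays while we wait for the next cue
```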

Over time we’d learn what people actually say rather than second-guessing what they would say. We could then feed that wealth of language into Alexa, allowing for nuance.

Research run-through

One of the reasons for doing this piece of work was to see if we could give time back to embalmers. With this in mind, we did a dummy run with ‘Brenda’, shown in the photograph below. It helped us pre-empt and iron out problems with the prototype before putting it in front of real embalmers. Fixing the obvious problems meant we could get into the nitty-gritty details in the real thing.

[Photograph: ‘Brenda’, an outline of a person drawn onto a huge sheet of paper, placed on the table for the research dummy run]

During research, we were manually pushing buttons on the soundboard in response to the participants’ conversation (although the embalmers thought the responses were coming from Alexa).

High-level takeaways from the research

Four weeks after we began work, we took our prototype to Co-op Funeralcare Warrington and spent half a day with an embalmer. We found:

  1. The embalmer didn’t have to take her gloves off during the test (cuppa aside ☕).
  2. For the 2 relatively short, straightforward cases we observed with the same embalmer, the voice user interface was both usable and useful. That said, the process can take anywhere from 30 minutes to 3 hours and more complicated or lengthy cases may throw up problems.
  3. The embalmer expected the voice assistant to be able to interpret more than it currently can. For example, she asked: “Should the deceased be clean-shaven?” But the information she needed was more complex than “yes” or “no”, and the instructions had been entered into a free text box. Research across most of our projects suggests that if someone can’t get the information they want, they’ll assume the product isn’t fit to give any information at all.

The feedback was positive – yes, early indications showed we were meeting a need.

What we can learn from looking at users’ language

When someone dies, family members tell funeral arrangers how they’d like the deceased to be presented and dressed for the funeral and any visits beforehand. Colleagues fill in ‘special instructions’ – a free text box – in their internal Guardian service.

We looked at the instructions entered in this box across the Guardian service. Our analysis of them drew out 3 interesting areas to consider if we take the piece of work forward.

  1. User-centred language – Rather than collecting data in a structured ‘choose one of the following options’ kind of way, the free text box helps us get a better understanding of the language embalmers naturally use. Although we don’t write the way we speak, we can pick up commonly-used vocabulary. This would help if we wrote dialogue for Alexa.
  2. Common requests – After clothing requests, the data shows that instructions about shaving are the most frequent. Stubble can appear to lengthen after death (the skin retracts rather than the hair growing), so embalmers will shave the deceased by default. However, if the deceased had a moustache, embalmers need to know that so they tidy it rather than shave it off. It could be hugely upsetting for the family if the deceased was presented in an unrecognisable way. With this in mind, it would be essential that the AI could accurately pick out these details and make the embalmer aware.
  3. Typical word count – Whilst the majority of instructions were short (mostly between 1 and 5 words), a significant number were between 35 and 200 words, which could become painful to listen to. There would be work to do around how to collect detailed instructions accurately, in a way that makes playing them back concise.
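To give a feel for that analysis, here’s a hypothetical sketch of the kind of script that could surface the last two patterns – the example instructions and the term list are invented, not Guardian data:

```python
from collections import Counter

# Hypothetical sketch: profile free-text 'special instructions' by length
# and flag shaving-related requests a voice assistant must not miss.
instructions = [
    "Own clothes provided",
    "Please leave his moustache, trim only",
    "Clean-shaven please",
]

SHAVING_TERMS = ("shave", "shaven", "moustache", "beard", "stubble")

length_buckets = Counter()
for text in instructions:
    words = len(text.split())
    if words <= 5:
        length_buckets["short (1-5 words)"] += 1
    elif words < 35:
        length_buckets["medium (6-34 words)"] += 1
    else:
        length_buckets["long (35+ words)"] += 1  # painful to hear read aloud

    # Naive keyword spotting: enough to flag cases for a human to review,
    # nowhere near enough to answer "should the deceased be clean-shaven?"
    if any(term in text.lower() for term in SHAVING_TERMS):
        print("Flag for embalmer:", text)

print(length_buckets)
```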

AI can make interactions more convenient

Everything we found at this early stage suggests that designing a voice user interface could make things more convenient for colleagues and further prevent unnecessary contact.

However, because it’s early days, there are lots of unknowns. What happens if multiple people are in the embalming suite and it’s pretty noisy? How do we make sure our designs cater for differing laws in Scotland? When we know the ideal conditions for voice recognition are rarely the same as real life, how do we ensure it works under critical and sensitive conditions?

And those questions are just for starters.

With a process as serious and sensitive as embalming, there’s no such thing as a ‘small’ mistake, because any inaccuracy could be devastatingly upsetting to someone already going through a difficult time. Sure, Alexa is clever and there’s so much potential here, but there’s a lot more we’d need to know, work on and fix before we could make the artificial intelligence part of the product more intelligent.

Tom Walker, lead user researcher
Jamie Kane, user researcher

Illustrations by Maisie Platts