Voice assistant technologies like Google Assistant, Amazon Alexa and Apple Siri are beginning to have a major impact on people’s daily lives, with over 20% of UK households now owning a smart speaker.
So far, the focus has been on voice tech’s ability to hail cabs, order pizza, select entertainment or provide weather updates. But it’s increasingly apparent that voice can play a critical role in helping people manage chronic health conditions from home.
Early studies also point to improved clinical outcomes for patients who use voice-activated tech.
Arguably, this capability is even more important now that Covid-19 is stretching healthcare organisations and frontline workers beyond their limits.
In the UK, on top of mounting concerns about the lack of tests and PPE, 25% of calls to NHS 111 are reportedly being abandoned as worried consumers fail to get through.
Once the immediate crisis is over, investment in the NHS will be right at the top of the electorate's concerns. Tech, especially data and AI, will have a key role to play in ensuring healthcare provision is fit for purpose in the event of any future outbreaks.
One post-Covid-19 development I predict is that lockdown and social distancing will accelerate the move towards remote or ‘self-service’ healthcare delivery.
As a very natural interface, voice tech will be a core component of this self-service model. Below, I’ve identified three key ways that voice assistance can help bridge the gap between patient and healthcare provider, helping to improve personal health outcomes and reducing financial strain on healthcare systems.
As well as exploring the role of voice in a post-Covid-19 world, each section looks at user experience design – exploring the challenges of integrating this transformational tech in ways that help, inform and, most importantly, engage patients.
Voice assistant tools allow healthcare professionals to remotely monitor and assess the way a patient is coping with their chronic condition, identifying patterns that may otherwise be overlooked.
San Francisco startup Sense.ly’s virtual assistant Molly is just one example of this fast-moving trend that uses voice tech and AI to help patients and their health practitioners manage illnesses such as diabetes, heart conditions and mental disorders.
As we move towards a more distributed, self-service style of healthcare provision, voice assistance has the potential to be of enormous value in relieving pressure on health services especially when it comes to providing continuity of care for patients with chronic conditions.
For example, regular remote assessment surveys can detect emerging health conditions before symptoms fully develop.
Vocal biomarkers, like those being tracked by Sonde Health, could be integrated with smart speakers to monitor tone, rhythm, pitch, volume and repetition of words, as well as changes in pronunciation or difficulties in speaking that might be new.
This could help identify symptoms and speed up diagnosis of conditions such as Parkinson’s, depression and heart disease.
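To make the idea concrete, here is a minimal sketch of flagging changes in simple speech markers against a patient’s own baseline. Real vocal-biomarker systems such as Sonde Health’s analyse the audio signal itself; the markers, function names and tolerance below are illustrative assumptions only.

```python
# Hypothetical sketch: coarse speech markers derived from a session
# transcript, compared against the patient's own baseline.
from collections import Counter

def speech_markers(transcript: str, duration_seconds: float) -> dict:
    """Derive coarse markers: speaking rate and word repetition."""
    words = transcript.lower().split()
    counts = Counter(words)
    repeated = sum(c for c in counts.values() if c > 1)
    return {
        "words_per_minute": len(words) / (duration_seconds / 60),
        "repetition_ratio": repeated / len(words) if words else 0.0,
    }

def flag_change(baseline: dict, latest: dict, tolerance: float = 0.25) -> list:
    """Flag markers that drift more than `tolerance` from baseline."""
    flags = []
    for key, base in baseline.items():
        if base and abs(latest[key] - base) / base > tolerance:
            flags.append(key)
    return flags
```

A marked slowdown in speaking rate or a jump in repetition across sessions would be surfaced to a clinician for review rather than acted on automatically.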
With forethought, voice-powered health assessments could be designed to have a quantitative dimension. Patients could be asked to rate their level of pain daily out of 10, or they could collate key readings such as blood pressure or blood glucose levels, allowing healthcare practitioners to focus their time on the highest-value activities: diagnosis and treatment.
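A daily check-in like this could be represented as a simple structured record with screening thresholds. The field names and thresholds below are illustrative assumptions, not a clinical protocol.

```python
# Hypothetical sketch of a daily voice check-in record with
# simple screening rules to prioritise clinician attention.
from dataclasses import dataclass

@dataclass
class DailyCheckIn:
    patient_id: str
    pain_score: int       # patient's self-rating out of 10
    systolic_bp: int      # mmHg, read aloud from a home monitor
    blood_glucose: float  # mmol/L

def needs_review(entry: DailyCheckIn) -> list:
    """Return the readings a clinician should look at first, so their
    time goes to diagnosis and treatment rather than data entry."""
    concerns = []
    if entry.pain_score >= 7:
        concerns.append("high pain score")
    if entry.systolic_bp >= 140:
        concerns.append("elevated blood pressure")
    if not 4.0 <= entry.blood_glucose <= 10.0:
        concerns.append("blood glucose out of range")
    return concerns
```

Entries with no concerns could be logged silently, while flagged entries join a clinician’s review queue.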
Going forward, technologies such as machine learning could enhance this capability, but it must always be done with patient experience front of mind.
Outside monitoring and assessment, voice has the potential to alleviate systemic strain by reminding people with chronic conditions to take their medication.
Clearly this is particularly valuable for patients who are more prone to forget – but anyone who is busy or distracted can lose track of when to take tablets, especially at a time when normal routines are disrupted by lockdown. Voice can also prompt people to reorder medication.
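The reminder and reorder behaviour described above can be sketched very simply. The dose times, reorder threshold and prompt wording here are illustrative assumptions.

```python
# Hypothetical sketch of medication reminder and reorder prompts.
from datetime import time
from typing import Optional, Set

DOSE_TIMES = [time(8, 0), time(20, 0)]  # morning and evening doses
REORDER_THRESHOLD = 5                   # tablets left

def due_prompt(now: time, taken_today: Set[time]) -> Optional[str]:
    """Return a gentle spoken prompt if a dose is due and untaken."""
    for dose in DOSE_TIMES:
        if now >= dose and dose not in taken_today:
            return f"Just a reminder: your {dose.strftime('%H:%M')} dose is due."
    return None

def reorder_prompt(tablets_left: int) -> Optional[str]:
    """Prompt a reorder once supply runs low."""
    if tablets_left <= REORDER_THRESHOLD:
        return "You're running low. Shall I reorder your prescription?"
    return None
```

Keeping prompts this light touch matters: the design sections below return to the risk of an overbearing assistant causing patients to disengage.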
Design challenges to consider:
Voice may be a natural interface, but it involves key design considerations. For a start, patients are used to going to see a human healthcare practitioner, telling them the problem and being told what to do.
But voice shifts healthcare towards a self-service model. This creates an extra responsibility to ensure only good, trusted data gets through as well as thinking about how it is presented.
The content strategy for a tech-savvy millennial is not the same as that for an elderly patient who is used to a more human, personal interaction.
When it comes to vulnerable groups such as elderly or infirm patients who are home alone, ‘self-service’ also needs to consider spatial design, including how a patient moves around their home and the available connectivity to wifi and to other medical alarms.
For example, if a voice user interface is being used to help prevent elderly patients from falling and to summon help if they do, spatial design needs to factor in the need for constant proximity and connectivity between the patient’s wrist or neck alarm and the voice device.
More broadly, a key design requirement for voice is to ensure the voice assistant isn’t too overbearing or too interventionist. Prompts need to be light touch to avoid the risk that patients will disengage entirely.
Similarly, design needs to avoid assumptions that the patient is able to communicate as articulately as the healthcare provider.
One way of preventing voice from seeming like the enemy is by designing an ecosystem of support that also involves family members so that, for example, other people besides the patient can interact with the voice assistant.
Most people trust their healthcare provider to select the right treatment.
But time-pressured appointment slots can leave patients with unanswered questions about side effects - including interactions with other drugs, whether medication should be taken after food, whether it’s safe to drive and so on. Voice assistance can be on hand to instantly tackle such questions.
Taking this a stage further, voice offers an accessible way to overcome barriers to inclusion.
For patients with sight loss or other visual impairments, a voice assistant makes it much easier to obtain and follow instructions on how to take a medicine.
Meanwhile for patients experiencing Alzheimer’s, voice assistants have a key role to play in setting alarms for tasks such as doctor’s appointments or reminding a patient where in the house their medication is stored.
With the necessary compliance requirements factored in, voice can also give pharma companies an authentic reason to communicate directly with patients.
Instead of a patient phoning the doctor’s surgery to ask whether it’s safe to compensate for a forgotten dose by double dosing, for example, this kind of question could easily be fielded by a pharma company’s voice-activated chatbot.
Design challenges to consider:
Voice interfaces will only be effective if they are designed with sensitivity.
Typically, designers focus on the need to map out decision trees and conversational workflows when thinking about voice and how it can be used with patients.
But more work needs to go into considering the human angle - how does the voice personality present itself? Is the patient a member of the Silent Generation, who expects healthcare providers (and by extension voice) to be authority figures? Or are they millennials, who are more likely to question everything?
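One practical way to handle this human angle is to phrase the same decision-tree node differently for different audiences. The persona labels and wording below are illustrative assumptions, not research-backed categories.

```python
# Hypothetical sketch: the same conversational node rendered in
# different registers, with a safe fallback to the formal variant.
PROMPTS = {
    "check_medication": {
        "formal": "Please confirm whether you have taken today's medication.",
        "conversational": "Quick check - have you taken your meds today?",
    },
}

def render_prompt(node: str, persona: str) -> str:
    """Pick the phrasing that fits the patient's expectations,
    falling back to the formal register when the persona is unknown."""
    variants = PROMPTS[node]
    return variants.get(persona, variants["formal"])
```

The decision tree itself stays identical; only the surface wording adapts to the patient.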
If the patient has only just been diagnosed with a difficult life changing illness, it’s possible they will not respond as well to voice as someone who has a more routine relationship with their condition. In this scenario, patients with terminal conditions will need clear shortcuts to connect with human support.
Voice is already starting to play a role in alleviating the pressure on healthcare ecosystems. The NHS in the UK, for example, is working with Amazon’s Alexa to free up frontline resources.
Even once the current Covid-19 crisis starts to subside, public demand for healthcare services - and the pressure on health organisations - is likely to remain high.
Voice tech is well placed to take some of the strain by automating and streamlining time consuming processes. Using simple conversations and guided decisions, voice tech can enable surgeries and hospitals to interact with large numbers of patients without gathering them in a waiting room or requiring staff to manage advice lines.
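A guided-decision triage flow of the kind described here is essentially a small decision tree walked by yes/no answers. The questions and outcomes below are illustrative assumptions, not clinical advice.

```python
# Hypothetical sketch of a voice triage flow as a decision tree.
TRIAGE = {
    "start": ("Are you experiencing severe chest pain or difficulty breathing?",
              {"yes": "emergency", "no": "fever"}),
    "fever": ("Have you had a high temperature for more than three days?",
              {"yes": "call_gp", "no": "self_care"}),
}
OUTCOMES = {
    "emergency": "Please call 999 now.",
    "call_gp": "Please book a call with your GP practice.",
    "self_care": "Rest, stay hydrated, and check in again tomorrow.",
}

def run_triage(answers: list) -> str:
    """Walk the tree with the patient's yes/no answers and
    return the spoken outcome."""
    node = "start"
    for answer in answers:
        _question, branches = TRIAGE[node]
        node = branches[answer]
        if node in OUTCOMES:
            return OUTCOMES[node]
    raise ValueError("conversation incomplete")
```

Because the whole flow is data rather than code, clinicians can review and update the questions without touching the engine, and every path can be audited.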
As a result, we can expect solutions such as voice-enabled triage systems to accelerate in the near future.
Voice also has a key role to play in automating administrative roles - for example, reducing the amount of time healthcare professionals spend on taking notes. Looking ahead, voice-enabled applications such as the automated record-keeping app Kiroku - currently aimed at dentists - are likely to be rolled out to other areas of healthcare.
Design challenges to factor in:
The real key to maximising voice tech’s potential will be to ensure data is being captured and turned into insights in a usable way. The more actionable data that voice can accumulate, the less time healthcare professionals need to spend collating their own notes.
For pharma firms considering voice applications, it’s important to integrate their offering into the overall architecture of health support and advice. To drive take-up, they need to enhance and empower healthcare professionals – not replace them.
More broadly, whether a voice tech application is delivered by the NHS, a pharma company or health tech business, it’s important to remember that it’s also another channel for reaching consumers.
As such, it’s critical to ensure experience design is front and centre of voice applications, so that it’s an engaging experience for patients which reflects well on the overall brand.
Introducing voice to healthcare presents challenges including data privacy, competition between proprietary technologies, consumer reluctance to change legacy behaviour, pharma regulation and how voice links to other digital and real-world channels.
Equally, it may be tempting to use an exciting new tech like voice when another channel is more appropriate.
There are, for example, certain sensitive conversations that people wouldn't want to have out loud in public – making typed, chat-based communication more suitable than voice in that context.
It’s equally critical to design healthcare voice solutions with failsafe mechanisms that guard against misinformation. With the best will in the world, there is a danger that the patient might provide information that isn't accurate – leading to incorrect judgement calls.
Similarly, understanding what the patient has not said is as important as analysing what they do say. In order to generate trust, voice needs to develop the capacity to read between the lines in the same way as a healthcare professional does.
Most important is the need for voice in healthcare to take a patient-centric approach. When designing voice tech solutions, it’s easy to prioritise business outcomes, such as reduced call centre volumes or shorter queues in surgeries.
However, designing end-to-end journeys that function across all channels and emphasise deep understanding of the patient is key to creating consistent, engaging and valuable voice-enabled interactions.
If healthcare providers get the voice user experience right, benefits in terms of patient outcomes, increased operational efficiencies and reduced pressure on healthcare staff and systems will flow.