27 September 2023

Empathetic tech: Could carebots replace human doctors?

Jeremy Howick* says new research has revealed that positive messages need to be repeated, specific, personalised and delivered by someone in a position of authority to have an impact. He explores what this research means for AI in healthcare.


Research shows that doctors who offer empathic and positive messages can reduce a patient’s pain, improve their recovery after surgery, and lower the amount of morphine they need.

But that doesn’t mean that simply telling a patient something like “this drug will make you feel better” will have an effect.

It’s more complicated than that, as our latest research shows.

Effective positive messages are usually repeated, definite, specific, and personal.

They should also be communicated by an authority figure who shows empathy.

While our study does not identify which components of a positive message are most effective (the sample was too small), the results imply that a positive message will not have the desired effect if, for example, it is not specific or personalised, or is delivered by a doctor perceived to lack authority and empathy.

What does all this mean for digitally assisted consultations ranging from telephone appointments to “carebots” (artificially intelligent robots delivering healthcare)?

This is an important question to answer, since carebots are being proposed as a cost-effective way to keep up with the care needs of growing elderly populations in the UK and elsewhere.

The pandemic has accelerated the use of digitally assisted consultations, with the then UK health secretary, Matt Hancock, claiming that patients would not want to go back to face-to-face consultations after the pandemic.

Online consultations are different from carebot consultations, but the trend away from human-to-human interaction can’t be denied.

Both have also been rolled out too fast for ethical frameworks to be developed.

Technical and ethical problems

Putting the evidence that positive messages help patients into practice in the digital age is both technically and ethically problematic.

While some components of a positive message (such as “this drug will make you feel better soon”) can be delivered straightforwardly through a mobile phone, via a video call, or even by a carebot, others seem inherently problematic.

For example, the feeling that someone has authority might come from their title (doctor), which is presumably the same whether the doctor is consulted in person or over the telephone.

But studies show authority also comes from body language.

It’s more difficult to display body language over the telephone or on a video call.

While carebots may be able to convey their authority – and they have been shown to display sufficiently sophisticated body language to evoke certain emotions – real humans move differently.

Adapting what we know about authoritative messages for the digital age is not straightforward.

Some studies reveal that while digitally assisted consultations do not seem to be harmful, they are different (usually shorter), and we don’t know if they are as effective.

Also, to make a positive message personal to a patient (another of its components), it might be necessary to pick up on subtle cues, such as a downward glance or an awkward pause, which studies have shown can be important for making accurate diagnoses.

These cues may be more difficult to read through a telephone call, let alone by a carebot – at least for now.

These are not just technical problems, they are ethical too.

If digitally assisted healthcare consultations are not as effective at delivering positive messages, which, in turn, result in better care, then they threaten to violate the ethical requirement to help patients.

Of course, if carebots can do things more cheaply or reach more people (they don’t need to sleep), that might balance things out.

But these competing ethical considerations need to be weighed carefully, and this has not yet been done.

For carebots, this raises other ethical and even existential issues.

If being empathic and caring is a key component of an effectively delivered positive message, it is important to know whether carebots are capable of caring.

While we know that robots can be perceived as caring and empathic, being perceived as caring is not the same as actually caring.

It may not matter to some patients whether empathy is feigned or real as long as they benefit, but again, this needs to be fleshed out rather than assumed.

Researchers are aware of these (and other) ethical issues and have called for a framework to regulate the design of carebots.

Our study of positive messages suggests that these new ethical frameworks would benefit from incorporating the latest evidence about the complexity of effective communication, positive and otherwise.

At the end of such an analysis, it may turn out that digitally assisted healthcare consultations and carebots are as good as face-to-face consultations.

They could even be better in some cases (some people may feel more comfortable telling intimate secrets to a robot than to a human).

What is certain is that they are different, and we currently do not know what the implications of those differences are for optimising the benefits of complex positive messages in healthcare.

*Jeremy Howick is Director of the Oxford Empathy Programme, University of Oxford.

This article first appeared at thenextweb.com and is republished under a Creative Commons license.
