At a time when Google's AI can beat the world's leading player of Go, the ancient Chinese board game, and IBM's Watson computer can recommend the same cancer treatment as doctors in 99% of cases, how long will it be before we see Wall-E in a lab coat?

It sounds like a scene from a science fiction movie – a robot doctor computing what’s wrong, a robot nurse dispensing your pills, and a robot surgeon making the incision – but while many AI sceptics are quick to point out that this scenario is still a long way off, recent technological leaps have made it clear that we’ve moved beyond dumb decision-making machines. As a healthcare communications agency, we wanted to explore this topic further.

All fun and games

So far, the most notable applications of Artificial Intelligence have been for entertainment purposes. When Google’s clever computer defeated the world champion at Go, it hadn’t ‘simply’ learned all the potential moves (Go has an almost impossible number of game permutations); it had learned how to spot patterns on the board. It knew what looked good, mirroring the strategy used by the best human players.

Even more recently, the updated Google Translate tool, which is based on neural networks, surprised its creators by spontaneously inventing its own language to help solve translation challenges.

Elementary, my dear Watson

Within the healthcare industry, machines have been proven to be better at detecting skin cancer than humans, and robots are already helping to make surgeons’ cuts more accurate. IBM’s Watson health engine promises “a new era of cognitive health”. So far, its super processing power is being used for genomics and cancer research. In a study at the University of North Carolina School of Medicine, Watson demonstrated that it was able to recommend the same cancer treatment as human doctors in 99% of cases. What’s more, thanks to its ability to take into account all the available research and trials, Watson was able to find additional treatment options in 30% of cases.

Considering that “by 2020 doctors will face 200 times more medical data than a human could possibly process”, Watson could be the essential sidekick to every GP in the country. How long before health insurers consider it too risky for your physician NOT to consult a data engine such as Watson?


Knowing and understanding are two different things

Of course, physicians bring far more to the table than their ability to recall information about research and trials. They are able to foster relationships with their patients and can even tell when something isn’t quite right from just a glance.

AI can’t replace the human element, the sense, the feel, the intuition, that is essential in medicine

A machine relies on the evidence available – it doesn’t know anything until the tests have been completed – and if that data is poor, then the patient will get a poor diagnosis and poor treatment. Humans can interpret a fellow human’s level of pain and discomfort, which will vary from patient to patient. Doctors can use their human understanding and intuition to see past the data to uncover what’s important.

Or can they?

Humans are not set up to be logical, empirical decision-makers; we are only set up to feel that we are. In fact, we use “heuristic” shortcuts all the time, which is abundantly evident in research on the kinds of biases that creep into medical decision-making. These include overconfidence, the anchoring effect, information and availability bias, and tolerance of risk, all of which can lead to the wrong diagnosis and to ineffective or even harmful treatment. The impact is huge: human doctors make errors simply because they are human, with an estimated 400,000 deaths associated with preventable harm in the US each year.

Prevalence of cognitive biases in the three most comprehensive studies; numbers represent the percentage frequency of each bias.

Are we ready for a revolution?

Even if technology could improve diagnosis, ensure that you received the most appropriate treatment and make surgery more accurate, we still haven’t got past the fact that the idea of AI robots replacing doctors is a bit creepy.

However, a quick straw poll in the Create Health office showed that 10 in 12 of those who use an AI assistant such as Siri or Echo in everyday life converse with the device beyond basic commands, saying “please” and “thank you”, or asking how the computer is feeling today.

This might be our geeky love for technology shining through, but it does seem to indicate that even basic technologies can be imbued with personality, simply because they talk and are not predictable in their responses. It’s certainly true that for many people, smartphones have become an extension of self, augmenting their human capabilities.

Recently, USC Viterbi’s Louis-Philippe Morency created Ellie: the therapist avatar. Ellie was designed to treat PTSD by analysing over 60 factors, including body language and tone. Some patients have actually found Ellie more helpful than the real thing, while others didn’t even realise that Ellie wasn’t human. It seems it’s entirely possible for a human to feel some level of connection with a robot.

On the other hand, no matter how much of a “relationship” you are able to establish with a machine, it seems unlikely that a machine will ever respond to human emotions in the way a person would. For example, this article refers to a woman suffering from terminal cancer: “She took huge comfort from her doctor saying to her, ‘You’re still the same woman I met 16 months ago; you’re still exactly yourself.’ She repeated this a lot. It must have meant a great deal.” It’s doubtful that a machine could have detected this underlying source of distress and offered the same comfort. The future of robot doctors, therefore, may depend on how much we need to feel that they care as much as we do.

The imitation game

This evolution from an entirely human-based healthcare system to an increasingly automated system that enhances human judgment will take time, and there are many ways in which it can happen.

When Turing wrote about artificial intelligence, he framed the question as “the imitation game”, in which machines “pretended” to be human. But nowadays we have a different understanding of the future of AI. As Steven Pinker, professor of psychology at Harvard, wrote recently: “Just as inventing the car did not involve duplicating the horse, developing an AI system…won’t require duplicating a specimen of Homo sapiens.”

It’s perhaps more likely, at least in the imaginable future, that technology will be used more and more to augment, rather than replace, physicians’ skill, experience, knowledge and care. It’s likely that professions such as radiology, pathology and ophthalmology – all of which rely to a large extent on complex imaging and algorithms – will be significantly impacted.


The robot will see you now

Thirty years ago, we couldn’t have imagined that we’d have the smartphones of today, so it’s almost impossible to predict where we’ll be 30 years from now.

Would you want your doctor to ignore empirical evidence about alternative treatments from IBM Watson? Would you feel comfortable putting your life in the hands of a robot surgeon? Will robots ultimately take the care out of healthcare? These are the questions we will need to answer going forward – perhaps sooner than we expected. For more articles like this, visit our On Our Minds blog page.