AI is disrupting clinical practice, so how tech is implemented matters
As artificial intelligence continues to find its way into clinical workflows, the reaction of healthcare professionals runs the gamut: Some are dubious, some are overly optimistic, some are surely scared the technology could soon make their jobs obsolete.
Each, in its own way, is a valid perspective, because so much about how AI applications perform in healthcare depends on the technology, the algorithms, the quality of the data and the deftness with which the tools are integrated into care processes.
But there's little doubt that when those things are approached in the right way, AI and machine learning have a lot to offer healthcare diagnosis and decision support – provided their insights are given the proper weight when it comes to developing treatment plans.
Still, plenty of physicians and clinicians have skepticism to spare, if not outright hostility, said Jeff Axt, project manager and systems analyst in the IT department at the Hospital for Special Care in New Britain, Connecticut.
At the HIMSS18 Machine Learning & AI for Healthcare event in Las Vegas on March 5, Axt will offer a presentation exploring how AI is disrupting clinical practice, and sharing his strategies for managing that cultural change and implementing the technology effectively.
Key to doing it right, he said, is knowing where the technology should be inserted into a workflow, avoiding the risks of overselling its capabilities, and highlighting its value as a tool to improve diagnosis and lower risk of error.
Axt has been researching clinical applications of AI technology for more than a decade, exploring the ways artificial neural networks could help diagnose mild traumatic brain injury and assess the prognosis for spinal cord injuries.
Over the course of his research (his doctoral dissertation is titled "Artificial Neural Networks: A systematic review of their efficacy as an innovative resource for healthcare practice managers") Axt has learned a lot about the capabilities and limitations of AI technology.
"Human beings are boatloads more complex than we get credit for," he said.
Axt's particular area of focus is researching ways to bring AI into actual practice. When it comes to working applications into clinical workflows, a key first step is knowing where to deploy them, he said.
"The first rule is engaging the clinician at the point of decision," said Axt.
That's in keeping with the so-called "Five Rights" of clinical decision support, first articulated by clinical informaticist Jerome Osheroff, MD:

- The right information (evidence-based guidance, response to clinical need)
- to the right people (the entire care team – including the patient)
- through the right channels (e.g., EHR, mobile device, patient portal)
- with the right intervention formats (e.g., order sets, flow-sheets, dashboards, patient lists)
- at the right points in workflow (for decision making or action).
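To make the framework concrete, here is a minimal sketch of how a CDS intervention might be modeled against the Five Rights. This is purely illustrative: the class and field names are invented for this example and do not correspond to any real CDS product or standard.

```python
# Illustrative only: a hypothetical data structure mapping a single CDS
# intervention onto Osheroff's "Five Rights." All names are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class CDSIntervention:
    information: str        # right information: evidence-based guidance
    recipients: List[str]   # right people: care team, including the patient
    channel: str            # right channel: EHR, mobile device, portal
    format: str             # right format: order set, dashboard, alert
    workflow_point: str     # right point in workflow: where action is taken

def is_complete(iv: CDSIntervention) -> bool:
    """A simple completeness check: every one of the five rights
    must be specified before the intervention is deployed."""
    return all([iv.information, iv.recipients, iv.channel,
                iv.format, iv.workflow_point])

alert = CDSIntervention(
    information="Guideline: recheck renal function before dosing",
    recipients=["attending physician", "pharmacist", "patient"],
    channel="EHR",
    format="order set",
    workflow_point="medication order entry",
)

print(is_complete(alert))  # True
```

The point of a structure like this is that a gap in any one of the five fields – say, an alert with no defined workflow point – flags an intervention likely to be ignored or resented.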
It's key, said Axt, "to find a way to engage the AI at that point in the decision-making process" where it can be most effective.
But just as important, he added, is "allowing the clinician to control" that decision-making process.
"You can't force the issue," said Axt. "You can't just say, 'Surprise, this is what the diagnosis is.' A clinician will slam their fist down on the table, turn you off, say, 'See ya' and go return to what they've been doing – practicing for 30 years."
That sort of credence in the decision-making power of machines might just be what scares people most.
It's critical to differentiate between the "generalist approach to the human brain, versus the specificity of an AI technology," said Axt. "There's a discrepancy there that many AI developers don't understand. It's taking into consideration components that the AI system cannot manage and cannot integrate into its network or its algorithm. Thirty years of experience has great value."
At the same time, however, "human beings are just as susceptible to heuristics and biases that will lead them astray a certain percentage of the time," he said.
So the right way to use AI is to look at it as a consult, much as you would with many complex medical issues – you'd go find a physician who practices in the same area and see what they have to say.
AI should be deployed in the same way.
"It's providing the clinician with a consult that's really easy to access, they don't have to worry about scheduling time and they don't have to redo the examination. You've got the data. Process it through that algorithm and see what it says, he added.
The technology may say, "You may want to rerun that lab test." Or it may say, "You know, what's coming up is exactly what you've got. Let's go forward."
The key is that the "final decision must be in the hands of the clinician," said Axt. "That's where most of the fear – in the literature and what I'm seeing in practice – is coming from. One of my clinical managers once said, 'I'm really good at diagnostics, don't tell me what to do.' And he's right, he really is good at diagnostics. We have to recognize that."
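The consult-not-command pattern Axt describes can be sketched in a few lines: the algorithm's output is packaged as a suggestion the clinician can accept or reject, and only the clinician's choice takes effect. All of the names here (Suggestion, clinician_review, and so on) are hypothetical, for illustration only.

```python
# A minimal sketch of "AI as consult": model output is advisory,
# and the final decision stays with the clinician. Names are invented.
from dataclasses import dataclass

@dataclass
class Suggestion:
    finding: str       # what the algorithm flagged
    confidence: float  # the model's own score, 0.0-1.0
    advice: str        # e.g., "consider rerunning the lab test"

def clinician_review(suggestion: Suggestion, accept: bool) -> str:
    """The suggestion is recorded either way, but only the clinician's
    decision changes the plan -- the AI never 'slams down' a diagnosis."""
    if accept:
        return f"Plan updated per consult: {suggestion.advice}"
    return "Clinician retained original plan; consult noted in record."

consult = Suggestion(finding="possible lab anomaly",
                     confidence=0.62,
                     advice="rerun the lab test before finalizing")

print(clinician_review(consult, accept=False))
```

The design choice worth noting is that rejection is a first-class outcome: the consult is logged, but nothing in the workflow forces the clinician's hand.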
But that doesn't mean an AI algorithm can't also offer some valuable perspective.
"I look at the AI as a consult – in a human context, representing something like a physician assistant or advanced practice registered nurse might represent: Something that has skill sets and abilities – not quite to the level of the physicians or the radiologist, but certainly to the extent that there is value to that input."
"Implementing AI into Clinical Workflow: Do This, Don't Do That," is scheduled for 3:05 p.m. March 5 at the Machine Learning & AI for Healthcare event at the Wynn Las Vegas.