What hospitals should consider when choosing AI tools
Some healthcare organizations are turning to artificial intelligence and machine learning because of the enhancements these advanced technologies can make to patient care, operations and security. But assessing the promises of the technologies can be difficult and time-consuming, unless you’re an expert.
Two such experts weigh in with insights hospitals should understand when both planning and purchasing AI tools.
Raj Tiwari is chief architect at Health Fidelity, which uses natural language processing technology and statistical inference engines, mixed with analytics, to identify and correct compliance risks, and Brent Vaughan is the CEO of Cognoa, a company that develops AI tools for diagnosing medical conditions.
Their advice: Know that AI and machine learning are augmentative tools, understand that size matters among data sets, insist on real-world applicability, and make sure the tools are properly trained and validated.
To draw a baseline: at this point in time, AI is more akin to augmented intelligence than artificial intelligence, and, as far as machine learning is concerned, hospitals should think of it as a supplement to human expertise, experience and decision-making.
“AI is a tool that enhances our capability, allowing humans to do more than what we could on our own,” Tiwari said. “It’s designed to augment human insight, not replace it. For example, a doctor can use AI to access the distilled expertise of hundreds of clinicians for the best possible course of action. This is far more than he or she could ever do by getting a second or third opinion.”
That augmentation works only when AI recommendations are analyzed carefully. Much of the buzz around AI and machine learning comes from the creators of AI tools themselves. That’s understandable, because this group is focused on what AI can do to improve healthcare and other realms.
“People who implement and deploy real-world solutions based on AI need to ask big-picture questions,” Tiwari said. “Specifically, how does it assist the end user? AI should be treated as one of the many tools at the disposal of the user, not the definitive solution.”
Healthcare organizations need to make sure the team that developed their AI tools has a deep enough understanding of the relevant industry, Cognoa’s Vaughan said.
“Many people in the machine learning and AI world, especially consultants, feel that great AI can be developed without requiring deep domain knowledge – they will say that their AI solution is ‘domain agnostic,’” Vaughan said. “Many would not agree – and in healthcare, this can be particularly untrue.”
Healthcare data sets, in fact, are often much smaller than those in other consumer and business applications. Unlike AI tools that serve up ads or pick one’s next movie based upon tens of millions of data points, healthcare AI tools often rely on data sets orders of magnitude smaller. That, in turn, requires AI developers to have deeper industry knowledge and understanding of the data, because coding mistakes and data misinterpretation are amplified in smaller data sets.
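The amplification effect is simple arithmetic. A toy sketch (the record counts and error rate below are assumed for illustration, not drawn from either company's data) shows how the same number of miscoded records distorts a small clinical data set far more than a consumer-scale one:

```python
# Toy illustration (assumed numbers): the same 50 miscoded records shift an
# estimated event rate far more in a small data set than in a large one.

def observed_rate(true_positives: int, miscoded: int, total: int) -> float:
    """Observed positive rate when `miscoded` negatives are coded as positives."""
    return (true_positives + miscoded) / total

TRUE_RATE = 0.10  # assume the real event rate is 10%
MISCODED = 50     # assume 50 records are coded incorrectly in both cases

for total in (1_000, 1_000_000):  # small clinical set vs. large consumer set
    tp = int(total * TRUE_RATE)
    obs = observed_rate(tp, MISCODED, total)
    print(f"n={total:>9,}: observed rate {obs:.4f} "
          f"(error {abs(obs - TRUE_RATE):.4f})")
# n=    1,000: observed rate 0.1500 (error 0.0500)
# n=1,000,000: observed rate 0.1001 (error 0.0001)
```

In the small set, 50 bad records inflate the estimated rate by half; in the large set, the same mistake is negligible — which is why developers working with clinical-scale data need the domain knowledge to catch such errors rather than average them away.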
Real-world applicability is a must. One of the biggest challenges to machine learning adoption across the healthcare industry is scalability, Tiwari said.
“An algorithm may work flawlessly in the controlled academic or limited clinical setting, but translating that to the real world can introduce any number of complications,” he said. “For example, if the tool is trained by using data from a research hospital, it may not function well in a regular hospital where many patients have incomplete medical records.”
Those records may be missing critical pieces of data, and the tool would need to be able to account for that. Data cleanliness and processing speed can also be hurdles outside the neat environment of research applications.
Healthcare organizations also need to make sure their AI tools were trained and validated with representative populations, Vaughan said.
“Since the training and validation data sets often are much smaller in healthcare, the differences between populations can become exacerbated,” he explained. “For example, primary and secondary or tertiary care settings can see dramatically different incidence rates for different events. An AI tool that is good at predicting a particular outcome in one setting might have a much higher error rate in the other setting.”
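Vaughan's point can be made concrete with Bayes' rule: a tool with fixed sensitivity and specificity has a very different positive predictive value when a condition's prevalence differs between care settings. A minimal sketch with assumed performance and prevalence figures (not from Cognoa):

```python
# Illustrative only (assumed sensitivity, specificity, and prevalence values):
# the same tool produces different positive predictive values (PPV) in
# settings with different condition prevalence, per Bayes' rule.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(condition | positive result) for a given population prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

SENS, SPEC = 0.90, 0.90  # the tool's performance is held fixed

print(f"Primary care  (assumed 1% prevalence):  PPV = {ppv(SENS, SPEC, 0.01):.1%}")
print(f"Tertiary care (assumed 30% prevalence): PPV = {ppv(SENS, SPEC, 0.30):.1%}")
# Primary care  (assumed 1% prevalence):  PPV = 8.3%
# Tertiary care (assumed 30% prevalence): PPV = 79.4%
```

With identical accuracy on paper, most positive results in the low-prevalence setting are false alarms — which is why validation on a population representative of where the tool will actually be deployed matters.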
And AI tools in healthcare must help meet security and compliance commitments, Tiwari said.
“As we build and leverage machine learning models, software vendors and organizations that implement them must be cognizant of data compliance and audit requirements,” Tiwari said. “These include having appropriate usage agreements in place for the data being analyzed.”
Having adequate permissions in place goes without saying; commitments to patient data privacy and security are a must. In certain cases, machine learning systems can inadvertently leak private information, Tiwari explained.
Such occurrences could be disastrous and significantly hinder further adoption of AI and machine learning out of fear.