Testing algorithms key to applying AI and machine learning in healthcare
Artificial intelligence and machine learning systems are gaining ground in healthcare, with everyone from tech giants like Google and Amazon to startups building systems for healthcare provider organizations.
Claims that algorithms beat physicians at their jobs surface every month. But are all algorithms created equal? And will investing in machine learning result in meaningful gains for an organization?
The Algorithm Science team at Partners Connected Health invests a great deal of time thinking through the right questions, working out potential pitfalls and developing best practices for evaluating machine learning-based solutions. Sujay Kakarmath, MD, post-doctoral research fellow at Partners Connected Health/Harvard Medical School, offers advice to healthcare organization CIOs and other IT staff on working with algorithms.
“From our experience, most healthcare organizations do not evaluate algorithms in the context of their intended use,” Kakarmath said. “The technical performance of an algorithm for a given task is far from being the only metric that determines its potential impact. Evaluation of the true cost of implementing an algorithm should take into account factors such as the technical infrastructure and human resources required, cost of acting on false positives, cost of inaction on false negatives as well as decay in algorithm performance that occurs in diseases where medical science is evolving rapidly, such as cancer.”
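The cost factors Kakarmath lists – acting on false positives, inaction on false negatives, infrastructure, and performance decay – can be combined into a rough expected-cost estimate. The sketch below is illustrative only; the function, its parameters and every dollar figure are hypothetical assumptions, not figures from Partners Connected Health.

```python
# Illustrative sketch: rough yearly cost of deploying a screening algorithm,
# folding in false-positive/false-negative costs and a simple performance-decay
# assumption. All names and numbers here are hypothetical.

def expected_annual_cost(n_patients, prevalence, sensitivity, specificity,
                         cost_fp, cost_fn, fixed_cost, annual_decay=0.0):
    """Estimate the yearly cost of acting on an algorithm's predictions.

    annual_decay: fractional drop in sensitivity/specificity per year,
    a crude stand-in for performance decay as medical practice shifts.
    """
    sens = sensitivity * (1 - annual_decay)
    spec = specificity * (1 - annual_decay)
    positives = n_patients * prevalence
    negatives = n_patients - positives
    false_negatives = positives * (1 - sens)   # missed cases
    false_positives = negatives * (1 - spec)   # needless workups
    return fixed_cost + false_positives * cost_fp + false_negatives * cost_fn

# Hypothetical scenario: 10,000 patients, 5% prevalence, 90% sensitivity and
# specificity, $500 per false positive, $10,000 per missed case, $50,000 in
# infrastructure and staffing. Comes out to roughly $1.02M per year.
cost_year1 = expected_annual_cost(10_000, 0.05, 0.90, 0.90,
                                  cost_fp=500, cost_fn=10_000,
                                  fixed_cost=50_000)
```

Even a toy model like this makes the point concrete: with these assumed numbers, error-handling costs dwarf the fixed implementation cost, so raw accuracy alone says little about an algorithm's true impact.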
Kakarmath and his colleagues employ best practices for evaluating machine learning-based systems – a necessity for technology this new and rapidly evolving.
“We like to shift our focus from the algorithm to the underlying problem it intends to solve,” Kakarmath said. “This seemingly simple change in mindset yields the greatest dividend in the long run. Our interdisciplinary team consists of data scientists, physicians, software engineers and human-centric design experts who spend a lot of time consulting with intended end users of the algorithm – physicians, nurses and case managers – to understand how an algorithmic solution might fit into their workflow.”
Following this, they conduct external validation studies to evaluate the robustness of the algorithm’s performance under varying data conditions. This “crash test” of sorts is designed to reveal any fragility in the algorithm’s performance that would be expected during its implementation in any real-world electronic health records system.
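A "crash test" of this kind typically means training on data from one site and checking how far performance falls on data from another. The snippet below is a minimal sketch of that idea using synthetic data and scikit-learn; the site names, the drift parameter and the feature model are all assumptions for illustration, not the team's actual validation protocol.

```python
# Illustrative external-validation "crash test": fit a model on one site's
# data, then measure how much discrimination (AUC) degrades on a second
# site whose data distribution has drifted. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, shift=0.0):
    """Toy patient features; `shift` mimics site-to-site drift
    (different demographics, coding practices, lab calibrations)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    logits = X @ np.array([1.0, -0.5, 0.8, 0.0, 0.3])
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

X_dev, y_dev = make_site(2000)              # development site
X_ext, y_ext = make_site(1000, shift=0.7)   # external site with drift

model = LogisticRegression().fit(X_dev, y_dev)
auc_internal = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
# A large gap between auc_internal and auc_external flags fragility
# before the model ever touches a real EHR system.
```

In practice the external data would come from a different hospital's EHR rather than a shifted synthetic distribution, but the comparison of internal versus external performance is the heart of the exercise.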
“This information is then factored into a benefits analysis that looks at clinical, efficiency, equity and cost outcomes – metrics that a healthcare organization is really looking to improve upon when investing in an algorithmic solution,” Kakarmath explained.
Kakarmath will be speaking on this subject at the HIMSS and Healthcare IT News Precision Medicine Summit May 18 at the Grand Hyatt Washington in Washington, D.C.
HIMSS Precision Medicine Summit
Accelerating precision medicine to the point of care is the focus of the summit in Washington, D.C., May 17-18.