HIMSSCast: AI assurance labs and improving quality
The idea behind artificial intelligence assurance labs – where large language models and other AI technologies can be simulated and tested – is that they can improve the efficacy and fairness of predictive analytics, disease detection, decision support and other healthcare AI tools.
Defining the role of AI assurance labs within frameworks that aim to establish trust and transparency in healthcare AI – such as the health IT certification regulations promulgated by the U.S. Department of Health and Human Services – has been a government and industry goal for the year ahead.
Several groups and federal agencies – including the Coalition for Health AI – have been working to operationalize AI in ways that ensure patient safety and reliability under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, since revoked by President Donald Trump. Some lawmakers voiced reservations about CHAI, and federal health AI leads, saying their work with the group was complete, stepped down from their CHAI roles in July.
In this week's HIMSSCast, Brigham Hyde, CEO of Atropos Health and member of the Generative AI Work Group, says he believes a lack of AI testing and evaluation standards could prevent providers from implementing advanced disease and risk-prediction models. He explains why partnerships that evaluate the quality of machine learning algorithms from the outset matter, what benefits they bring and where healthcare AI innovation is headed.
"The burden, I think, that should be on companies like mine and others is to show what the expected action and benefit is," he told Healthcare IT News.
Like what you hear? Subscribe to the podcast on Apple Podcasts, Spotify or Amazon Music.
Talking points:
- Why developing standards is so important.
- The role AI assurance labs play in helping to ensure health equity.
- How providers who deploy ML platforms could benefit from AI assurance processes.
- Stifling innovation, and how that may benefit larger companies.
- Controlling agentic workflows and the trajectory of healthcare AI in 2025.
- Balancing data quality testing costs and model transferability.
More about this episode:
CHAI launches open-source healthcare AI nutrition label model card
Republicans want changes from HHS on AI assurance labs
Explainer: Thinking through the safe use of AI
MITRE, UMass launch health AI assurance lab
Looking ahead to emerging AI regulations and more
Can government and industry solve racial bias in AI?
Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.