Christiana Care offers tips to 'personalize the black box' of machine learning
Update: HIMSS20 has been canceled due to the coronavirus.
For all the potential benefits of artificial intelligence and machine learning, one of the biggest – and increasingly most publicized – challenges with the technology is the potential for algorithmic bias.
But an even more basic challenge for hospitals and health systems looking to deploy AI and ML can be the skepticism from frontline staff – a hesitance to use predictive models that, even if they aren't inherently biased, are certainly hard to understand.
At Delaware-based Christiana Care Health System, the past few years have seen efforts to "simplify the model without sacrificing precision," says Dr. Terri Steinberg, its chief health information officer and VP of population health informatics.
"The simpler the model, the more human beings will accept it," said Steinberg, who will talk more about this notion in a March 12 presentation at HIMSS20.
When it comes to pop health programs, the data sets used to drive the analytics matter, she explains. Whether it's EHR data, social determinants of health, claims data or even wearables information, it's key to select the most relevant data sources, use machine learning to segment the population – and then, crucially, present those findings to care managers in a way that's understandable and fits their workflow.
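As a concrete illustration of that segmentation step, here is a minimal sketch – not Christiana Care's actual method – of clustering patients into risk tiers from a single risk score using a tiny one-dimensional k-means. All patient scores and the helper name `kmeans_1d` are invented for this example.

```python
# Hypothetical sketch: segmenting a patient population into risk tiers
# with a simple 1-D k-means over risk scores. Scores are invented.

def kmeans_1d(values, k, iters=50):
    """Cluster scalar values into k groups; returns the centroids
    and a cluster label for each value."""
    # Spread initial centroids across the (roughly ordered) values.
    centroids = sorted(values[:: max(1, len(values) // k)][:k])
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Move each centroid to the mean of its assigned values.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

risk_scores = [0.05, 0.07, 0.12, 0.35, 0.40, 0.82, 0.90]
centroids, labels = kmeans_1d(risk_scores, k=3)
# labels group patients into low-, medium- and high-risk tiers
```

In practice a pop health program would cluster on many data elements at once (EHR, claims, social determinants), but the principle – let the algorithm find natural groupings, then hand care managers a small number of understandable tiers – is the same.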
At HIMSS20, Steinberg, alongside Health Catalyst Chief Data Scientist Jason Jones, will show how Christiana Care has been working to streamline its machine learning processes, to ensure they're more approachable – and thus more likely to be embraced – by its care teams.
They'll explain how to assign relative value to pop health data and discuss some of the challenges associated with integrating them; they'll show how ML can segment populations and spotlight strategies for using new data sources that will boost the value and utility of predictive models.
"We've been doing this since 2012," said Steinberg. "And now we have significant time under our belts, so we wanted to come back to HIMSS and talk about what we were doing in terms of programming for care management – and, more important, how we're segmenting our population with machine learning."
"There are a couple of patterns that we've seen repeated across engagements that are a little bit counter to how people typically go about building these models today, which is to sort of throw everything at them and hope for the best," said Jones, of Health Catalyst, Christiana Care's vendor partner.
At Christiana Care, he said, the goal instead has been to "help people understand as much as they would like about how the models are working, so that they will trust and actually use them.
"We've found repeatedly that we can build technically fantastic models that people just don't trust and won't use," he added. "In that case, we might as well not bother in the first place. So we're going to go through and show how it is that we can build models in such a way that they're technically excellent – but also well-trusted by the people who are going to use them."
In years past, "when we built the model and put it in front of our care managers and said, 'Here you go, now customize your treatment plans based on the risk score,' what we discovered is that they basically ignored the score and did what they wanted," Steinberg explained.
But simplifying a given model to the "smallest number of participants and data elements that can be" enables the development of something "small enough for people to understand the list of components, so that they think that they know why the model has made a specific prediction," she said.
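One simple way to think about that pruning – again, an illustrative sketch rather than Christiana Care's actual technique, with invented feature names and weights – is to keep only the smallest set of data elements that carries most of a model's predictive weight:

```python
# Hypothetical sketch: shrink a risk model to the fewest data elements
# that still account for most of its total weight. All names and
# weights below are invented for illustration.

def smallest_feature_set(weights, coverage=0.90):
    """Return the smallest set of features whose absolute weights
    cover at least `coverage` of the model's total weight."""
    total = sum(abs(w) for w in weights.values())
    kept, running = [], 0.0
    # Take features in order of descending importance.
    for name, w in sorted(weights.items(), key=lambda kv: -abs(kv[1])):
        kept.append(name)
        running += abs(w)
        if running / total >= coverage:
            break
    return kept

model_weights = {
    "prior_admissions": 1.8,
    "hba1c": 1.1,
    "age": 0.9,
    "zip_income": 0.3,
    "step_count": 0.1,
}
print(smallest_feature_set(model_weights))
# → ['prior_admissions', 'hba1c', 'age']
```

A care manager can readily reason about three familiar data elements; a model with dozens of opaque inputs, however precise, is the "black box" Steinberg describes.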
That has more value than many population health professionals realize.
"The goal is to simplify the model as much as you can, so human beings understand the components," said Steinberg.
"People like understanding why a particular individual falls into a risk category," she said. "And sometimes they even want to know which feature resulted in that risk. The take-home message is that the more human beings understand what the machine is doing, the more likely they are to trust the machine. We want to personalize the black box."
Steinberg and Jones will talk more about making machine learning meaningful at a HIMSS20 session titled "Machine Learning and Data Selection for Population Health." It's scheduled for Thursday, March 12, from 10-11 a.m. in room W414A.