Eric Topol: EHRs have 'taken us astray,' but AI could fix healthcare in a 'meaningful and positive way'
Dr. Eric Topol, founder and director of Scripps Research Translational Institute, is a longtime healthcare visionary. A cardiologist, geneticist and digital health pioneer, his ideas have been at the forefront of healthcare technology for decades.
Topol's research and reporting on emerging tech, data, devices, personalized medicine and more have been explored in four books – the most recent of which, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, was published today.
Healthcare has made some big strides with information and technology, says Topol, but too much of that has been to its detriment. Technology has helped make the practice of medicine "robotic," he writes, "to disastrous effect."
Physicians are burned out, patient experience is suboptimal, to say the least, and medical errors abound. But artificial intelligence holds the potential to help fix all of that.
"If we exploit machines' unique strengths to foster an improved bond between humans," he says, "we'll have found a vital remedy for what profoundly ails our medicine today."
There are big challenges ahead, and healthcare professionals will "need to be prepared to fight against some powerful vested interests, to not blow this opportunity to stand up for the primacy of patient care," Topol says.
But properly and humanely deployed, AI and machine learning have the potential to restore efficiency to a wide array of burdensome healthcare processes, freeing up physicians to treat their patients in the way they deserve, he says. "The path won't be easy, and the end is a long way off. But with the right guard rails, medicine can get there."
Topol spoke with Healthcare IT News recently about healthcare's "totally depersonalized" current state – and the potential for AI to help restore its humanity.
Q. A recent report showed a surprisingly high number of companies that call themselves AI startups don't actually use AI in any significant way. But despite all this hype and hot air – and there's a lot of it – it seems clear that artificial intelligence is here to stay. And you clearly feel that it will be a net positive for healthcare, despite some very real hurdles that need to be overcome.
A. There isn't a question about the hyperbole, and I deeply review that in the book: We're long on promise and short on validation – on real data, on readiness for use. So that's the main point, that if we really go after this, and embrace the potential and do it right, we can get over this hype phase and really get into changing the future of healthcare in a very meaningful and positive way.
Q. Explain what you mean by these concepts of "shallow medicine" and "deep medicine."
A. Shallow medicine is rife with problems such as lack of time. So there's very little human contact. And in the minutes when it occurs, it's compromised by keyboard and screen and minimal, if any, eye contact. So it's totally depersonalized and that's a big problem. But moreover, the clinicians involved – doctors, nurses and other health professionals – suffer from burnout. More than half of them, the peak ever. As well as clinical depression and a high number of suicides, even. The disillusionment can lead to a doubling of errors, or a more than doubling of errors. That's been amply proven.
There are more than 12 million medical errors in the U.S. Serious ones. Diagnostic errors. They're one of the leading causes of death. We don't have a very good picture right now for how healthcare is being practiced, and that's why I believe it's shallow.
So you have this vicious cycle of people who feel they've lost their way, making more errors. And then even further disillusionment: You have patients who feel that they're not being cared for because there's so very little time – less of a bond, and the sense of presence and trust and that whole precious relationship that has deteriorated.
Because we don't understand each human being, we don't integrate all the data, we don't use our resources efficiently, we overdo things. It's full of waste. At least a third of all health care in the U.S. is deemed wasteful. We can do better than this. And that's what deep medicine can bring us.
Q. Technologically, at least, we've come a long way in just a short time, relatively speaking. We're talking about AI now, but less than 10 years ago the industry was talking about how to get hospitals to even adopt basic electronic health records ...
A. I think electronic health records have been the singular biggest disaster in the last two decades of healthcare. The idea was right, but the execution has been pitiful. They're set up for billing purposes. They couldn't care less about the patient-centric vision. They're pathetic. They're the worst software in existence that I know of. They've really taken us astray, and have given the whole digital era in healthcare a bad name. They're uniformly hated by patients and doctors, because they involve such poorly-performing user interfaces, and are the single worst part of the deteriorating doctor-patient relationship.
Q. So what should be done to improve EHRs, to help position them to capitalize on the emergence of AI and enable a better physician and patient experience? Isn't the data in EHRs critical to the success of machine learning?
A. We can't really do deep learning for individuals without each person having all of their data. Now, that is complicated because more and more people are going to be generating their own data through wearable sensors – wearable medical-grade sensors – and also doing their own lab tests through their smartphones. So that's one place where there's no home for that data.
Then there's genomics and your microbiome and these other biologic layers of data. But people don't want that in their medical chart, their traditional medical record. So that's another area that's "homeless" today.
But people can't even get their medical records – all of them – and they need to have them from the time they're in the womb, all the way to the present moment. No one has that in this country, essentially. And that is vital to having the input for deep neural networks to get the best output. But we are so far from that point and that requires a policy change that people own their medical data. It's their data, it's their body and it's important for their health.
Q. Policy makers at least seem to be thinking along those lines – like the information blocking rule that was published this past month, for instance. Are you at least heartened by proposals like that?
A. No, I'm not, unfortunately. Because it's not just information blocking, which has been going on for years. It's the whole paternalism of the medical establishment – that hospitals and health systems and doctors own people's data. And patients can't get their data. Even if there wasn't information blocking, which is still rampant, despite whatever regulations come down from the mountain, in practice people can't get their data. I can't even get my data. So this is a really pathetic situation, because it's basically the business of medicine. And the last person that the business of medicine really cares for is the patient.
Q. So clearly some pretty big changes need to take place. But beyond technology, they require changing some deeply entrenched habits – that's obviously a challenge.
A. Well, in countries like Estonia, people own all their data. And it's highly secure, and you can share parts of it or all of it, whenever and with whomever you want to. You can donate it for medical research. You can sell it. It's your data. And there are other countries that are following that lead. But here in the U.S. there's no movement on this.
And there's no AI strategy. I mean, I just finished an extended commission by the U.K. government to plan for AI in the U.K. for the NHS. Here's a country that's putting in billions of pounds of their resources and their workforce to integrate AI.
We're doing nothing. We have a zero-dollar investment in this country. The recent Trump declaration that it's a top priority was accompanied by zero resources.
Q. And it had very few specifics.
A. And very few specifics. Exactly.
Q. But clearly you're optimistic about the big picture for AI in healthcare. It's why you wrote the book.
A. Oh, I am excited about it. I'm just worried that it won't be happening in this country. It's happening already in China – although that's a completely different culture, where the government has all the people's data. So at least they have it all gathered, you know. And there is no real privacy, really. So, OK – that's not exactly what I would consider a model. But they are implementing AI at scale. They have immense amounts of data, and the will and the resources and the plan.
But in places like the U.K. and other countries that I've been connected with one way or another, they are making a very deliberate strategy too. This is the greatest potential we have to fix what ails us in healthcare. We're still a long way from that. But when you think about the waste, the inefficiency, the lack of productivity, the horrible workflow that we have – never mind the relationship between patients and their physicians. All these things. There's a remedy in store. It's out there, dangling.
But going after it needs planning. Because AI could also make everything worse. It could make inequities worse, it could make the burnout worse, if we don't plan this.
Q. Let's talk more about the negatives in just a bit. But back up and pretend that all of this AI is coming to a hospital near you tomorrow. There's so many different applications: predictive algorithms, digital pathology and diagnostics, patient-facing chatbots. What are some of the use cases you're most excited about, and think could have the most immediate impact?
A. Well, I think one thing you've missed is that it's not just a hospital-doctor story. It's also a person story. I mean, we've already had smartwatches with heart rhythm detection out there with a deep learning algorithm, FDA-approved for consumers. And there's going to be a lot more of that. If we have people diagnosing their skin lesions or their child's ear infections and all these other reasons for going to a doctor, that's going to be huge. That's a big part of healthcare: the routine, not-serious but important things that need accurate, rapid, inexpensive diagnosis.
Longer term, the biggest thing of all is remote monitoring and getting rid of hospital rooms. And there, the opportunity is extraordinary. Because obviously the main cost in healthcare is personnel. And if you don't have hospital rooms, you have a whole lot less personnel. So setting up surveillance centers with remote monitoring – which can be exquisite and very inexpensive with the right algorithms, when it's validated – would be the biggest single way to improve things for patients, because they're in the comfort of their own home. They're not subject to the risk of nosocomial infections. That would be very well appreciated by patients if it was proven to be safe and effective.
Shorter term, the biggest thing for clinicians is getting rid of the keyboard. Completely. Complete liberation from the keyboard. Which is attainable.
Q. And what then are some of the areas that give you greatest concern? At HIMSS19 I saw a panel discussion with folks from AMA and Cleveland Clinic and Microsoft, all talking about some of the big ethical questions that need to be ironed out with AI. To say nothing of the patient safety risks if it's not integrated into clinical workflows. What are some of the areas where we need to tread especially carefully?
A. I have a whole chapter on "deep liabilities" in the book, and I emphasize that this is a significant problem. Inequities are a very severe problem in the U.S. especially, and this could make things much worse if these tools are only provided for affluent people. We're talking about cheap chips and software, which could be made freely available to those who are in need. And that should be a whole different way going forward than the way it has been.
But there is no shortage of concerns here. Privacy and security is at one level. The explainability of algorithms – the "black box" – is another. And the ethics of deployment of algorithms. There's so many issues that have to be grappled with. But they all have a soluble path. It just means you've got to work on it.
Q. Do you think that healthcare, which has no shortage of other problems to work on right now, is ready for something this big?
A. Well, if it wants to get over its state of mess and ridiculous expense and deterioration of the human-to-human bond, it has to. Because it's going south so fast, and so extraordinarily poorly. We keep investing more in human capital, with worse human outcomes. That is the worst business model I've ever heard of.
And we keep doing it. Instead of getting machines to help the performance of clinicians, whether it's radiologists, pathologists, dermatologists, ophthalmologists – in every walk of life in healthcare, instead of doing that we just keep hiring more people.
It accounts for the largest fraction of all this job market hotness that we're in right now. We're still adding more jobs. The number one economy is healthcare jobs. And we're not doing what we can to reduce the reliance on human capital, but rather start to train machines to do the tasks. And that includes administrative tasks: billing, coding, all the back office operations.
Q. We're getting there, bit by bit, on the clinical side. FDA has approved more devices and apps, for instance, and has made changes to its regulatory processes for AI tools. What else should other policymakers be doing to get ready for this new era?
A. In the U.S.? I don't know anything that's being done. I mean, Scott Gottlieb just resigned. I can't believe this. He was like the best part of the entire administration. This is a big blow. Oh my gosh. Wow.
Q. So you seem to take a pretty dim view, then, of where we are in the States compared to the rest of the world.
A. Well, I mean, I've worked a lot with the U.K. I've worked a lot with New Zealand. I'm familiar with the scene in China. And I did a global survey for the book, in the chapter on health systems. So, you know, these days we're just trying to survive, day to day, here in this country. Healthcare is just eating the economy: 18 percent today, 20 percent tomorrow, there's no limits. And there's no planning. No cohesive strategy. And we just watch one thing after another: whether it's pharma prices, the insulin and the specialty drugs, or the healthcare jobs. I mean, every metric. We're the only advanced country in the world where life expectancy has gone down three years in a row.
Q. You think AI could help address some of this stuff. But we spoke earlier about radiologists and dermatologists. What do you say to those physicians who are saying, "I don't want to see my job become obsolete thanks to an algorithm," or "I don't want a smartphone app that can take a picture of a mole to be putting me out of business"?
A. You need to think about the synergy. I mean we already have enough data to show that we can markedly augment the performance of physicians, so that radiologists don't miss things. And overall, machine learning can be trained to see things that humans can't even see.
I mean, to be able to look at a 12-lead cardiogram and tell what the heart function is, or whether the person is likely to develop a heart arrhythmia. We can't see these things. Or to be able to look at a smartwatch and tell the blood potassium level without any blood.
All these things that you know can be done through machines now should be embraced, so human performance will be greatly augmented. Productivity, efficiency and workflow can all be substantially improved. Rather than say, "It's going to take my job," realize that, no it's not, it's going to make your job so much better. It gives, potentially, the ultimate gift of time. Time to spend with patients and to be able to deliver better care. And that's why we all went into medicine in the first place.
Healthcare IT News is a HIMSS Media publication.