Is meaningful use stifling innovation in health IT?
I write this post after reading Margalit Gur-Arie's excellent post on Alternative Health IT. She's one of those people whose writing I not only read, but make sure others know about automatically via Twitter. It's a short list.
I'm a bit challenged by her post, because in many ways I agree, and in others, I disagree. What I find stifling is the pace at which meaningful use is proceeding. When you put an entire industry under the MU pressure cooker, the need to meet federal mandates overwhelms everything else. The need to develop software that supports a large number of externally controlled mandates can, and in many cases has, resulted in bad engineering. You can't innovate well on a deadline. Innovation is not a well-understood, repeatable process (actually, it is repeatable, but few are able to define and execute on it; that's for another post). What results is often "studying to the test", and neither developers nor end users ever really learn the lessons that meaningful use is attempting to teach. I've seen multiple cases where developers produce a capability that meets the requirements of the test, but fails to meet the requirements of the customer. See John Moehrke's excellent analysis of what it takes to pass encryption tests in Stage 1. If you do JUST what it takes, you can wind up with something that customers don't need and won't use.
In some ways, the test itself is to blame. But in other ways, our attitudes about what that software is supposed to do are to blame. Margalit makes several points about the utility of gathering family history and smoking status for patients for whom other things are more important. One of the first rules of care in the ED is to stabilize the patient. That means that other things (like capturing medical history) should wait. If your EHR forces you into a workflow that doesn't support that necessity in the ED, by all means, replace it. If your process requires the capture of data for every single patient when you don't always need it, perhaps you should revise the process. The metrics in meaningful use require that, for patients admitted (in a hospital) or treated (in an ambulatory or ED setting), 80% have smoking status and 20% have family history recorded. This isn't an all-or-nothing measure. MU isn't saying do it every single time, but it is saying that this should be part of your practice for most or some patients. I agree the numbers might be usefully adjusted for different settings, but arguably it is also a lot less costly and challenging (for the government) to set one measure for everybody. Is it fair to consider that X is too high a number? Possibly. Is there a number that would make everyone happy? Hell no. So we pick one, live with it, and move on. It's NOT quite as scary or as stupid as it might seem.
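To make the point concrete, here's a minimal sketch of how a percentage-threshold measure like those above works. The function name and patient counts are illustrative, not drawn from any actual attestation software; the real measures also define exclusions and denominator rules that I'm glossing over here.

```python
def meets_measure(numerator: int, denominator: int, threshold: float) -> bool:
    """Return True if the measure's performance rate meets the threshold.

    A hypothetical simplification: real MU measures also define
    exclusions and specific denominator populations.
    """
    if denominator == 0:
        return False  # no qualifying patients; real rules handle this case
    return numerator / denominator >= threshold

# Smoking status recorded for 850 of 1000 patients, against the 80% floor:
print(meets_measure(850, 1000, 0.80))  # True
# Family history recorded for 250 of 1000 patients, against the 20% floor:
print(meets_measure(250, 1000, 0.20))  # True
```

The point of the threshold design is exactly what the paragraph above argues: the measure passes even though some patients (say, those being stabilized in the ED) never had the data captured at all.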
The test for providers is more challenging than the test for developers. It is made up of a couple of dozen questions, each of which is pass/fail, and you have a year, or 90 days, or whatever, before you find out whether you've passed (although you can monitor your progress). Failing any single question means failing the test overall. This would be like a class in which you are given 10 tests over the course of the semester, and the student's final grade is the lowest grade earned on any single test. I think we'd be better off with a bunch of pass/fail questions and a set metric for the overall score needed to pass (that's what the menu options do, but few actually see it that way, because there often aren't enough of them to constitute real choices).
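The contrast between the two grading schemes above can be sketched in a few lines. This is a hypothetical illustration (the function names and the 90% aggregate threshold are my own invention, not anything in the rule): today's attestation behaves like `all()`, while the alternative I'm suggesting would behave like an overall passing score.

```python
def attestation_passes(results: list[bool]) -> bool:
    # Current scheme: one failed measure fails the entire attestation.
    return all(results)

def aggregate_passes(results: list[bool], required_fraction: float) -> bool:
    # Alternative scheme: pass if enough individual measures pass overall.
    return sum(results) / len(results) >= required_fraction

# A provider who meets 19 of 20 measures:
scores = [True] * 19 + [False]
print(attestation_passes(scores))      # False: one miss sinks everything
print(aggregate_passes(scores, 0.90))  # True: 95% of measures met
```

Under the current scheme, the provider who aced 19 measures and the provider who failed all 20 get the same result, which is the "lowest grade of any single test" problem in a nutshell.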
As for where Meaningful Use is succeeding in fostering innovation, I think there are a few places of note. Blue Button Plus supports unprecedented patient access, and while NOT directly required by name in meaningful use Stage 2, it is built from readily accessible components and requirements that are present in Stage 2. I'm speaking specifically of the View, Download and Transmit requirements in the incentives rule, and on the standards side, Consolidated CDA [arguably a refinement of an innovation produced several years ago] and the standards applied in the Direct Project. Stage 3 has much more to offer, I think, even though we are just starting to get a handle on what it might look like. The Query Health initiative did some really innovative work that supports not just its particular use case (health research), but also automation of quality measurement using HL7's HQMF. If you think developing a declarative means of specifying quality measures (and using it for research as well) isn't innovative, you certainly haven't been viewing the challenges we were trying to solve from my perspective.
It's not all bad. It's not all good. But overall, I think the end result does not paint quite so depressing a picture. And it is just one program. It is the biggest one we have right now, but that's about to change. The ACO rule is kicking in, and we are starting to see providers (who now have an EHR thanks to meaningful use) begin to look at real innovations that support better care. For them, there's only one test score (how much savings there is at the end of a term), but they get to define the curriculum and how it will be learned.