Room for improvement in EHR quality measurement, says Weill Cornell study
Crucial to enter data in right place, so it's captured by reporting algorithms

NEW YORK | January 16, 2013
A new study by Weill Cornell Medical College spotlights the need for improving quality measurement from electronic health records.
The study, whose results are published in the Jan. 15 issue of Annals of Internal Medicine, shows how EHRs, designed to document clinical care for individual patients, can do better when it comes to gauging the quality of that care.
Funded by the Agency for Healthcare Research and Quality (AHRQ), the research took a cross-sectional look at a health network in New York. Its findings show that the accuracy of quality measures can vary widely, and that electronic reporting can both underestimate and overestimate quality.
Meaningful use incentives are based, in part, on the ability to electronically report clinical quality measures. By 2014, providers will be expected to document and report care electronically; the next year, they will face financial penalties if they don't do so. But the Weill Cornell report suggests there's much work to do in the meantime.
"This study reveals how challenging it is to measure quality in an electronic era," said the study's author, Rainu Kaushal, MD, director of the Center for Healthcare Informatics Policy, chief of the Division of Quality and Medical Informatics and the Frances and John L. Loeb Professor of Medical Informatics at Weill Cornell, in a statement. "Many measures are accurate, but some need refinement."
Improving quality measurement "is critically important to ensure that we are accurately measuring and incentivizing high performance by physicians so that we ultimately deliver the highest possible quality of care," she added. "Many efforts to do this are under way across the country."
Weill Cornell researchers analyzed clinical data from the EHRs of one of the largest community health center networks in New York state. They examined the accuracy of electronic reporting for 12 quality measures, 11 of which are included in the federal government's set of measures for incentives. They found fairly good consistency for nine measures, according to the report, but not for the other three.
The automated reports generally performed well, according to researchers. But they underestimated the percentage of patients receiving prescriptions for asthma medications and the percentage receiving vaccinations against bacterial pneumonia.
A third measure suggested that more patients with diabetes had their cholesterol under control than actually did. The automated report said 57 percent of eligible diabetic patients had controlled cholesterol, while a manual review of the charts showed the true figure was only 37 percent. Part of the problem, the study suggests, is that physicians and nurses may be entering data in EHR fields that the quality reporting algorithms do not capture.