Getting to accurate electronic clinical quality measures can be a challenge
Electronic clinical quality measures were created years ago as part of the federal EHR Incentive Programs, or meaningful use. At the time, eCQMs held the promise of automatically extracting the discrete data elements required to capture a specific measure of clinical quality, without human intervention.
The initial set of eCQMs was intended to capture the processes of care provided to patients and determine whether a patient received quality care consistent with evidence-based practices. But 10 years after eCQMs were first proposed, EHR vendors and healthcare providers still struggle to extract the existing eCQMs in a way that accurately reflects the quality of care.
Joseph M. Kunisch, RN-BC, enterprise director of clinical quality informatics, regulatory performance, at Memorial Hermann Health System, has long experience with eCQMs. He shared some lessons learned in a recent HIMSS20 Digital session, “Challenges of Capturing Clinically Accurate eCQM Data.”
A computer-coded algorithm
“An eCQM is a computer-coded algorithm to extract discrete data elements that define the population to be measured and the key data elements that will be used to measure the quality of care,” Kunisch explained. “All of this criteria is based on discrete data only, meaning there has to be a numerical computer code attached to that data element.”
The original method for quality measure extraction was called “chart abstraction”: a person, typically a nurse, read the chart and answered a series of questions, guided by hundreds of pages of very detailed instructions to read and interpret. An eCQM, by contrast, extracts the data automatically, without human review, using a computer-coded algorithm written in a specialized computer language.
“One of the most challenging differences: For chart abstraction, the abstractor was able to use all information from all available sources, including text-based documents and scanned images,” Kunisch said. “In contrast, in the eCQM, the data from the various information systems can be useful but must be in a coded discrete data format. This significantly limits where you can get the data in an electronic health record.”
In the evolution from chart-abstracted quality measures, the EHR essentially replaces the human abstractor; thus, IT staff must understand how the computer now abstracts that data. And unlike a person, who can explain the logic used to find the information, the computerized EHR cannot.
An eCQM example
“As an example, VTE-6, which is an incidence of potentially preventable hospital-acquired venous thromboembolism – this measure looks for patients who developed a clot in their large vein called the thromboembolism,” Kunisch explained. “This can cause severe health risk, for example, if the clot breaks loose and travels to the lungs. The worst outcome is death, but at the very least you will require additional medical intervention and possibly suffer results for the rest of your life.”
The quality measure here looks at all patients who developed a VTE while in the hospital, whether those patients were assessed for the risk of developing a VTE, and, if they were at high risk, whether they were treated for the VTE using either a medication or a medical device.
A human abstractor reviewing the chart is trying to answer whether the VTE was present before the patient arrived at the hospital or developed afterward. If it developed in the hospital, did the clinician properly assess the patient for risk factors, and, based on those risk factors, did the clinician order the appropriate VTE prophylaxis regimen?
If the patient came into the hospital with the VTE and the care provided at the hospital did not cause it, the hospital will pass the quality measure. If the patient developed the VTE in the hospital, the hospital is responsible for causing it and would fail the measure – unless the clinician ordered the appropriate treatment, in which case the hospital would pass the measure.
This fairly straightforward review process is not so easy in the world of eCQMs. The algorithm in the VTE example contains 23 logical and/or and and-not statements, and each one is a potential failure point. This makes it very difficult to capture all of the data elements accurately; in many cases, if one of these lines of computer code fails, the entire measure can fail and portray an inaccurate picture of what is actually happening at the bedside.
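To make that failure mode concrete, here is a minimal Python sketch of this kind of chained boolean logic over coded, discrete data. The field names, the single ICD-10 code and the simplified pass/fail rules are illustrative assumptions, not the actual VTE-6 specification:

```python
# Illustrative sketch of chained boolean logic over coded data.
# Field names and codes are hypothetical, not the real VTE-6 spec.

VTE_DIAGNOSIS_CODES = {"I82.401"}  # hypothetical ICD-10 value set

def vte_present_on_admission(chart: dict) -> bool:
    # The eCQM can only "see" coded, discrete data entered in the window.
    return any(dx.get("code") in VTE_DIAGNOSIS_CODES
               and dx.get("hours_from_admission", 999) <= 24
               for dx in chart.get("coded_diagnoses", []))

def measure_passes(chart: dict) -> bool:
    if vte_present_on_admission(chart):
        return True  # excluded: the hospital did not cause the VTE
    if not chart.get("vte_during_stay"):
        return True  # no hospital-acquired VTE at all
    # Hospital-acquired VTE: pass only if prophylaxis was ordered.
    return chart.get("prophylaxis_ordered", False)

# Patient arrived with a VTE, but it was documented only in a text
# report, so no coded diagnosis exists -- the measure fails anyway.
chart = {"coded_diagnoses": [], "vte_during_stay": True,
         "prophylaxis_ordered": False}
print(measure_passes(chart))  # False, despite appropriate care
```

Each `and`/`or`/`not` condition here is one of the potential failure points: a single missing or mistimed coded element flips the entire result.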
Disparate information systems challenge
Another big challenge for managing eCQM reporting comes when providers use disparate information systems, Kunisch noted.
“In our case, we had a separate diagnostic radiology system where the radiologist dictated their findings,” he said. “This report came over to the EHR as a text image report. Again, a human abstractor would easily read this report and exclude the patient because it was clear the patient already had a VTE. But again, in the eCQMs, it’s not able to read that text so therefore you cannot use it to exclude the patient.”
In addition, a coded problem or diagnosis must be entered within the 24-hour time period, meaning a clinician has to actually enter the ICD-10 diagnosis or SNOMED problem within those first 24 hours.
“So let’s say you ask the providers to enter the diagnosis when they read the report and they agree to do this – you cheer because you now have a solid way to capture the right code at the right time,” Kunisch said. “That’s what my team expected, but when we ran our reports we still were not capturing this population of patients that should have been excluded from the quality measure because there was that evidence that they had the VTE prior to the hospitalization as shown in that text report.”
Value sets that support eCQMs
It turned out there was another problem Kunisch and staff did not anticipate: data constrained by value sets. Value sets are groups of discrete coded data elements that support the eCQMs. If a clinician searches the Value Set Authority Center for VTE, it will return the value sets for each data element. In the case of diagnosis codes, one can see the terminology system that is used and, more important, the number of values in each set. Why is this number important? Because those values have to match up with the values in the EHR.
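One way to check that match is a simple set comparison between the value set's codes and the codes actually mapped in the EHR. A minimal sketch, with invented code lists standing in for a real downloaded value set:

```python
# Hypothetical sketch: verify every code in an eCQM value set is
# mapped in the local EHR. The code lists below are illustrative.

value_set_codes = {"I82.401", "I82.402", "I26.99"}  # from the value set authority
ehr_mapped_codes = {"I82.401", "I26.99"}            # mapped in the EHR's tool

# Any code in the value set but missing from the EHR mapping is a
# data element the eCQM can never capture.
unmapped = value_set_codes - ehr_mapped_codes
if unmapped:
    print(f"{len(unmapped)} value-set code(s) not mapped: {sorted(unmapped)}")
```

A gap found this way means some patients will silently fall out of the measure population, which is exactly the class of problem described next.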
“In most EHRs, you have the ability to map specific data elements from your hospital’s EHR data to the vendor’s EHR data element,” Kunisch explained. “But one of the challenges with this is some of the EHR vendors’ content is hard coded, meaning you can’t make changes, so not all content is available in these mapping tools. This becomes a problem on the front-end interface the clinician uses.”
Imagine a busy ER physician who is asked to enter the diagnosis. She enters VTE into the EHR and gets a long list of choices. To further complicate matters, none of them are ICD-10 codes; instead, they are Intelligent Medical Objects, terms that are more understandable for clinicians. The ICD-10 codes or SNOMED problems are mapped to those IMO terms. So what does a typical ER physician do? She picks the first one on the list, or the first one that most closely matches what she is looking for, Kunisch said.
“What we discovered is the top choice in the search box on the left, which was selected most by our clinicians, was not the one mapped to any of the coded values in the eCQMs value set,” Kunisch revealed. “The query then failed and it looked like the patient then developed the VTE in the hospital because that critical ICD-10 code was not entered in the first 24 hours.”
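The failure Kunisch describes can be sketched in a few lines: the top search result is a display term whose mapped code falls outside the measure's value set, so the exclusion never fires. The IMO-style terms, codes and value set below are invented for illustration:

```python
# Hypothetical illustration of the mapping failure described above.
# Display terms, codes and the value set are invented for the sketch.

imo_search_results = [
    # (display term shown to the clinician, code it maps to)
    ("VTE (venous thromboembolism), unspecified", "I82.90-LOCAL"),
    ("Acute embolism of deep veins of lower extremity", "I82.401"),
]
ECQM_VALUE_SET = {"I82.401", "I82.402"}

# A busy clinician picks the top hit in the search box.
term, mapped_code = imo_search_results[0]
excluded = mapped_code in ECQM_VALUE_SET
print(excluded)  # False: the exclusion query fails, and the VTE
                 # looks hospital-acquired even though it was not
```

The clinician did exactly what was asked, at the right time, yet the measure still failed because of how the chosen term was mapped.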
eCQMs not showing reality
Memorial Hermann Health System pulled more than a year’s worth of cases that were coded with the correct VTE ICD-10 codes. Out of the 3,800 cases, not one was captured by the eCQM as showing the VTE was present on admission. In contrast, the coding department, which uses all available documentation, applies present-on-admission indicators, so it knew the VTE was present on admission.
So in essence, if the health system had used the eCQMs to gauge performance, it would have looked like the hospitals were responsible for 96% of those VTE cases, when in reality, in the chart-abstracted performance of the same measure, the hospitals were responsible for less than 2% – and most were at zero.
“One way to help mitigate these challenges is to understand the workflow,” Kunisch explained. “This one is extremely important for all stakeholders. You truly need to understand how the data is captured in the workflow, and to do that you need to understand the entire pathway of a patient as they move through each setting where care is provided.
“For example, during admission you might ask did they come in by ambulance or walk-in, what is the triaging process? For the inpatients, did they transfer in from the ER versus a direct admission, and what staff members were involved in the patient care?”
Then, look at each department’s workflow – in the VTE example, the radiology department had a huge impact on capturing that eCQM. And most important is discharge: What are the steps to discharge? That is when all the documentation is completed, and it is one’s only chance to capture all the data needed to calculate the eCQM correctly.
Within the clinician’s workflow
“Another method is to design ways to capture the data in a clinician’s workflow,” Kunisch suggested. “In a different eCQM we worked on, we had to capture the date and time that a patient who had a stroke was last known to be without those symptoms. This time element is critical to determine how a patient was treated. In the current workflow, it was always captured in a text format, so we could not use it for the eCQM.”
But staff discovered that ER physicians consistently used an order set when they suspected a stroke. So staff created a simple form that would pop up and give physicians the ability to enter the last date and time the patient was not showing stroke symptoms – right there in their workflow, so they could capture it and move on with the rest of their care.
On a final note, despite all the work on eCQMs over the past 10 years, there remain some significant challenges. Some current initiatives could help. These include FHIR, the new interoperability standard on which many of the quality measures will be based; advances in natural language processing, engines that read text and convert it into discrete data elements; and true interoperability, which all healthcare organizations are trying to achieve.
“It is imperative that the future success of eCQMs reporting is reliant on the participation of all key stakeholders,” Kunisch concluded. “You should get involved in any way you can and that way we can all achieve that success.”