Implementation best practices: Kicking off quality and safety tech
There are quality and safety technologies that help ensure the best patient care or the best care environment, and there are quality and safety technologies that help ensure health IT is working at peak performance. All are important to healthcare provider organizations.
Here, five experts in these technologies offer their perspectives, best practices and tips for implementing the tools in a way that will enable health systems to gain their maximum benefits.
Electronic quality measures
When implementing quality technology such as the kind that ingests electronic quality measures, there are two key considerations, said Fred Rahmanian, chief analytics and technology officer at Geneia, a vendor of quality and safety, population health, remote patient monitoring, and analytics healthcare technologies.
“First, designing and implementing the technology in a way that supports physicians and their existing workflow,” he stated. “Much has been written about the epidemic of physician burnout, and we know a significant contributor is technology built and installed without the end user in mind.”
"With a prospective quality measure process, organizations embed the processes’ clinical steps in their workflow to ensure patient care adheres to the care standards."
Fred Rahmanian, Geneia
Second, he advised, prospective quality measure calculation should serve provider organizations and their patients.
“With a prospective quality measure process, organizations embed the processes’ clinical steps in their workflow to ensure patient care adheres to the care standards,” he explained. “This allows them to determine quality measure needs as soon as there is patient contact and a preliminary diagnosis, which in turn improves quality outcomes and results and lessens the burden on physicians, care teams and their staffs.”
Evidence-based, regulations-aligned method
Healthcare CIOs implementing quality and safety technology should use real-time electronic health record data and an evidence-based, regulations-aligned method, said Drew Ladner, chairman and CEO of Pascal Metrics, a quality and safety IT vendor that monitors EHR data near real time to track adverse treatment events and provide solutions to advance the quality of care.
“The 2011 Institute of Medicine – now the National Academy of Medicine – report on Health IT and Patient Safety called on the field to use EHR data to identify safety vulnerabilities,” he advised. “Shortly thereafter, EHR utilization took off, and in the intervening half decade or so, penetration has increased from just 8% to an impressive 96%.”
"The 2011 Institute of Medicine – now the National Academy of Medicine – report on Health IT and Patient Safety called on the field to use EHR data to identify safety vulnerabilities."
Drew Ladner, Pascal Metrics
However, most hospitals and health systems still rely on the “see something, say something” method formally known as voluntary event reporting, and are only able to identify approximately 5% of patient safety events, perhaps 10% if they’re looking at claims data, Ladner added.
“For many reasons – clinical, financial, regulatory, legal and experience – relying on this method indefinitely is unsustainable and, as patients grasp what hospitals are doing in this day and age, likely will lead to incredulity and frustration across patients and their families,” he said.
The first step toward real-time patient safety
There is no need to start from scratch with research and development, which is costly in terms of dollars and, more important, the amount of injury and death that will result while “reinventing the wheel,” he said.
Dr. Don Berwick, the widely acknowledged founder of the modern patient safety and quality movement, has already called on boards and “all hospitals” to use the automated Global Trigger Tool as the “first step” toward real-time patient safety, he explained.
“Now, the imperative is moving quickly beyond the foundational automated Global Trigger Tool to identifying and reducing harm with comprehensive safety and quality platforms that use real-time EHR data,” Ladner said. “As the old Peter Drucker dictum goes, you cannot manage what you do not measure, and high-performance management uses what happened, what is happening and what may happen to answer the question, ‘What do I do right now?’”
In addition to being evidence-based, this new method must also be aligned with regulation, otherwise the investment may result in a costly exercise in unsustainability, he added.
“CMS has announced that significant development and testing of an EHR-based hospital harm measure has already been done, and it’s likely that these publicly disclosed efforts will result in a new EHR-based measure that replaces the almost universally critiqued claims-based, reimbursement-related safety measure: the PSI-90,” he said.
“Also, in June 2019, the NQF Patient Safety Committee approved the first two health IT-based patient safety measures in the overall all-cause harm agenda, namely for hypoglycemia and pressure injury.”
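The automated trigger approach Ladner describes can be sketched in a few lines. Below is a minimal, illustrative Python example of a Global Trigger Tool-style rule for medication-related hypoglycemia; the record structure, field names and 50 mg/dL threshold are assumptions for the sketch, not any vendor's actual implementation.

```python
# Minimal sketch of an automated Global Trigger Tool-style rule on EHR data.
# Record shapes and the threshold are illustrative assumptions.

HYPOGLYCEMIA_MG_DL = 50  # common trigger threshold for a low glucose result

def hypoglycemia_triggers(lab_results, medication_events):
    """Flag glucose results below threshold that follow an insulin dose."""
    insulin_times = [m["time"] for m in medication_events if m["drug"] == "insulin"]
    flagged = []
    for lab in lab_results:
        if lab["test"] != "glucose" or lab["value"] >= HYPOGLYCEMIA_MG_DL:
            continue
        # Medication-related if any insulin dose preceded the low reading.
        if any(t <= lab["time"] for t in insulin_times):
            flagged.append(lab)
    return flagged

labs = [
    {"test": "glucose", "value": 42, "time": 10},
    {"test": "glucose", "value": 95, "time": 12},
]
meds = [{"drug": "insulin", "time": 8}]
print(hypoglycemia_triggers(labs, meds))  # flags only the 42 mg/dL reading
```

In a real-time deployment, a rule like this would run continuously against streaming EHR data and route flagged events to clinical reviewers for validation.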
Clinically validated adverse event outcomes data
Ladner continued with another best practice: Healthcare CIOs should generate EHR-based, clinically validated adverse event outcomes data at adverse event category and sub-category levels.
“Today, claiming that advanced analytics using artificial intelligence will generate significant value in healthcare is almost a cliché,” he said. “The problem is, most research efforts to date, whether peer-reviewed and published in top-tier medical journals or privately sponsored, rely on mortality and morbidity data to train safety prediction models.”
If healthcare wants to predict medication-related hypoglycemia with high accuracy, wouldn’t it be more effective to train a predictive model using fine-grained adverse event outcomes data validating that the patient suffered from medication-related hypoglycemia, versus relying on coarse-grained data that the patient died, Ladner asked.
“Certainly, yes, and that’s exactly why another best practice is to implement a system that generates EHR-based, clinically validated adverse event outcomes data at adverse event category and sub-category levels,” he explained. “First, this provides a gold standard way of measuring patient safety that provides more accurate, timely and actionable analytics than what is available with voluntary event reporting or claims data.”
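The difference between the coarse and fine-grained training labels Ladner contrasts can be illustrated with a toy example; the encounter records and field names below are assumptions for the sketch.

```python
# Illustrative contrast between coarse mortality labels and fine-grained,
# clinically validated adverse-event labels for model training.
# Field names and records are assumptions, not a real data model.

encounters = [
    {"id": 1, "died": False, "validated_events": ["medication_hypoglycemia"]},
    {"id": 2, "died": True,  "validated_events": []},
    {"id": 3, "died": False, "validated_events": []},
]

# Coarse label: did the patient die? (what many published models train on)
coarse = [int(e["died"]) for e in encounters]

# Fine-grained label: did a validated medication-related hypoglycemia occur?
fine = [int("medication_hypoglycemia" in e["validated_events"]) for e in encounters]

print(coarse)  # [0, 1, 0] -- misses encounter 1's harm entirely
print(fine)    # [1, 0, 0] -- the outcome a hypoglycemia model should predict
```

A model trained on the coarse labels never sees the harm event in encounter 1, which is exactly the gap Ladner's argument points at.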
Leading provider organizations are already doing this, using real-time EHR data to measure and manage adverse event outcomes in a scientifically valid and clinically credible way, and to monitor how safety is progressing across a unit, hospital or system, he said.
A valuable lens into quality
“And adverse event outcomes data provides a valuable lens into quality,” he added. “Most health systems do not regularly evaluate policies, procedures, protocols, order sets and the like for how safe they are using an efficient and effective method. And even if they do, it’s likely not comprehensive, systematic, continuous and efficient.”
With a system generating adverse event outcomes at scale, healthcare organizations are able to identify how safety problems are affecting quality, gaining insight into care delivery that other approaches do not afford, he said.
“Additionally, using adverse event outcomes data, providers are able to identify patterns of harm or quality variation not only across 5-10% of patient events – what voluntary event reporting and claims data have been known to capture – but across 90-95% of patient safety events,” he said.
“Today, a healthcare organization using event reporting and/or claims data is performing root cause analyses on a small set of patient safety event data, meaning they are missing many events and many patterns of harm, and have a far poorer understanding of how their care is being delivered,” he added.
Most healthcare organizations do not have EHR-based adverse event outcomes data today, but now is the time to make the switch, Ladner advised.
“Increasingly, health systems are realizing that by doing so, they can secure gold standard actionable measurement, know what the vast majority of patient safety events are, know how safety problems are affecting quality, and have the essential raw material for making critical patient safety and quality improvements,” he explained.
Using predictive analytics
Rahmanian added another best practice when implementing quality and safety technologies: Predictive analytics now allow hospitals and provider organizations to identify, at the time of an initial hospital admission, the patients most likely to be readmitted.
“This means triggers can be automatically added to the provider workflow such as asking the patient before discharge if they have pain, if they have made a follow-up appointment with their doctor, and if they will be able to fill their prescriptions upon discharge,” he said.
Implementing analytics in a way that supports earlier interventions and does so within the existing care team workflow supports provider uptake and leads to better cost and quality outcomes, he added.
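A minimal sketch of how such a readmission-risk trigger might feed the discharge workflow, assuming a risk score already produced by a predictive model; the threshold, data shapes and checklist wording are illustrative, with the checklist items drawn from Rahmanian's examples.

```python
# Sketch of adding discharge-workflow triggers for patients whose predicted
# readmission risk exceeds a threshold. The risk score is a stand-in for a
# real predictive model's output; the threshold is an assumption.

READMIT_THRESHOLD = 0.6

DISCHARGE_CHECKLIST = [
    "Ask the patient whether they are in pain",
    "Confirm a follow-up appointment has been made",
    "Confirm the patient can fill prescriptions after discharge",
]

def discharge_triggers(patient):
    """Return checklist items to add to the care-team workflow, if any."""
    if patient["readmission_risk"] >= READMIT_THRESHOLD:
        return DISCHARGE_CHECKLIST
    return []

print(discharge_triggers({"id": "A", "readmission_risk": 0.72}))  # full checklist
print(discharge_triggers({"id": "B", "readmission_risk": 0.15}))  # []
```

The point of the design is that the trigger fires inside the existing workflow at admission time, rather than asking clinicians to consult a separate analytics dashboard.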
Testing EHRs to ensure quality
On another healthcare quality and safety front, testing based on real-life scenarios tailored to the organization provides critical feedback during an EHR implementation, said Jeremy G. Trabucco, executive vice president, clinical solutions, at Harris Healthcare, a quality and safety health IT vendor.
“Testing must be designed to cover every aspect of the patient journey, from the emergency department through surgeries, therapy and discharge, with every item in between,” he advised. “By testing patient journeys known to the organization, the implementation can eliminate untested surprises and contain project costs by minimizing rework.”
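One way to encode the journey-based testing Trabucco describes is as an ordered list of scenario steps run against the system under test. The toy in-memory "EHR" and step names below are illustrative assumptions, not any product's API.

```python
# Sketch of scenario-based EHR testing: encode a known patient journey as
# ordered steps and verify each step produces the expected system state.

def run_journey(ehr, steps):
    """Execute each step against the EHR under test; collect failures."""
    failures = []
    for name, action, expected in steps:
        result = action(ehr)
        if result != expected:
            failures.append(f"{name}: expected {expected!r}, got {result!r}")
    return failures

# A toy in-memory "EHR" standing in for the system under test.
ehr = {"location": None, "discharged": False}

def admit_via_ed(e):
    e["location"] = "ED"
    return e["location"]

def transfer_to_or(e):
    e["location"] = "OR"
    return e["location"]

def discharge(e):
    e["discharged"] = True
    return e["discharged"]

journey = [
    ("ED admission", admit_via_ed, "ED"),
    ("Transfer to surgery", transfer_to_or, "OR"),
    ("Discharge", discharge, True),
]

print(run_journey(ehr, journey))  # [] -- an empty list means the journey passed
```

Real implementations would replace the toy actions with calls into the EHR's test environment, but the structure — journeys the organization actually sees, expressed as ordered, checkable steps — is the same.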
Quality when implementing a FHIR API
Healthcare CIOs also need to think about quality and safety considerations when implementing a FHIR API, said Raychelle Fernandez, vice president of development at Dynamic Health IT, which focuses on the quality and interoperability of 2015 Edition Health IT Certification Criteria and integrates with any EHR.
“HL7 has provided a checklist to assist implementers as they consider the impact of the FHIR API on their systems,” she said. “This list is a good starting point when reviewing environmental safety needs for this implementation.”
"The CIO should consider cost of integration, cost to run API, support for the API, quality of code, testing coverage."
Raychelle Fernandez, Dynamic Health IT
Fernandez offered some additional suggestions for specific items to review.
“Ensure the FHIR API is at a minimum 2015 Edition ONC Certified on §170.315(g)(7), §170.315(g)(8) and §170.315(g)(9),” she recommended. “Keep in mind that HL7 recommends FHIR Release 4, which is the first normative version. Ask to review the ONC test tool Inferno result. Request a test deck and provide your own test deck for review of the API functionality to ensure the API implementation fosters interoperability.”
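As a concrete starting point for the version check Fernandez recommends, a FHIR server's metadata endpoint (GET [base]/metadata) returns a CapabilityStatement whose fhirVersion is "4.0.1" for Release 4. A minimal sketch of validating that response follows; the sample statement and the specific checks chosen are illustrative.

```python
# Sketch of a pre-implementation check on a FHIR server's CapabilityStatement
# (the resource returned by GET [base]/metadata). FHIR R4 servers report
# fhirVersion "4.0.1" (or "4.0.0" for the initial R4 publication).

R4_VERSIONS = {"4.0.0", "4.0.1"}

def check_r4(capability_statement):
    """Return a list of problems; an empty list means the server looks R4-ready."""
    problems = []
    if capability_statement.get("resourceType") != "CapabilityStatement":
        problems.append("metadata endpoint did not return a CapabilityStatement")
    if capability_statement.get("fhirVersion") not in R4_VERSIONS:
        problems.append(
            f"fhirVersion is {capability_statement.get('fhirVersion')!r}, not R4"
        )
    if "json" not in capability_statement.get("format", []):
        problems.append("server does not advertise JSON support")
    return problems

sample = {
    "resourceType": "CapabilityStatement",
    "fhirVersion": "4.0.1",
    "format": ["json", "xml"],
}
print(check_r4(sample))  # [] -- no problems found
```

Checks like this belong alongside, not instead of, the ONC Inferno results and test-deck review Fernandez describes.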
Review the trusted connection and have a firm understanding of the security testing the API has undergone before implementing, she added. Release the API to the team that thinks like the ‘bad guys’ do, she said. They, she indicated, will ensure the API is protected against the OWASP Top 10 list of the most common application vulnerabilities:
- Injection.
- Broken authentication.
- Sensitive data exposure.
- XML External Entities (XXE).
- Broken access control.
- Security misconfigurations.
- Cross Site Scripting (XSS).
- Insecure deserialization.
- Using components with known vulnerabilities.
- Insufficient logging and monitoring.
There are other considerations, Fernandez advised, related to quality, reliability and cost.
“The CIO should consider cost of integration, cost to run API, support for the API, quality of code, testing coverage,” she said. “Will the API be hosted? And if so, where? What are the load balancing and backup requirements? What will be the overall usage of the API? Does it create a reduction in tech debt? Does it increase revenue, and if so, over what time period? What resources are needed for the API and will any of the support need to be outsourced?”
The role of RTLS in quality
Adam Peck, vice president of marketing at CenTrak, a vendor of real-time location systems that aim to improve safety in hospitals, said that healthcare organizations are increasingly evaluating locating and sensing services, often referred to as real-time locating systems, to improve operational efficiency and the quality of patient care.
“When selecting an RTLS solution, it is important to look for a platform that can meet immediate needs while scaling to future enterprise demands,” he advised. “Evaluation of the various RTLS technologies used and how granular the location data that technology can produce is imperative. When looking for a solution to find information such as ‘Is this piece of equipment on this floor,’ an estimation-based RTLS technology – such as Wi-Fi or BLE alone – would be sufficient.”
"Evaluation of the various RTLS technologies used and how granular the location data that technology can produce is imperative."
Adam Peck, CenTrak
However, if the healthcare organization is looking to automate equipment PAR-levels and document workflow for EHR data entry or other processes, an RTLS platform that provides certainty-based location data – such as second generation infrared or advanced ultrasound – is needed, he contended. RTLS technology is not one-size-fits-all, and what works in some departments for some use cases may not work in others, he said. Choosing a platform that can leverage multiple technologies and customize the system to fit the organization's needs is ideal, he added.
“Additionally, the interoperability and scalability of the platform should allow healthcare organizations to leverage the technologies and systems already in place as well as new applications that may be required in the future,” he said. “Actionable location data, produced by an RTLS platform, can drive process improvement via integrations with traditional HIS vendors and niche RTLS players alike – making the entire enterprise ‘smarter.’”
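The PAR-level automation Peck mentions can be sketched as a simple comparison of RTLS-derived equipment counts against each department's PAR level; the data shapes, department names and PAR values below are illustrative assumptions.

```python
# Sketch of RTLS-driven PAR-level automation: compare room-certain equipment
# counts from tag reads against each department's PAR level, flag shortfalls.

PAR_LEVELS = {"ICU": {"infusion_pump": 4}, "ED": {"infusion_pump": 6}}

def par_alerts(tag_reads):
    """tag_reads: list of (department, equipment_type) from certainty-based RTLS."""
    counts = {}
    for dept, equip in tag_reads:
        counts.setdefault(dept, {}).setdefault(equip, 0)
        counts[dept][equip] += 1
    alerts = []
    for dept, pars in PAR_LEVELS.items():
        for equip, par in pars.items():
            on_hand = counts.get(dept, {}).get(equip, 0)
            if on_hand < par:
                alerts.append((dept, equip, on_hand, par))
    return alerts

reads = [("ICU", "infusion_pump")] * 4 + [("ED", "infusion_pump")] * 3
print(par_alerts(reads))  # [('ED', 'infusion_pump', 3, 6)]
```

Note that this kind of automation only works when the underlying location data is room-certain; with estimation-based RTLS, a pump reported "somewhere on the floor" cannot be counted against a department's PAR level.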