Big data and public health, part 2: Reducing unwarranted services
Spending on unwarranted healthcare services, where no actual measurable benefit is obtained, has been estimated in the range of $250-$325 billion annually in the U.S., according to Thomson Reuters data from 2009. Unwarranted use of healthcare services is the largest single component of the $600-$850 billion in excess healthcare spending that can be attributed to embedded inefficiencies; inefficiencies that ultimately increase healthcare costs and decrease the overall quality of public health.
The first step that agencies can take to reduce healthcare costs and improve quality is to identify the surplus of discretionary health care services. This is where analysis of patient and population data comes in.
Enter Big Data
Hospital organizations are sitting on large data sets, typically at the petabyte scale per hospital, composed of individual patients' electronic information. Analyzed properly, these data have the potential to root out systemic inefficiencies in healthcare services. Sorting through this data, however, presents a substantial challenge.
Even within a single patient record there is a wide variety in the type and format of the data. Today, this data is becoming more complex as structured data (data that resides in fixed fields within a record) becomes commingled with unstructured data in the form of free text, images, audio, and video files.
During periods of critical patient care, this data can arrive at high velocity, requiring a quick, time-sensitive response. Hospitals are increasingly facing information overload and need to implement data strategies, ranging from data use to data retention, to uncover the information buried within these large volumes of population data. Collecting an overall dataset with the individual case details for each member of the entire patient population can help identify inefficiencies in healthcare services.
Reducing Unwarranted Healthcare Expenditures
Specific areas of unwarranted health services include: overuse due to fee-for-service incentives; marginally valued direct care that has no measurable benefit or shows no improvement in patient outcome; unnecessary diagnostic or imaging tests that are performed to protect against malpractice exposure; and high cost diagnostics performed on patients at very low risk for the condition.
To systematically reduce these unwarranted expenditures, healthcare organizations are moving away from the current fee-for-service payment model and toward reimbursement for services based on health outcomes. The new Accountable Care Organizations (ACOs) are working to provide pay-for-performance incentive models. An ACO is a payment and delivery reform model that ties provider healthcare reimbursements to quality measures and works to reduce the total cost of care for a population of patients. Predictive and prescriptive analytics are inherently embedded in this new model of care.
Predictive analytics looks at what might happen in a given health situation; prescriptive analytics tells hospitals or patients what they might want to do in the future to address it. Both analytic approaches are powerful tools for identifying unwarranted health services.
The analytic approach uses patterns found in historical data sets, such as medical records, to identify risks, trends, and associations. One well-known example is credit scoring, used throughout the financial services industry. In healthcare, predictive analytics can be used to address unwarranted care by answering questions such as:
• What is a patient’s specific risk for readmission to a hospital over the next 30 days?
• What is the specific outcome a diagnostic test will likely have on the current treatment plan?
• What specific medical procedures, tests and prescribed drugs provide no measurable benefit in patient outcome?
• What specific tests are performed primarily for medical liability reasons?
Each of these questions can be answered with large population datasets, predicting a specific risk or outcome from an individual's own history and the past behavior of the population. This is similar to how Amazon makes a book recommendation based on both your past purchases and the patterns of its entire population of book buyers. Predictive analytics says what you are likely to buy; prescriptive analytics goes further and recommends a decision.
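To make the readmission question concrete, here is a minimal sketch of the kind of risk model described above: a logistic model that turns a few fields of a patient record into a 30-day readmission probability. The field names and coefficients are hypothetical placeholders, not clinically validated values; in practice the coefficients would be fitted against a large historical population dataset.

```python
import math

# Hypothetical coefficients for illustration only; a real model would be
# fitted on historical population data and clinically validated.
COEFFS = {
    "prior_admissions": 0.45,    # admissions in the past year
    "age_over_65": 0.60,         # 1 if the patient is over 65, else 0
    "chronic_conditions": 0.30,  # count of chronic diagnoses
}
INTERCEPT = -3.0

def readmission_risk(patient):
    """Return the modeled probability of readmission within 30 days."""
    z = INTERCEPT + sum(COEFFS[k] * patient.get(k, 0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic function maps z to (0, 1)

high_risk = readmission_risk(
    {"prior_admissions": 3, "age_over_65": 1, "chronic_conditions": 4})
low_risk = readmission_risk(
    {"prior_admissions": 0, "age_over_65": 0, "chronic_conditions": 0})
```

A hospital could run such a score over every discharge and target follow-up care at the highest-risk patients, which is where the prescriptive step comes in.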
Predictive and prescriptive analysis in government health
Large government healthcare organizations like the Department of Veterans Affairs (VA) and the Military Health System (MHS), with unique patient data sets covering about 18 million lives, have the opportunity to draw new insights from the clinical, operational and (where connected) financial data elements. These data, coupled with analytical models, support data-driven decisions about care delivery models.
Similarly sized, statistically significant data sets are now being used in the private sector to match patient profiles with population data to monitor disease progression and medical outcomes. The commercial data sets now available are de-identified, ensuring privacy protection and HIPAA compliance. The VA and MHS should consider additional ways to use their big data to improve quality and medical outcomes while reducing the overall cost of care.
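De-identification is a precondition for pooling records like this. As a rough sketch, a de-identification pass strips the direct identifiers from each record and coarsens quasi-identifiers (such as a full birth date) before the record enters a population dataset. The field names here are hypothetical, and this is far from a complete implementation; HIPAA's Safe Harbor method enumerates 18 categories of identifiers that must be removed.

```python
# Direct-identifier fields to drop (hypothetical schema; a real pass
# must cover all identifier categories required by HIPAA Safe Harbor).
DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone", "email",
                      "medical_record_number"}

def de_identify(record):
    """Return a copy of the record with direct identifiers removed
    and the birth date coarsened to year only."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in clean:                      # e.g. "1950-06-01"
        clean["birth_year"] = clean.pop("birth_date")[:4]
    return clean

record = {"name": "Jane Doe", "ssn": "000-00-0000",
          "birth_date": "1950-06-01", "diagnosis": "COPD"}
clean = de_identify(record)
```

The clinical content (here, the diagnosis) survives for analysis while the fields that identify the patient do not.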
Medicare and Medicaid cover more than 90 million lives. Analysis of the Medicare Current Beneficiary Survey data from the Centers for Medicare & Medicaid Services (CMS) shows that regional variations in healthcare spending are the largest single contributor to healthcare costs that yield no discernible improvement in patient outcomes, according to a 2009 report in the New England Journal of Medicine, Getting Past Denial: The high cost of healthcare in the United States. In other words, certain regions of the U.S. deliver healthcare with the same outcomes at much lower cost.
The big data challenge will be to systematically identify these performance differences and root out the unwarranted services. One example is the practice of ordering dual high-cost CT and MRI imaging tests together. While each test has its merits and together they might provide a more complete picture of the medical problem, often the critical clinical assessment can be made with just a single test. The use of population data tailored for individual patients can help sort out when multiple tests are unwarranted.
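A sketch of how population data could surface the dual-imaging pattern just described: scan de-identified imaging orders and flag patients who received both a CT and an MRI of the same body region within a short window. The record layout (patient ID, modality, body region, order date) and the 30-day window are assumptions for illustration; a flag is a prompt for clinical review, not proof that the second test was unwarranted.

```python
from datetime import date

def flag_dual_imaging(orders, window_days=30):
    """Flag patient IDs with a CT and an MRI of the same body region
    ordered within window_days of each other. Each order is a tuple
    (patient_id, modality, region, order_date)."""
    flagged = set()
    seen = {}  # (patient_id, region) -> list of (modality, order_date)
    for pid, modality, region, day in orders:
        key = (pid, region)
        for prev_modality, prev_day in seen.get(key, []):
            if (modality != prev_modality
                    and abs((day - prev_day).days) <= window_days):
                flagged.add(pid)
        seen.setdefault(key, []).append((modality, day))
    return flagged

orders = [
    ("p1", "CT",  "head",  date(2012, 3, 1)),
    ("p1", "MRI", "head",  date(2012, 3, 10)),  # same region, 9 days apart
    ("p2", "CT",  "chest", date(2012, 3, 1)),
    ("p2", "MRI", "head",  date(2012, 3, 5)),   # different region, not flagged
]
```

Run over an entire population, a query like this turns the regional-variation finding into a concrete list of cases for clinicians to review.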
The Agency for Healthcare Research and Quality (AHRQ) is a small agency within the Department of Health and Human Services that supports research to help people make more informed decisions and to improve the quality of healthcare services. In 2010, AHRQ made the largest federal investment connecting medical liability to quality, funding states' efforts to implement and evaluate patient safety approaches and medical liability reform. This population-based research is most effective when researchers can access and review the huge patient record databases and registries used in comparative effectiveness studies. AHRQ's mission is directly focused on making use of the collective datasets of patient populations, and the agency could benefit greatly from scientific computing and big data management support and best practices.
Multiple agencies across the government need to use the tools of predictive analytics and big data-sized population health datasets to identify unwarranted healthcare services. Implementing analyses that support the systematic review of vast stores of de-identified patient electronic records will improve the quality of patient care and decrease the overall cost of healthcare services.
The next article in this series will address how big data can be used to combat waste, fraud and abuse in healthcare.
Roger Foster is a Senior Director at DRC's High Performance Technologies Group and an advisory board member of the Technology Management program at George Mason University. He has over 20 years of leadership experience in strategy, technology management and operations support for government agencies and commercial businesses. He has worked on big data problems in scientific computing, in fields ranging from large astrophysical data sets to health information technology. He has a master's degree in Management of Technology from the Massachusetts Institute of Technology and a doctorate in Astronomy from the University of California, Berkeley. He can be reached at firstname.lastname@example.org, and followed on Twitter at @foster_roger.