Automated care: thermostat of health or Ponzi scheme?
One of the concepts occupying my mind is that of automated care. The last time I wrote about automated care was February of 2011 (Emotional Automation, Revisited). Lately I've been thinking about it more and more. The burden of chronic illness continues to rise and the size of the provider workforce is not keeping up. This manifests as overworked, unhappy providers, particularly in primary care.
Sooner or later we'll need to accept that some care processes must be automated. Other industries have done this, and whether it's pumping your own gas or checking yourself in at the airport, we seem to like it just fine. In fact, the airline industry plans to take self-service to a new level. But in healthcare, we don't just employ one person where other industries have automated; we employ three or four to do redundant work. We have a long way to go.
However, the idea that a software agent or a robot might take on various aspects of your care tends to 'creep out' both patients and providers. Maybe it's because they fear that robots will run amuck.
Yet folks like my friend Tim Bickmore have capably shown that in some instances, patients actually prefer a software agent to a person. There are several examples, in addition to Tim's work, that show how software agents can indeed be caring. Buddy, a virtual companion from the new company Geri Joy, is one such example. It's a cuddly representation of a dog that appears on a tablet but responds to voice and to touch the way a real pet would.
Effective automated care includes feedback loops and emotional responses. A feedback loop typically involves some measured parameter; the idea behind automated care is to send effective, caring messages to an individual based on what that loop reports.
A great example of a feedback loop we take for granted is the thermostat. We're all better off because of internal climate control whether it be in a New England winter or the summer in Abu Dhabi. Thermostats work well and fade into the background. We don't even realize they are working most of the time.
So it would be in the case of an effective automated care structure. Some signal would come in from the patient's remote monitoring device(s) and a caring response would go out to the individual. When that process works well, the individual will be comforted and pleased, as in the case of Tim Bickmore's relational agents or Geri Joy's Buddy. If, however, the feedback loop is not crisp, the result would be more like robots gone amuck, or like a Ponzi scheme: a positive feedback loop with no control and the eventual implosion of the system.
In a Ponzi scheme, 'A' produces more of 'B,' which in turn produces more of 'A,' a classic example of a positive feedback system. The 'A' is apparent profit and the 'B' is new investors. Money from new investors is paid out as profit, which attracts still more investors; left unchecked, the loop grows rapidly toward collapse.
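The difference between the two loop types can be sketched in a few lines of Python. This is purely an illustrative toy model (the numbers and functions are mine, not drawn from any real system): the thermostat applies negative feedback, nudging the measured value back toward a set point, while the Ponzi-style loop feeds its output straight back into its input.

```python
def thermostat_step(temp, setpoint=20.0, gain=0.3):
    """Negative feedback: move a fraction of the way back toward the set point."""
    return temp + gain * (setpoint - temp)

def ponzi_step(investors, growth=0.5):
    """Positive feedback: each round of 'profit' recruits proportionally more investors."""
    return investors + growth * investors

temp, investors = 10.0, 100.0
for _ in range(20):
    temp = thermostat_step(temp)
    investors = ponzi_step(investors)

print(round(temp, 2))    # settles near the 20-degree set point
print(round(investors))  # grows without bound -- over 300,000 after 20 rounds
```

Run for any number of steps, the thermostat quietly converges and fades into the background; the Ponzi loop only accelerates until something outside the loop stops it.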
When we use information from the past to influence behavior in the future, that is feedback. A heart failure patient eats pizza and gains three pounds, putting him or her at risk for a visit to the emergency room. Cause and effect. But, when we share that data with the patient, he or she will be more aware of the cause and effect, and will likely avoid eating pizza in the future.
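A minimal automated-care loop built on that idea might look like the sketch below. Every name and threshold here is hypothetical and for illustration only (real heart-failure monitoring protocols vary): daily weigh-ins stream in, a rapid gain triggers a caring message, and the message closes the loop by making the cause and effect visible to the patient.

```python
def check_weight(readings, threshold_lbs=3.0):
    """Compare the latest weight to the recent baseline and flag a rapid gain.

    `readings` is a list of daily weights in pounds, oldest first.
    The 3-pound threshold is a hypothetical example value.
    """
    baseline = min(readings[:-1])
    gain = readings[-1] - baseline
    if gain >= threshold_lbs:
        return (f"We noticed a {gain:.1f} lb gain since your last check-in. "
                "Rapid weight gain can mean fluid retention -- consider a "
                "low-salt day and let your care team know how you feel.")
    return "Nice work -- your weight is holding steady. Keep it up!"

print(check_weight([181.0, 181.5, 181.2, 184.5]))  # triggers the caring alert
print(check_weight([181.0, 181.5, 181.2, 181.4]))  # routine encouragement
```

The tone of the returned message is the point: the same loop that a thermostat uses to move air can be used to deliver a caring, informative nudge rather than a cold alarm.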
All of you are leaders in creating the new generation of healthcare. It is well recognized that we must act boldly. We have the capability of building systems that are more like Buddy or Tim's nursing agent. We must avoid building systems that are like the Cleverbot example.
It's in our hands. Are you up to the challenge?
Joseph C. Kvedar, MD, is the founder and director of the Center for Connected Health, part of Partners HealthCare in Boston.