Is the future of AI salutogenic or pathogenic?
While the question smacks a bit of a TV quiz show, it’s actually a pithy way of asking a much bigger question. Namely, whether the new options AI can bring to healthcare are better off being used to cure disease or prevent disease in the first place.
To a significant extent, the answer is likely to be “Both,” but in a recent commentary, author and healthcare CEO Pete Trainor puts the emphasis on the “salutogenic” option. Coined in 1979 by medical sociologist Aaron Antonovsky, the word ‘salutogenesis,’ says Trainor, “comes from the Latin word salus, meaning health, and the Greek word genesis meaning origin. As a medical approach, it focuses on factors that support human health and well-being, rather than on factors that cause disease (pathogenesis).”
In his view, since “we find ourselves in a world where people are generating so much data about themselves, it’s time healthcare companies stopped focusing on the treatment of conditions and started looking at our data to help us make functional adjustments to our lifestyle to stop illness, rather than treat it.”
To that end, Trainor points to three possible use cases for AI in “salutogenesis.”
First, we can use it to get control of our lives, so to speak. Noting that “we often sacrifice our long-term health for short-term gains” by “working long hours, eating sporadically, and exercising at irregular intervals,” Trainor says we can and should use AI systems, such as those embedded in wearables or smartphones, to “analyze your historical decision-making data to automate the tips and tricks that might nudge you to help prioritize cause over an eventual condition.”
In other words, to help us do the right thing when it comes to taking care of ourselves.
Similarly, Trainor argues, AI algorithms can be used to help us sift through the array of healthcare products and services available on the market, enabling us to take advantage of “personalized life-style adjustments (that) can be beneficial for any one-to-one long-term health strategy.”
Finally, there’s mental health. In Trainor’s view, we tend to choose our paths based on a series of decisions that often occur at certain life moments. Indeed, he argues life “itself can be mapped out as a series of Goals, Standards, and Preferences (GSP) . . . (and) if we help people choose and set goals, or focus on near-term events that they would like to achieve, and use intelligent-decision-nudging in order to drive a behavior, we have a genuine possibility of utilizing AI to evaluate GSP and assess the utility of any potential action a person could take.”
In short, “technology has the potential to guide people through the outcomes in their decision making that could create complicated long-term mental health challenges.”
For Trainor, the bottom line is that “AI technology in healthcare has been so focused on analysing data, images and historical cases to support doctors’ treatment of conditions, we may have missed the most significant opportunity; helping future patients themselves with a sense of coherence, and understanding of the cause of many negative health outcomes.”