Is Empathy Truly the Clinician's Last Stand?
As AI advances, the question arises whether empathetic communication can be mastered only by the human clinician - and whom the patient of the future would choose
The destiny of the doctor in an AI world often sounds preordained. It typically unfolds like this: first, we automate administrative tasks, bolstering the efficiency of care providers and freeing them to focus on clinical decision-making. Next, we enhance the clinician's decision-making prowess, positioning AI as a collaborator that improves the quality of care and of decisions powered by enormous data sets. Ultimately (and perhaps most controversially to some), AI's scope broadens to encompass complex clinical decision-making, incrementally diminishing the clinician's direct involvement in diagnostic and therapeutic "thinking".
This evolution seemingly relegates the healthcare professional to a role predominantly characterized by human-to-human communication and empathetic engagement - the purported last bastion of humanity in healthcare. However, have we considered that, given the option, people might not choose to have a human involved at all?
Ken Liu's thought-provoking short story, "Good Stories," challenged my perception of this trajectory, questioning the assumption that a future civilization will always prioritize human arts and communication. Set in a not-so-distant future, the narrative follows Clara, a writer navigating a world where AI dominates fiction writing, leaving humans to minimally alter texts (1 word out of every 50, to be exact) so that publishers can claim "human-crafted" copyright. Fighting for what she perceives as the public's wishes ("People don't like the idea of consuming art made by a machine"), she advocates for more creative liberty and the preservation of human writing. However, she is confronted with a reality in which people are indifferent to human vs. AI authors. The story probes whether human-made content retains intrinsic value in an era dominated by algorithmically generated summaries and visuals. Societal values, it turns out, change with the times.
(Video generated by OpenAI’s Sora text-to-video model)
Today's technological advancements all point in a direction where we will have convincingly human-like digital avatars powered by increasingly sophisticated AI. We will voluntarily (and at times involuntarily) interact with these, for entertainment or for other specific purposes (e.g., customer support). Pioneering companies like Soul Machines and ReimagineAI are among those at the forefront, crafting digital beings capable of mimicking human empathy and emotional intelligence. Recent generative models (such as OpenAI's Sora) demonstrate how easily lifelike video can be produced. It's hard not to extrapolate this to healthcare delivery.
Indeed, the collaborative model of human clinician + AI will likely dominate complex clinical scenarios and acute care for the foreseeable future. Similarly, surgical specialties may take their own innovation trajectory (though intake consultations and post-procedural follow-up could be targets here). It is in areas such as primary care and most outpatient specialties (particularly those that have lent themselves well to virtual delivery), however, that we may soon witness these digital entities functioning with levels of autonomy previously unimaginable. But can they truly be as empathic as a human being?
A case for artificial empathy
The narrative around AI in healthcare often hinges on efficiency and precision, yet the nuanced capacity for empathy has long been considered a distinctly human trait. However, as we edge closer to digital avatars that can mimic the subtleties of human emotion, the question arises: can AI truly understand and replicate the depth of human empathy? Its capacity to analyze and interpret vast datasets could enable it to offer highly personalized empathy, understanding a patient's history, cultural background, and emotional state more comprehensively than a human clinician might. This shift could redefine empathy in care, making it not just about human connection but about the depth of understanding and personalization that AI can bring to each patient interaction.
The inherent variability of human nature also presents a challenge to delivering consistent care. Clinicians, subject to the universal human conditions of mood swings, biases, and fatigue, may find the quality of their empathy wavering, inadvertently impacting patient care. In contrast, AI could promise an unwavering consistency in empathetic responses, free from these human vulnerabilities, which could be particularly transformative in managing chronic conditions that demand stable, long-term interactions.
Ethical and philosophical drawbacks
The core challenge lies not just in AI's ability to offer empathy, but in whether patients perceive that empathy as authentic - a cornerstone of the therapeutic relationship and crucial for impactful care.
This raises deep ethical and philosophical questions about AI's role in empathy: its authenticity, the implications for patient autonomy, and the broader societal view on replacing human interactions with machines. These considerations are vital, touching the essence of human care and the integrity of the therapeutic bond, and asking whether the source of empathy matters as much as the genuine feeling of being understood.
Then there is the question of patient and system preference. If all things were equal (access, quality, cost, experience), would you care whether you received your primary care from an AI family doctor, or would you always select a human one? Now how about if all things were not equal - say the AI family doc was available in real time, 24/7, while the human doctor could see you in 3 days? Or the AI option cost you (or your healthcare system) one-twentieth as much? Or, taking it even further, what if you felt more comfortable discussing sensitive health concerns with the AI family doctor, and found its communication consistent and its empathy always spot on?
Conversely, for today's physicians: would you have gone into medical school if you were told that 90% of your work would be communicating a treatment plan developed by a computer? Sure, it can be argued that I use hyperbole in these questions (which itself is debatable), but the point is that these are difficult, uncomfortable questions that we should be asking ourselves today.
Ultimately, my goal here is not to imply that all aspects of healthcare delivery, including empathy and communication, should be left to technology. Rather, I'm simply challenging the dogma that excellence in empathetic communication can only be achieved by a human clinician or healthcare team member. To avoid being blindsided by the maturation of a disruptive technology, or in the worst case made obsolete, I advocate a thorough dissection of the roles of all healthcare team members today, including physicians. By doing so, we position ourselves to navigate the evolving landscape more effectively, prioritizing patient interests above all and maintaining integrity in recognizing the true beneficiary of our service - the patient.