
Ethical considerations regarding AI use in healthcare: How much is too much?

AI has become ubiquitous in healthcare. Curtiss Johnson, DO, discusses the pros and cons of AI use in the medical setting and how it may evolve going forward.


The 21st-century evolution of computer science invites certain ethical questions, such as “can properly trained artificial intelligence (AI) actually pass the bar exam?” (yes, it can) or “could a military supercomputer named Joshua accidentally start a global thermonuclear war?” (as in the 1983 cult classic film “WarGames”). Fast forward to today, and AI and large language models (LLMs) have gone mainstream. From industry giants like Amazon and Google to medical students using ChatGPT to create flashcards, practice questions and SOAP notes, AI is now part of everyday life, reshaping how we work, learn and interact.

Given the vast source material on which these models have been trained and ever-increasing financing and software development capabilities, AI has become ubiquitous in many industries, including healthcare. In 2023, the electronic medical record (EMR) company Epic announced it would begin piloting an integrated generative AI platform within its software. Since then, hospitals worldwide have adopted the platform for tasks such as writing patient summaries and providing real-time dictation, daily duties that are often tedious and time-consuming for busy clinicians. It is therefore easy to see the appeal of large-scale adoption of AI models in healthcare systems: it has the potential to boost productivity, minimize errors and improve patient outcomes.

But given the exponential growth of these models and their near human-like levels of intelligence and reasoning, they also raise several logistical and ethical concerns. Or, as Joshua, the aforementioned supercomputer, ultimately concludes: is the only winning move not to play?

Benefits of AI and LLMs in complex healthcare systems

“The ethical and appropriate use of AI in the medical field is a pertinent and timely discussion,” said Casey Schukow, DO, a resident physician at Corewell Health William Beaumont University Hospital. “As a physician who uses AI for academic writing and patient care purposes, I believe its emergence is an unfortunate necessity, given the current healthcare climate. The U.S. physician shortage has increased patient-to-physician ratios, overwhelming an already burdened workforce. This is due, in part, to declining reimbursement rates, which force physicians to see more patients, perform more procedures and work longer hours with fewer resources.

“The corporatization of healthcare further complicates this by limiting competition and mandating productivity and compensation. Outpatient settings often result in rushed 15-minute appointments, barely enough time for proper examinations and documentation, excluding inbox and administrative tasks.”

The risk of error

Besides the ability to relegate boredom-inducing tasks such as discharge summaries and progress notes to a much faster computer companion, AI has the potential to transform how patient care is practiced and studied. As life expectancy increases, today’s patients are more complex; many have multiple chronic conditions and medication regimens that can rival the length of a CVS receipt. This level of complexity, amid the physician shortage and high case volumes, poses a risk of human error. Medical errors are a recognized source of significant patient harm in the United States. Although it is difficult to accurately measure this burden, estimates from observational studies indicate that preventable adverse events in hospital settings may contribute to up to 210,000 deaths each year.

“As a resident pathologist, the increasing complexity in surgical pathology poses similar challenges,” said Dr. Schukow. “Evolving diagnostic categories are difficult to keep up with, and the expectation to do more with less is pervasive. Physician retention suffers as a result. Medicine, like any other profession, needs sustainable conditions.”

Faced with these realities of modern healthcare, AI has entered the chat (literally). As previously mentioned, some EMRs are now equipped with predictive AI capabilities, opening the door to new safety protocols and algorithmic approaches to complicated disease states. For example, a 2024 JAMA study suggested that a properly trained and supervised AI model within an EMR can help predict and detect sepsis, pressure ulcers and medication errors, among other problems. Over time, these models can and likely will be improved upon, creating a potential boon for other healthcare arenas such as public health, epidemiology and infectious disease prevention.
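To make the idea concrete, here is a minimal, purely illustrative sketch of how a predictive model might flag sepsis risk from a handful of vital signs and lab values pulled from an EMR. The features, synthetic data and simple logistic regression are assumptions chosen for illustration; they are not the model described in the JAMA study, and any real clinical tool would require rigorous validation, regulatory review and physician oversight.

```python
# Illustrative only: a toy sepsis-risk flag trained on synthetic "EMR" data.
# Feature names, coefficients and thresholds are hypothetical; real clinical
# models require rigorous validation and physician oversight.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features: heart rate, respiratory rate, temperature, WBC count, lactate.
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(18, 4, n),     # respiratory rate (breaths/min)
    rng.normal(37.2, 0.8, n), # temperature (deg C)
    rng.normal(9, 3, n),      # white blood cell count (10^9/L)
    rng.normal(1.5, 0.8, n),  # lactate (mmol/L)
])

# Synthetic label: in this toy dataset, higher values loosely correlate with "sepsis."
risk = 0.04 * X[:, 0] + 0.12 * X[:, 1] + 0.8 * X[:, 3] + 1.5 * X[:, 4]
y = (risk + rng.normal(0, 1.5, n) > np.percentile(risk, 85)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# A deployment would surface high-risk patients for clinician review,
# not take automated action on its own.
probs = model.predict_proba(X_test)[:, 1]
print(f"Test accuracy: {model.score(X_test, y_test):.2f}; patients flagged: {(probs > 0.5).sum()}")
```

In practice, a model like this would surface high-risk patients for clinician review rather than act autonomously, which is where the “properly trained and supervised” caveat matters.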

“AI can offer solutions,” Dr. Schukow continues. “Voice-to-text devices can transcribe patient conversations and HIPAA-compliant chatbots can assist with inbox management. Chatbots can also help create templates for academic work, but plagiarism awareness is crucial. Original content should always be prioritized before using AI tools for refinement. AI tools are already enhancing pathologists’ workflows and aiding in cancer detection. Used responsibly, it can be a valuable asset. By using AI for documentation, refining ideas and streamlining processes, we can adapt to modern medicine’s demands.”

Obviously, an AI model is no replacement for human interaction in medicine, but this raises the question: to what extent can (and should) we remove humans from the equation?

Workforce and privacy concerns

In pathology, there is increasing fearmongering about AI replacing jobs, a sentiment that has been echoed in other fields like radiology. While wholesale replacement likely will not happen in our lifetimes, the underlying concern remains valid. The image analysis and generative AI capabilities of modern software have grown to the point that AI can now be used in FDA-approved software that helps pathologists detect prostate cancer. While this likely falls into the previously mentioned category of “task relegation” rather than true replacement-worthy intelligence, the future workforce must advocate for the responsible use of AI in its daily workflow and continually emphasize the unique skill sets and intuition that make clinicians irreplaceable.

While the extension of human intelligence offered by AI and LLMs is certainly promising, it also raises privacy concerns for the patient data that AI models touch and the institutions that house them. This is certainly not the first (nor the last) time I will mention this, but AI and LLMs are, in fact, not human. In earlier iterations of ChatGPT, one could prompt it for instructions on making napalm out of household items or on how to die by suicide. Thankfully, more recent updates have implemented filters that help prevent it from responding to such prompts, but this remains a stark demonstration of how AI models are programmed: they “live” to serve the will of the humans who command them but have no real grasp of those humans’ health or well-being.

How, then, can we trust AI to safeguard our protected health information (PHI)? For starters, institutions must honor basic healthcare ethics principles such as patient autonomy and informed consent, and ensure proper encryption and anonymization of sensitive PHI. This, coupled with well-written laws and regulations, is the bare minimum for protecting patients in a new era of healthcare information technology.
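As one concrete (and deliberately simplified) example of what “anonymization” can mean in practice, the sketch below strips a few obvious identifiers from a clinical note before it is handed to an external AI service. The field names and regular expressions are hypothetical and cover only a fraction of the 18 HIPAA identifier categories; real de-identification requires far more than pattern matching.

```python
# Illustrative only: redact a few obvious identifiers from a note before sending it
# to an external AI service. Real de-identification must satisfy HIPAA Safe Harbor
# or expert determination and cover all 18 identifier categories, not this handful.
import re

PHI_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(note: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PHI_PATTERNS.items():
        note = pattern.sub(f"[{label} REDACTED]", note)
    return note

note = "Pt MRN# 00123456, seen 03/14/2024, call (555) 867-5309 with results."
print(redact(note))
# -> "Pt [MRN REDACTED], seen [DATE REDACTED], call [PHONE REDACTED] with results."
```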

Even the best-laid plans, however, are not always safe from human and nonhuman bad actors alike. Another study, from 2018, showed that a machine learning algorithm was able to re-identify over 85% of adults and nearly 70% of children from breached patient data that had been anonymized and stripped of PHI. Considering the capabilities of AI and LLMs in both well-intentioned and malicious hands, current safeguards, while adequate for now, will likely need to be reinforced with further research and legislation to protect the PHI patients entrust to healthcare organizations and the AI models they use.

Moving forward

AI and LLMs have the potential to revolutionize medical practice with algorithmic approaches to EMRs, enhanced productivity tools and improved data analysis capabilities that can promote more efficient and individualized patient care. However, barriers such as workforce management and privacy concerns may limit their widespread adoption. While many of the pros and cons raised here are still speculative, one truth is certain: AI in healthcare has arrived, and it is here to stay.

“The medical field is rapidly changing,” said Dr. Schukow. “AI allows us to keep up and remain professionally viable. Refusal to engage with AI may, ironically, increase the risk of physician displacement. It’s not that AI will replace us, but those who fail to adapt may be left behind.”

It is difficult to accept a reality in which the hallowed halls of medical schools and the most prestigious hospitals in the world have been infiltrated by an alien-like entity that can converse, reason and “think” like the expert clinicians inside them. But to bury our heads in the sand while AI and LLMs grow exponentially around us would be foolish. As healthcare professionals, we are tasked with deciding how to adapt our models of care and augment our intelligence to give AI a seat at the table in this brave new healthcare world.

Most importantly, we must remember that no matter how advanced AI becomes, it will never replace the healing touch or warm words of a trusted healthcare professional, the truly human qualities that contribute so meaningfully to patient care. Healthcare is always evolving, and AI is evolving right alongside it, but in the end, the best decisions we can make are the ones that are best for our patients.

Editor’s note: The views expressed in this article are the author’s own and do not necessarily represent the views of The DO or the AOA.

Related reading:

The pros of artificial intelligence in health care

Artificial intelligence: Why it doesn’t belong in medicine
