The rise of AI

The pros of artificial intelligence in health care

David O. Shumway, DO, discusses the positive elements of AI and how he looks forward to using it in the future.

Editor’s note: Read this article’s companion piece, which opposes the widespread use of AI in medicine, here.

As a resident physician, I’ve witnessed the challenges of staying abreast of the ever-expanding medical knowledge base while trying to master the fundamentals. From new treatments to emerging diseases, to finding enough time to eat and sleep during training, it’s hard to keep up with everything.

That’s why I’m enthusiastic about the prospects of large language model artificial intelligence (e.g., Bard, ChatGPT) for medicine, both as a resident and in the future when I get out into practice. Large language models (LLMs) are artificial intelligence (AI) tools that have been trained on vast amounts of data to comprehend and produce human-like language. This makes them incredibly valuable in clinical practice, and a hot topic of discussion at every level, from the workroom all the way up to the pages of JAMA.

In my opinion, the LLMs that are currently available are very, very good at two things: composition and communication. Your first experience working with them will feel like using a search engine you can have a dialogue with. What makes AI different is that even the most accessible models (as of this writing, the majority of LLMs are still free) are trainable and can be taught to complete tasks. As a proof of concept, I had ChatGPT analyze several of my other articles online and attempt to copy my writing style. In fact, the first draft of this article was written by ChatGPT, and I only needed to make some small edits and fact-check it (more on that later). I love writing, but this was a huge time-saver!

Here are just a few other potential applications to medicine that make me really excited about AI:

Documentation and paperwork reduction

Poll any physician about what their favorite and least favorite parts of the job are, and I guarantee you will get a very similar response: We love patient care, but we hate the mountains of documentation and paperwork associated with it. This is the first and most obvious application of AI.

Acting as a medical scribe, LLM AI has shown a remarkable ability to produce large amounts of text accurately and quickly, streamlining the process of documenting patient encounters. This can save time and improve the accuracy and completeness of medical records.

Importantly, AI is trainable to mimic your own writing style. While using an LLM, you can feed it examples of your writing so it can analyze and learn from them. It doesn’t stop at clinical notes either. Physicians have already reported using LLMs to reduce the documentation burden of practice in myriad ways, from pre-authorizations and grant applications to clinic letters, patient handouts and other materials.

Here is a way that I am using LLM AI in my documentation today:

I will take some brief notes during my history and exam, then dictate using voice recognition as many details as I think are important about the patient. Then I will dictate the patient’s problem and my basic plan. The LLM I’m using right now, OpenAI’s ChatGPT (on GPT-4), has been trained on samples of my progress notes to sound like me. Like a scribe, medical student, or intern, ChatGPT will then document my jotted ramblings as a structured, polished SOAP note and give it back to me to review.
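For the technically curious, here is a minimal sketch of that same pattern written against an LLM API (here, the OpenAI Python library). To be clear, this is an illustration rather than my actual setup, which runs through the ChatGPT interface itself: the model name, style samples and dictation text below are placeholders, and any real notes would need to be de-identified.

```python
# Minimal sketch: turning dictated findings into a SOAP-note draft with an LLM.
# Assumes the OpenAI Python SDK and an API key; the model, style samples, and
# dictation text are placeholders, not the author's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A few de-identified notes in the physician's own voice, used to teach the
# model the preferred style and formatting (the "train it to sound like me" step).
STYLE_SAMPLES = """
<paste two or three of your own de-identified progress notes here>
"""

def draft_soap_note(dictation: str) -> str:
    """Return a structured SOAP-note draft from free-form dictated findings."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a medical scribe. Rewrite the physician's dictated "
                    "findings as a structured SOAP note (Subjective, Objective, "
                    "Assessment, Plan), matching the style of these samples:\n"
                    + STYLE_SAMPLES
                ),
            },
            {"role": "user", "content": dictation},
        ],
    )
    return response.choices[0].message.content

# The physician still reviews and edits the draft before signing.
print(draft_soap_note("55 yo M, follow-up for hypertension, BP 148/92 today..."))
```

Even in a sketch like this, the output is only a draft: just as with a scribe’s or a student’s note, I review and edit everything the AI produces before it goes into the chart.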

The technology to take this further already exists. While I was attending an internal medicine conference in San Diego recently, I participated in a demonstration of one voice recognition company’s generative AI medical scribe solution, which used a smartphone app that listened to a short, simulated conversation between a physician and patient and then created an entire outpatient clinic note with very little prompting from the physician. Imagine being able to document an encounter in seconds! 

Clinical reasoning, evidence integration, research

Excitingly, an enormous advantage LLM AI has over a human scribe (in addition to always finding my jokes funny and not needing appreciation lattes) is that it has access to the entire world’s collected medical knowledge. For instance, it was widely reported that the GPT-4 LLM recently passed the USMLE. This kind of clinical reasoning can be easily incorporated into your AI-drafted progress notes, and a well-trained LLM can cite relevant studies in your assessment and plan to help you practice evidence-based medicine effortlessly.

I’ll stop short of saying the LLMs I’ve used exceed the utility of curated resources such as UpToDate or DynaMed, and they don’t have enough real patient experience to match the advice of a master clinician. However, I can envision a future application in which an LLM acts as a pocket “curbside” consult or an on-demand expert to help make difficult medical decisions.

For those who are academically inclined but short on time, LLM AI can also greatly speed up and improve research composition and publication in a number of ways: with the right prompts, AI can create tables and figures, run statistical tests on data sets, perform literature reviews and summarize articles.

ChatGPT itself says proudly that it can “expedite medical research by analyzing large volumes of medical data and identifying patterns and trends that might otherwise be missed.” I recently published a case report with ChatGPT as part of an effort to create some of the first AI-produced peer-reviewed research and found the LLM AI to function at least at the level of a human undergraduate or medical student.

One word of caution: I did find ChatGPT to sometimes get a bit creative when asked to produce a specific source to support a general statement. Always verify those sources yourself and click that DOI.

Patient education and interaction

LLM AI can enhance patient education by providing understandable information about medical treatments, helping patients to become more engaged in their health care and leading to better outcomes.

More importantly, if structured like a chatbot, the ability to create fluent, natural speech that passes the Turing Test presents an opportunity to use an LLM AI as a patient interaction tool. ChatGPT, for instance, has been used for online mental health counseling, with AI-generated answers being virtually indistinguishable from those provided by human therapists. ChatGPT has also already been deployed for direct patient interaction in medicine: in one study, researchers compared human and LLM AI answers to patient questions on a public social media site, and the chatbot-generated answers were rated higher in both quality and empathy.

Combined with the ability to train LLM AI as discussed earlier, it is easy to imagine an AI acting as your representative to respond to patient questions and queries on your behalf (with final editorial approval from you, of course) via email or the patient portal. For many physicians, this work is time-consuming and largely uncompensated. I see significant potential for improving workload and work/life balance with this amazing new technology.

While more research is needed on the use of large language models in clinical practice, it’s clear that they can be an incredibly beneficial tool for health care providers. However, these models should never replace the expertise and experience of a licensed health care provider. Instead, they should be used as an aid to support and enhance clinical decision-making.

I urge health care providers to explore the possibilities of large language models in clinical practice and to stay informed on the latest research in this field. As with any new technology, it’s crucial to approach it with a critical eye and to carefully evaluate the potential benefits and risks. With judicious use, large language models hold tremendous potential for advancing medical research and improving patient outcomes.

Editor’s note: The views expressed in this article are the author’s own and do not necessarily represent the views of The DO or the AOA.

Related reading:

Artificial intelligence: Why it doesn’t belong in medicine

The doctor will video chat with you now: Perspectives on telehealth

Can doctors on social media help lead to less misinformation? These TikTok DOs hope so
