The cons of AI

Artificial intelligence: Why it doesn’t belong in medicine

With the rise of artificial intelligence, health care professionals will inevitably encounter it in their work. Deanah Jibril, DO, MS, MBA, discusses why she believes health care professionals should limit their use of AI.

Editor’s note: Read this article’s companion piece, which is in favor of AI in medicine, here.

Yes, Dr. Google does work here. The newest wave of computerized intelligence has entered medicine in the form of artificial intelligence ancillary services such as ChatGPT. There is a push in corporate medicine toward greater reliance on these information-gathering databases and toward using their powerful information technology in real-time patient care. We already use decision trees and other algorithms to enhance our best-practice strategies, so why is ChatGPT so dangerous in the practice of medicine?

Patient encounters

We know that evidence-based medicine is the best way forward amid the confusing and myriad pieces of information that we must process as clinicians. A repository of clinical knowledge at our fingertips would seem to be the answer to clear medical decision-making for the patient. Modern medicine has been evidence-based for some time, yet morbidity and mortality outcomes still have room for improvement. While dissemination of new information about new practice patterns can be slow, handheld applications on our phones have resolved much of this. What is missing is the value of the patient interview in clinical diagnosis.

Recent CPT coding changes in the CMS guidelines were outlined this year; these guidelines place increased value on the history of present illness (HPI) in coding patient encounters. This suggests that evaluating background issues may be the heart of where we need to focus, rather than emphasizing labs and test results.

Numerous obstacles

In addition, there are many social, ethical and moral challenges that have not been addressed in the landscape of using artificial intelligence in everyday practice. A few examples include cultural barriers and bias, which already worsen health outcomes, as demonstrated by multiple studies across many specialties. Culturally competent care is the ability to deliver effective health care that meets the social, cultural and language needs of a patient.

Language barriers are already difficult to overcome, even with online translators; now we also have a machine-language barrier to contend with in translation. The “art of medicine” is a much referred-to combination of empathetic listening and response. It is often called a “sixth sense,” as a skilled physician picks up on intuition and nonverbal cues.

One of the concerns about using generative AI such as ChatGPT to answer informational health questions is this technology’s well-known tendency to confabulate (or “hallucinate,” as many call it) answers. AI systems can simulate humans at the general tasks of speaking and writing, blurring the distinction between authentic and fabricated content. AI can provide predictions and patterns, but not common sense.

Limited abilities

In addition, AI is limited by our ability to input accurate information. Artificial intelligence applications such as ChatGPT have been trained on massive volumes of internet data from books, articles and Wikipedia, as well as sources focused on medicine and health care. Not all of these published materials are peer-reviewed, and the material can be taken out of context.

Because it is not possible to input every factor involved in decision-making, AI-driven decisions are inherently incomplete. For example, psychosocial impact cannot be quantified. Visual cues and experience are valuable in treating patients. Sensitive subjects like sexuality are difficult to elucidate on questionnaires, and photos of body parts and genitalia may cross privacy lines.

One can plot out chess moves within a finite number of outcomes. Working with patients is a different story: they add new variables and parameters that are constantly evolving, so the target keeps moving. Because of this, using AI to assist in patient care may result in a delay in treatment and/or diagnosis.

A missed diagnosis creates a liability issue that is unlikely to be resolved. Surgical interventions may be increased or decreased based on the algorithm, with the time-sensitivity of decisions being a further concern. Patient autonomy can be decreased when fewer treatment options are offered and complex psychosocial choices go unaddressed. Oversimplified diagnosis can also waste resources if extra labs and medications are used, and extra visits may be needed to solve the problem. While not every decision is complex in its medical facts, we all have cases where we wish it could be that simple. Including the correct cultural stakeholders in treatment is a key element of the final plan.

Safety concerns

In a recent study published in JAMA Internal Medicine, and in a separate HealthTap evaluation, health care questions were answered by both physicians and ChatGPT. The JAMA study revealed that answers written by ChatGPT were rated as higher quality and more empathetic than answers authored by doctors on the online forum Reddit r/AskDocs.

The answers in this study were written by doctors who were credentialed only through online verification, so the quality and accuracy of that verification are unknown. Of the 90 questions evaluated, ChatGPT answers were more often graded as “Great” or “Good” than the doctors’ answers (75% vs. 62%), and ChatGPT answers were only slightly more often marked as containing errors or inaccuracies (9.3% vs. 8.5%).

However, the JAMA study did not evaluate whether any information in the answers was misleading. It seems that ChatGPT is able to provide detailed, helpful answers to most consumer health questions, and that if such answers were reviewed and accepted by doctors, they could improve the helpfulness of doctors’ answers. Still, this is a simplistic application that does not support complex medical decision-making by AI.

Cybersecurity threats are also a developing issue for medical AI. Advanced AI systems continue to evolve without shared safety protocols or risk management systems. As reported by Bleeping Computer, developers and trainers have recommended a six-month moratorium to allow conversations about the evolution of this technology. During this pause, AI development teams would have the chance to come together and agree on safety protocols, which would then be used for adherence audits performed by external, independent experts. Based on these reports, AI can be a dangerous threat to modern medicine in the form of increasingly sophisticated cyber threats and questions of accuracy. It is an evolving issue that is sure to spark controversy until the best uses of the technology gain acceptance.

We need consumer confidence in all our modalities of treating patients, and this includes accurate advice based on comorbidities. While accessing medical care at the corner store seems easy, we are looking at basic care that costs less and is not suitable for complex issues. There is already enough blurring of the lines between clinician types, and now we have added AI. If AI, why not wholly over-the-counter medicine? Thanks to convenience, there really is a Dr. Google. The real question is, who bears the brunt of the failures when one-size-fits-all medicine does not work? The COVID-19 pandemic should have taught us that skilled physicians need to be at the forefront of medicine.

Editor’s note: The views expressed in this article are the author’s own and do not necessarily represent the views of The DO or the AOA.

Related reading

The pros of artificial intelligence in health care

The doctor will video chat with you now: Perspectives on telehealth
