Consider the ramifications of AI in medicine, starting with the physician-patient consultation.

The particular example I use involves ChatGPT. While the program can’t replace the value of a medical professional, recent research suggests that it may have a better “bedside manner.”

An article in JAMA (Journal of the American Medical Association) Network describes a study that compared how ChatGPT and doctors responded to 200 patients’ questions posted on r/AskDocs, a subreddit (a category within Reddit) with about 474,000 members. Users submit medical questions, and anyone can answer, but verified healthcare professional volunteers include their credentials with their answers.

Professionals in pediatrics, internal medicine, oncology, and infectious diseases scored the human and bot answers on a five-point scale that measured the quality of information and empathy in the responses.

They rated the chatbot’s responses as good or better in quality 3.6 times as often as the doctors’ responses, and rated them as empathetic 9.8 times as often.

The length of the response may have influenced these ratings. An example cited in the study involved a person who feared going blind after splashing some bleach in the eye. ChatGPT delivered a seven-sentence response. A doctor wrote “Sounds like you’ll be fine” and included the phone number for Poison Control.

The clinicians involved in this study suggest further research into using AI assistants. They point out that this is already occurring to some extent, with clinicians relying on canned messages or having support staff respond. An AI-assisted approach could free staff for more involved tasks. They also believe that reviewing the AI responses could help both clinicians and staff improve their communication skills.

If the use of AI results in questions being answered quickly, to a high standard, and with empathy, people could avoid unnecessary visits to the doctor. That could ease the burden on those with mobility limitations, those unable to take time off from work for an appointment, and those facing high medical bills.

The authors of this report are candid about its limitations. They only considered the elements of empathy and quality in a general way. They didn’t evaluate patient assessments of the AI responses.

They also acknowledge ethical concerns, especially the accuracy of AI responses, including false and/or fabricated information.

In my view, they should have elaborated on this point. Artificial intelligence can never be more accurate or less partisan than the humans who supply the information it is built on.

In an evaluation separate from the r/AskDocs study, Dr. David Asch, a professor of medicine and senior vice dean at the University of Pennsylvania (where I earned my bachelor’s and master’s degrees in nursing), describes ChatGPT as, well, chatty. “It didn’t sound like someone talking to me. It sounded like someone trying to be very comprehensive.”

Researchers agree that an AI-generated diagnosis should always be backed up by human review. Imagine the legal issues that could arise from a chatbot misdiagnosis. Who would the plaintiff sue? The bot? Obviously, this would be impossible, but what about the designers? Would the supervising medical professional be held liable?

Another issue is built-in bias. If you have predominantly male and white people programming AI, you will end up with biased results.

The New York Times addressed this issue in an article describing multiple racial problems with AI, including facial-recognition systems that failed to identify Black faces.

In other studies, researchers found that AI seemed programmed principally to understand “whitespeak.” It gave less coherent answers to questions from Black people.

This poses additional questions about AI and medicine. The U.S. has many residents who speak English as a second language. Imagine the difficulty an AI program would have understanding them.

Given increasing data about the medical system’s discriminatory treatment of people of color, the certainty that these inequities will extend to artificial intelligence programming is cause for concern.

Another issue that needs exploring is how people would feel if they knew they were talking to a bot and not a human. I saw no evidence in the JAMA article that questioners knew whether a bot or a human answered them.

AI and Medicine: Examples of Current Usage

Artificial intelligence (AI)-powered programs are increasingly taking over various medical testing functions. There are already legal cases in which the reliability of a pulse oximeter or other measurement device came into question. The influx of AI into medicine, while it may generate more accurate and certainly faster results, also carries more potential for erroneous readings.

I’ve listed here some of the most prominent programs currently in use, mainly to give you a picture of how much AI technologies have become part of medical testing and of the range of functions AI performs. You can be sure that there will be many additions to these.

This is a good time to mention again the “Garbage In, Garbage Out” (GIGO) truism. The output can never be better than the data that goes in.

Arterys: This company has designed a product that reduces the time required for a cardiac scan from an hour to six to ten minutes. It obtains data about heart anatomy, blood-flow rate, and blood-flow direction.

Enlitic: This program analyzes radiological scans up to 10,000 times faster than a radiologist, and its designers claim that it is 50% faster at classifying tumors, with a zero percent error rate.

K’Watch Glucose: This product provides continuous glucose monitoring.

Qardio: This product provides a wireless ECG. The company claims that a person with limited medical knowledge can easily use it. It requires the use of a smartphone.

Sentrian: This product can monitor blood sugar or other chronic disease measurements. Presumably, it allows its user to use the data to anticipate a problem: Sentrian recommends changes in patient medications and behavior to prevent a medical crisis. This reduces hospitalizations, which in turn reduces medical costs.

You can foresee that there will be many more applications of AI in medicine. Will we reach the point where your next doctor is an AI that diagnoses and treats you?

How would you feel about being answered by AI? Would you trust it? Would you feel cheated if you did not get a human response? If you’d like to post your answer here, I’d love to read it.

Pat Iyer MSN RN LNCC is a consultant, speaker, author, editor, and coach. She has written or edited over 60 of her own books and worked with a few dozen authors. Pat is an Amazon international #1 bestselling author. Coaches, consultants, and speakers hire Pat to help release the knowledge inside them so that they can attract their ideal clients.

She delights in assisting people to share their expertise by writing. Pat serves international and national experts as an editor, book coach, and a medical and business writer.

Her profile picture is AI-generated from a photo.