
ChatGPT Performs Well as 'Partner' in Diagnosing Patients
  • Posted December 12, 2023

Doctors are skilled decision-makers, but even the smartest physicians might be well-served by a little diagnostic help from ChatGPT, a new study suggests.

The main benefit comes from a thinking process known as "probabilistic reasoning" -- knowing the odds that something will (or won't) happen.

"Humans struggle with probabilistic reasoning, the practice of making decisions based on calculating odds," explained study lead author Dr. Adam Rodman, of Beth Israel Deaconess Medical Center in Boston.

"Probabilistic reasoning is one of several components of making a diagnosis, which is an incredibly complex process that uses a variety of different cognitive strategies," he explained in a Beth Israel news release. "We chose to evaluate probabilistic reasoning in isolation, because it is a well-known area where humans could use support."

The Beth Israel team utilized data from a previously published survey of 550 health care practitioners. All had been asked to perform probabilistic reasoning to diagnose five separate medical cases.

In the new study, however, Rodman's team gave the same five cases to the chatbot's underlying large language model (LLM), GPT-4.

The cases included information from common medical tests, such as a chest scan for pneumonia, a mammogram for breast cancer, a stress test for coronary artery disease and a urine culture for a urinary tract infection.

Based on that info, the chatbot used its own probabilistic reasoning to reassess the likelihood of various patient diagnoses.
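To make the idea of "reassessing the likelihood" concrete, this kind of probabilistic reasoning is typically an application of Bayes' theorem: a pretest probability is updated by a test's sensitivity and specificity to yield a post-test probability. The sketch below is purely illustrative; the numbers are hypothetical and are not taken from the study's cases.

```python
def post_test_probability(pretest, sensitivity, specificity, positive):
    """Update the probability of disease after a test result via Bayes' theorem."""
    if positive:
        p_result_if_disease = sensitivity          # true-positive rate
        p_result_if_healthy = 1 - specificity      # false-positive rate
    else:
        p_result_if_disease = 1 - sensitivity      # false-negative rate
        p_result_if_healthy = specificity          # true-negative rate
    numerator = p_result_if_disease * pretest
    return numerator / (numerator + p_result_if_healthy * (1 - pretest))

# Hypothetical example: 20% pretest probability of pneumonia, and a chest
# X-ray assumed to have 90% sensitivity and 80% specificity.
print(round(post_test_probability(0.20, 0.90, 0.80, positive=True), 3))   # 0.529
print(round(post_test_probability(0.20, 0.90, 0.80, positive=False), 3))  # 0.03
```

Note how sharply the probability falls after a negative result (from 20% to about 3%) — the kind of update the study found clinicians tend to under-correct for, leading them to overestimate risk after negative tests.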

Of the five cases, the chatbot was more accurate than the human doctor for two; similarly accurate for another two; and less accurate for one. The researchers considered this a "draw" when comparing humans to the chatbot for medical diagnoses.

But the ChatGPT-4 chatbot excelled when a patient's tests came back negative (rather than positive), becoming more accurate at diagnosis than the doctors in all five cases.

"Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to over-treatment, more tests and too many medications," Rodman pointed out. He's an internal medicine physician and investigator in the department of medicine at Beth Israel.

The study was published Dec. 11 in JAMA Network Open.

It's possible then that doctors may someday work in tandem with AI to become even more accurate in patient diagnosis, the researchers said.

Rodman called that prospect "exciting."

"Even if imperfect, their [chatbots'] ease of use and ability to be integrated into clinical workflows could theoretically make humans make better decisions," he said. "Future research into collective human and artificial intelligence is sorely needed."

More information

Find out more about AI and medicine at Harvard University.

SOURCE: Beth Israel Deaconess Medical Center, news release, Dec. 11, 2023

HealthDay
Health News is provided as a service to The Medicine Shoppe #503 site users by HealthDay. Neither The Medicine Shoppe #503 nor its employees, agents, or contractors review, control, or take responsibility for the content of these articles. Please seek medical advice directly from your pharmacist or physician.
Copyright © 2024 HealthDay All Rights Reserved.