
ChatGPT matches doctors in medical diagnoses: report

AI 'doctors' could be a beneficial tool for the 'practice of medicine and clinical decision making,' according to a researcher


Person using ChatGPT on his mobile

According to research conducted at Mass General Brigham in Boston, ChatGPT, when provided with case studies, demonstrated a diagnostic success rate comparable to that of a recently graduated medical student.
According to the study's corresponding author, Dr. Marc Succi, researchers examined how ChatGPT could aid in decision-making from the initial patient encounter through the process of conducting assessments, diagnosing illness, and managing care.
Study co-author Adam Landman, chief information officer and senior vice president of digital at the health system, which has a large research arm and billions of dollars in funding, weighed in on AI's potential: "Mass General Brigham sees great promise for large language models to help improve care delivery and clinician experience."
Dr. Marc Succi, whose titles include associate chair of innovation and commercialization and strategic innovation leader at Mass General Brigham, said that ChatGPT "has the potential to be an augmenting tool for the practice of medicine and support clinical decision making with impressive accuracy."
The study, published in the Journal of Medical Internet Research, found that ChatGPT was approximately 72% accurate in overall clinical decision-making, "from generating possible diagnoses to making final diagnoses and care management decisions," according to a Mass General Brigham press release. It performed best at making final diagnoses, where it was 77% accurate. ChatGPT showed no gender bias and performed equally well in primary care and emergency care.
Doctors probably don't need to worry about AI making them obsolete - at least not yet. "ChatGPT struggled with differential diagnoses, which is the meat and potatoes of medicine when a physician has to figure out what to do," Succi said in a statement. "This is significant because it tells us where physicians are truly experts and delivering the greatest value: in the early stages of patient care, when there is little presenting information and a list of possible diagnoses is needed."
The researchers noted certain limitations, including "possible model hallucinations and the unclear composition of ChatGPT's training data set." A model hallucination is a confidently stated answer that is not supported by the evidence the model was given. In an upcoming study, the researchers will examine whether AI can relieve some of the strain on hospital systems in "resource-constrained areas" and thereby improve patient care and outcomes.
 
