
ChatGPT's medical prowess: Is the AI chatbot the new doctor?

According to a Mass General Brigham news release, the study, published in the Journal of Medical Internet Research, found that ChatGPT was around 72% accurate in clinical decision-making from the first point of contact with a patient


Person using ChatGPT on his laptop

According to research at Mass General Brigham in Boston, ChatGPT showed a diagnostic success rate comparable to that of a newly graduated medical student when given case studies.
The study, published in the Journal of Medical Internet Research, found that ChatGPT was approximately 72% accurate in overall clinical decision-making, "from developing potential diagnoses to making final diagnoses and care management decisions," according to a Mass General Brigham news release, and 77% accurate in making a final diagnosis.
According to Dr. Marc Succi, the study's corresponding author, the researchers evaluated how well ChatGPT could provide decision-making support from the initial interaction with a patient through running tests, diagnosing the illness, and monitoring care.
The release said the study was built on the idea that AI could be part of a patient's initial evaluation, suggest which tests or screens to run, work out a treatment plan, and make a final diagnosis.
The researchers began by pasting "successive parts of 36 standardized, published clinical vignettes into ChatGPT." The AI was first asked to generate a list of possible diagnoses based on the patient's initial information, which included age, gender, symptoms, and whether the case was a medical emergency. It was then given additional information and asked to make care management decisions as well as a final diagnosis, "simulating the entire process of seeing a real patient." ChatGPT's performance was assessed in a blinded procedure.
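As a rough illustration only, and not the researchers' actual code, prompts, or vignettes, the stepwise workflow described above could be sketched in Python against the OpenAI chat API roughly as follows; the vignette text, prompts, and model name here are all illustrative assumptions.

```python
# Hypothetical sketch: feed successive parts of a clinical vignette to ChatGPT,
# asking first for a differential diagnosis, then for tests, then for a final
# diagnosis and care management plan. Vignette content and prompts are invented
# for illustration and are not the study's materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vignette_sections = [
    "54-year-old woman presenting with chest pain radiating to the left arm...",
    "Vitals: BP 150/95, HR 102. ECG shows ST-segment depression in V4-V6...",
    "Troponin elevated at 0.8 ng/mL, rising to 1.4 ng/mL on repeat...",
]

messages = [{"role": "system",
             "content": "You are assisting with clinical decision-making."}]

for i, section in enumerate(vignette_sections):
    messages.append({"role": "user", "content": section})
    if i == 0:
        task = "List the most likely differential diagnoses."
    elif i < len(vignette_sections) - 1:
        task = "Which tests or screens would you order next?"
    else:
        task = "Give a final diagnosis and a care management plan."
    messages.append({"role": "user", "content": task})

    response = client.chat.completions.create(model="gpt-3.5-turbo",
                                              messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"--- Step {i + 1} ---\n{answer}\n")
```

In the actual study, each stage's output was then scored against the published vignette answers in a blinded review rather than printed to a console.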
The AI performed best on final diagnosis and worst on differential diagnosis, where it scored around 60%. It achieved 68% on clinical management decisions, such as determining which medications to prescribe after reaching the correct diagnosis. To the researchers' surprise, ChatGPT showed no gender bias and performed equally well in primary care and emergency care cases.
The researchers' next study will examine whether AI can relieve some of the strain on hospital systems in "resource-constrained areas" and improve patient care and outcomes.
 
