Detect Diseases, Write Emotional Support Notes To Patients: How AI Is Making A Mark In Medicine

AI for healing: In the era of ChatGPT, how can AI chatbots help health care professionals, medical students, and patients? The scope is wider than previously perceived.

How AI-driven tools can change how hospitals approach diagnosis and healing (Representative Image)

Though still at a nascent stage, AI chatbots, specifically GPT-4 (the latest version of ChatGPT), can already do many things. They can write computer programs for processing and visualizing data, translate foreign languages, explain laboratory test results to readers unfamiliar with medical terminology, and, as it turns out, even write emotionally supportive notes to help heal sick patients.
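As a purely illustrative sketch of that last use, a clinician-facing tool might first assemble a prompt before sending it to a chatbot; the function name and prompt wording below are hypothetical, not taken from the NEJM study or any real product.

```python
# Hypothetical sketch: building the prompt a clinician's tool might send
# to an AI chatbot to draft an emotionally supportive note for a patient.
# The function name and prompt wording are illustrative assumptions.

def build_support_note_prompt(patient_name: str, diagnosis: str) -> str:
    """Assemble a chat prompt asking the model for a supportive note."""
    return (
        "You are assisting a clinician. Write a short, warm, emotionally "
        f"supportive note for a patient named {patient_name}, who has "
        f"recently been diagnosed with {diagnosis}. Avoid giving medical "
        "advice; focus on encouragement and on the next steps the care "
        "team will handle."
    )

prompt = build_support_note_prompt("Alex", "type 2 diabetes")
print(prompt)
```

In practice the resulting text would be passed to a chat model and the draft reviewed by the clinician before it ever reaches a patient.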

The growth of Artificial Intelligence has led to several breakthroughs in the field of medicine. AI models have been shown to assist in the detection of disease and the prediction of prognosis, which may help provide more personalized treatment recommendations and improve patient outcomes.

In a research paper titled “Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine,” published in the New England Journal of Medicine (NEJM), authors Peter Lee, Sebastien Bubeck, and Joseph Petro state that the uses of AI in medicine have been growing in many areas.

ChatGPT creator OpenAI, with support from Microsoft Research, has been studying the possible uses of GPT-4 in health care and medical applications for the past six months to better understand its fundamental capabilities, limitations, and risks to human health.

Several notable AI chatbots have been studied for medical applications, including LaMDA (from Google) and GPT-3.5, the predecessor system to GPT-4.

While LaMDA, GPT-3.5, and GPT-4 have been trained entirely on data obtained from open sources on the Internet, including publicly available medical texts and research papers, they are not trained on the restricted medical data found in a healthcare organization's electronic health record system, or on medical information that exists solely on the private network of a medical school or other similar organization. Still, these systems show varying degrees of competence in medical applications.

The authors say that even though GPT-4 was trained only on openly available information on the Internet, when it is given a battery of medical test questions, it answers correctly more than 90% of the time.

“The medical knowledge encoded in GPT-4 may be used for a variety of tasks in consultation, diagnosis, and education,” the NEJM article says.

“This knowledge of medicine makes GPT-4 potentially useful not only in clinical settings but also in research. GPT-4 can read medical research material and engage in an informed discussion about it, such as briefly summarizing the content, providing technical analysis, identifying relevant prior work, assessing the conclusions, and asking possible follow-up research questions,” the article adds.

In another article published by PubMed Central (PMC), the archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health's National Library of Medicine, the authors argue for embracing ChatGPT.

“AI models can assist in the detection of disease and the prediction of prognosis, which may help provide more personalized treatment recommendations and improve patient outcomes,” they say.

“The best course of action is to embrace it, use its capabilities to improve our lives, and foster mutually beneficial relationships by evolving it in clinical medicine,” the authors say.

The three authors of the article “The potential impact of ChatGPT in clinical and translational medicine,” Vivian Weiwen Xue, Pinggui Lei, and William C. Cho, cite the example of the diagnosis and treatment of mental illness, noting that it relies heavily on doctor–patient questionnaires, interviews, and judgement.

Therefore, they say, many interfering factors, such as the physician's tone of voice, mood, and the surrounding environment, can hinder an accurate assessment of the disease. “In this field, AI has been used to record and analyze data related to questionnaires. The emergence of ChatGPT may accelerate the integration of flexible questionnaires, documentation, diagnosis and follow‐up of patients with mental disorders using chatbots.”

In addition, they say, ChatGPT provides a basis for more flexible and efficient epidemiological research. “Indeed, epidemiological research also relies on efficient and reliable data collection, recording and analysis.”

ChatGPT not only solves the difficulty of conducting remote inquiries but also helps reduce the labour required to complete the work. Moreover, ChatGPT is usually more accurate and faster than manual statistics and record-keeping.

ChatGPT, however, has also caused some controversy in medical writing, as it has been used to write abstracts, introductions, and even the main text of assignments and articles. But, the authors say, it may not do well with the creative side of medical writing, since ChatGPT often summarizes previous research and data to form abstracts or background knowledge.

Some scientific journals have said that ChatGPT-generated content in articles must be explicitly disclosed, while Nature refused to accept ChatGPT as an author because it cannot take responsibility for the content it generates.

But the authors conclude that AI models, including ChatGPT, can help the healthcare industry by providing a more objective and evidence-based approach to decision-making, reducing the risk of human error thanks to their unparalleled speed of information processing.