ChatGPT shows potential as a time-saving tool for answering patient inquiries

The groundbreaking chatbot ChatGPT offers potential as a time-saving tool for answering patient questions sent to urologists’ offices, according to a study in the September issue of Urology Practice®, an official journal of the American Urological Association (AUA). The journal is published in the Lippincott portfolio of Wolters Kluwer.

The artificial intelligence (AI) tool generated “acceptable” answers to nearly half of real patients’ questions, according to the new study by Dr. Michael Scott, a urologist at Stanford University School of Medicine. “Generative AI technologies can play a valuable role in providing quick and accurate answers to routine patient questions – potentially alleviating patient anxiety while freeing up clinic time and resources to complete other complex tasks,” commented Dr. Scott.

Can ChatGPT accurately answer questions from urology patients?

ChatGPT is an innovative Large Language Model (LLM) that has attracted interest in many fields, including health and medicine. In some recent studies, ChatGPT has performed well in answering various types of medical questions, although its performance in urology is less well established.

Modern electronic health record (EHR) systems allow patients to send medical questions directly to their doctor. “This shift is associated with increased time spent by physicians using the EHR, with a large portion of this attributable to messages in patients’ inboxes,” the researchers write. According to one study, each message in a physician’s inbox increases time spent using the EHR by more than two minutes.

Dr. Scott and his colleagues collected 100 electronic patient messages requesting medical advice from a urologist at a men’s clinic. The messages were categorized by content type and difficulty level and then entered into ChatGPT. Five experienced urologists rated each AI-generated response for accuracy, completeness, usefulness, and understandability. The raters also indicated whether they would send each response to a patient.
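For readers curious how patient messages might be submitted to ChatGPT programmatically, the sketch below shows one possible approach using the OpenAI Python SDK. The paper does not publish its exact prompts, model version, or tooling, so the system prompt, model name, and example message here are purely illustrative assumptions, not the study’s actual method.

```python
# Hypothetical sketch: drafting a reply to a de-identified patient message
# with the OpenAI Python SDK. Prompt wording, model choice, and the sample
# message are illustrative assumptions; the study does not report these details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_response(patient_message: str) -> str:
    """Ask the model for a draft reply that a clinician reviews before sending."""
    completion = client.chat.completions.create(
        model="gpt-4",  # assumption: the study's model version is unspecified
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft clear, patient-friendly replies to routine "
                    "urology questions for a clinician to review."
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return completion.choices[0].message.content


# Made-up, de-identified example message
print(draft_response("Is mild bruising normal three days after a vasectomy?"))
```

In a workflow like the one the study envisions, such drafts would always pass through clinician review before reaching a patient, consistent with the raters’ role in the paper.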

Results support “generative AI technology to improve clinical efficiency”

Responses generated by ChatGPT were rated as accurate (average 4.0 on a five-point scale) and understandable (average 4.7). Ratings for completeness and usefulness were lower, while the risk of harm was rated as low or nonexistent. Ratings were comparable across different types of question content (symptoms, postoperative concerns, etc.).

“Overall, 47% of responses were deemed acceptable to send to patients,” the researchers write. For questions classified as “easy,” the rate of acceptable responses was higher: 56%, compared to 34% for “difficult” questions.

“These results hold promise for the use of generative AI technology to improve clinical efficiency,” write Dr. Scott and co-authors. The findings “suggest the feasibility of integrating this new technology into clinical care to improve efficiency while maintaining the quality of patient communication.”

The researchers point out some potential drawbacks of ChatGPT-generated answers to patient questions: “ChatGPT’s model is generally trained using information from the Internet, not validated medical sources,” which “risks generating inaccurate or misleading answers.” The authors also stress the need for safeguards to protect patient privacy.

“While our study provides an interesting starting point, further research is needed to validate the use of LLMs to answer patient questions in both urology and other specialties. This will be a potentially valuable application in healthcare, especially given the ongoing advances in AI technology.”

Michael Scott, MD, urologist at Stanford University School of Medicine

Journal reference:

Scott, M., et al. (2024) Evaluation of artificial intelligence-generated responses to messages in the inbox of urological patients. Urology Practice. https://doi.org/10.1097/UPJ.0000000000000637.

By Olivia
