Generative AI improves readability of patient education materials

The team tested two generative AI tools – GPT-4 and Google Gemini – to see how well they could make radiology-related materials more patient-friendly. They selected seven recent brochures from a large radiology practice for the large language models to reword, and assessed the brochures' readability before and after adaptation.

Three radiologists reviewed the rewritten materials for appropriateness, relevance, and clarity. The original brochures were written at an average grade reading level of 11.72.
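Grade reading levels like those reported here are typically computed with a readability formula such as Flesch-Kincaid; the article does not say which metric the study used, so the sketch below is purely illustrative, using a rough heuristic syllable counter rather than a dictionary-based one.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups; subtract a trailing silent 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Short, plain-vocabulary sentences score at low grade levels, while long sentences with polysyllabic medical terms push the score toward the 11th–12th-grade range reported for the original brochures.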

ChatGPT reduced word count by around 15%, with 95% of brochures retaining at least 75% of their essential information; Gemini cut word count further, by 33%, but retained at least 75% of the required content in only 68% of brochures.

Both LLMs were able to adapt the materials to meet an average sixth- and seventh-grade reading level. ChatGPT outperformed Gemini in appropriateness (95% vs. 57%), clarity (92% vs. 67%), and relevance (95% vs. 76%). Inter-rater agreement was also significantly better for ChatGPT.

“Unlike traditional methods that require manual rewriting, our AI-driven approach may provide a more efficient and scalable solution,” the authors suggested. “Although limitations exist, the potential benefits for patient education and health outcomes are significant, warranting further investigation and integration of these AI tools in radiology and potentially across different medical specialties.”

The group added that further refinement of the models, particularly for specialized informative medical materials, could improve the LLMs' performance.

By Olivia
