Assessing the Accuracy and Readability of Large Language Model Guidance for Patients on Breast Cancer Surgery Preparation and Recovery

Lando, Stefania; Cagol, Matteo; Gregori, Dario; Lorenzoni, Giulia
2025

Abstract

Background/Objectives: Accurate and accessible perioperative health information empowers patients and enhances recovery outcomes. Artificial intelligence tools, such as ChatGPT, have garnered attention for their potential in health communication. This study evaluates the accuracy and readability of ChatGPT's responses to commonly asked questions about breast cancer surgery. Methods: Fifteen simulated patient queries about breast cancer surgery preparation and recovery were prepared. Responses generated by ChatGPT (4o version) were evaluated for accuracy by a pool of breast surgeons using a 4-point Likert scale. Readability was assessed with the Flesch–Kincaid Grade Level (FKGL). Descriptive statistics were used to summarize the findings. Results: Of the 15 responses evaluated, 11 were rated as "accurate and comprehensive", while the remaining 4 were deemed "correct but incomplete". No responses were classified as "partially incorrect" or "completely incorrect". The median FKGL score was 11.2, indicating a high school reading level. While most responses were technically accurate, the complexity of the language exceeded the recommended readability levels for patient-directed materials. Conclusions: The model shows potential as a complementary resource for patient education in breast cancer surgery, but should not replace direct interaction with healthcare providers. Future research should focus on enhancing language models' ability to generate accessible and patient-friendly content.
File: jcm-14-05411.pdf (Adobe PDF, 299.18 kB) — open access; Published (Publisher's Version of Record); License: Creative Commons
Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3561581