Evaluation of the Current Status of Artificial Intelligence for Endourology Patient Education: A Blind Comparison of ChatGPT and Google Bard against Traditional Information Resources.

Introduction: Artificial intelligence (AI) platforms such as ChatGPT and Bard are increasingly used to answer patient healthcare questions. We present the first study to blindly evaluate AI-generated responses to common endourology patient questions against official patient education materials.

Methods: Thirty-two questions and answers spanning kidney stones, ureteral stents, benign prostatic hyperplasia (BPH), and upper tract urothelial carcinoma (UTUC) were extracted from official Urology Care Foundation (UCF) patient education documents. The same questions were input into ChatGPT 4.0 and Bard, with responses limited to within ±10% of the word count of the corresponding UCF response to ensure a fair comparison. Six endourologists blindly evaluated the responses from each platform using Likert scales for accuracy, clarity, comprehensiveness, and patient utility, and identified which response they believed was not AI-generated. Lastly, the Flesch-Kincaid Reading Grade Level formula was used to assess the readability of each platform's responses. Ratings were compared using ANOVA and chi-square tests.

Results: ChatGPT responses were rated highest across all categories (accuracy, comprehensiveness, clarity, and patient utility), while UCF answers were consistently scored lowest (all p<0.01). Sub-analysis revealed that this trend was consistent across question categories (e.g., kidney stones, BPH). However, AI-generated responses were more likely to be written at an advanced reading level, whereas UCF responses showed better readability (college or higher reading level: ChatGPT = 100%, Bard = 66%, UCF = 19%; p<0.001). When asked to identify which answer was not AI-generated, 54.2% of responses indicated ChatGPT, 26.6% indicated Bard, and only 19.3% correctly identified the UCF response.

Conclusions: In a blind evaluation, AI-generated responses from ChatGPT and Bard surpassed the quality of official patient education materials in endourology, suggesting that current AI platforms are already a reliable resource for basic urologic care information. AI-generated responses do, however, tend to require a higher reading level, which may limit their accessibility to a broader audience.
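For reference, the Flesch-Kincaid Reading Grade Level cited in the Methods is a standard readability metric that maps average sentence length and average syllables per word onto a U.S. school grade level:

Flesch-Kincaid Grade Level = 0.39 * (total words / total sentences) + 11.8 * (total syllables / total words) - 15.59

A grade of 13 or higher corresponds roughly to a college reading level, the threshold reported in the Results.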
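The sketch below illustrates how an analysis of this kind could be run in Python. It is not the authors' code: the Likert scores are invented for illustration, the reading-level counts are reconstructed from the reported percentages (100%, 66%, and 19% of 32 responses), and it assumes the third-party scipy and textstat packages are available.

```python
# Hypothetical sketch of the study's statistical comparisons (not the authors' code).
from scipy.stats import f_oneway, chi2_contingency
import textstat  # third-party package implementing Flesch-Kincaid readability

# Illustrative Likert accuracy ratings (1-5) from six reviewers -- invented values
chatgpt_scores = [5, 4, 5, 4, 5, 4]
bard_scores    = [4, 4, 3, 4, 4, 3]
ucf_scores     = [3, 3, 2, 3, 3, 2]

# One-way ANOVA comparing mean ratings across the three platforms
f_stat, p_anova = f_oneway(chatgpt_scores, bard_scores, ucf_scores)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Chi-square test on counts of responses at a college-or-higher reading level.
# Rows: platform; columns: [college or higher, below college].
# Counts reconstructed from the abstract's percentages of 32 responses.
observed = [[32, 0],   # ChatGPT: 100%
            [21, 11],  # Bard: ~66%
            [6, 26]]   # UCF: ~19%
chi2, p_chi2, dof, expected = chi2_contingency(observed)
print(f"Chi-square: chi2={chi2:.2f}, p={p_chi2:.4f}")

# Readability of a candidate response text
grade = textstat.flesch_kincaid_grade("Your kidney stone may pass on its own.")
print(f"Flesch-Kincaid grade level: {grade:.1f}")
```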

Journal of Endourology. 2024 Mar 05 [Epub ahead of print]

Christopher Connors, Kavita Gupta, Johnathan Alexander Khusid, Raymond Khargi, Alan Yaghoubian, Micah Levy, Blair Gallante, William Atallah, Mantu Gupta

Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, United States; Department of Urology, University of California Los Angeles David Geffen School of Medicine, Los Angeles, California, United States.