While health information is abundant online, its accuracy is variable and it is often not written in a way that is genuinely accessible to the general public. Most adults in the U.S. read at an 8th-grade level, with over 20% of individuals comprehending below this level.2 Current recommendations from the American Medical Association state that all patient health information should be written at a 6th-grade level or below to improve health literacy and outcomes for patients.3
In the field of urology, a frequent source of patient questions is the treatment and management of kidney stones. Our article “Application of Artificial Intelligence to Patient-Targeted Health Information on Kidney Stone Disease” compares the reading level and quality of the top 50 online articles retrieved when searching for “kidney stone prevention” and “kidney stone treatment” against responses generated by the AI chatbot ChatGPT.
The online articles and AI-generated responses were evaluated for approximate reading level using four extensively validated readability tools: the Flesch Reading Ease score, the Flesch-Kincaid Grade Level, the Gunning Fog Index, and the Simple Measure of Gobbledygook (SMOG). The quality of the health information was assessed with the DISCERN instrument and the JAMA benchmark criteria, which rate the reliability of health content and the transparency of its authorship.
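For readers curious how these grade-level estimates are produced, the sketch below implements the four published readability formulas in Python. The formulas themselves are standard; the syllable counter is a rough vowel-group heuristic included here only for illustration (validated tools use dictionary-based syllable counts), and SMOG is formally defined on 30-sentence samples, so the output should be treated as approximate.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, dropping a typical silent final 'e'.
    # Validated readability tools use dictionary-based syllable counts instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Gunning Fog and SMOG count "complex" (3+ syllable) words.
    polysyllabic = sum(1 for w in words if count_syllables(w) >= 3)

    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word

    return {
        "Flesch Reading Ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "Flesch-Kincaid Grade Level": 0.39 * wps + 11.8 * spw - 15.59,
        "Gunning Fog Index": 0.4 * (wps + 100 * polysyllabic / len(words)),
        # SMOG is defined on 30-sentence samples; shorter texts are extrapolated.
        "SMOG": 1.0430 * math.sqrt(polysyllabic * 30 / len(sentences)) + 3.1291,
    }

if __name__ == "__main__":
    sample = ("Drink plenty of water every day. Eating less salt "
              "can lower your chance of forming another kidney stone.")
    for name, score in readability_scores(sample).items():
        print(f"{name}: {score:.1f}")
```

Higher Flesch Reading Ease scores indicate easier text, while the other three metrics approximate the U.S. school grade needed to comprehend it, which is how the grade levels reported below are expressed.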
We found that the online articles for “kidney stone prevention” and “kidney stone treatment” were typically written at a 10th- to 12th-grade reading level, while the AI-generated responses were produced at a 5th- or 6th-grade level. ChatGPT also outperformed the online articles on quality of health information, with the AI text rated “good” by the DISCERN instrument compared with an average rating of “fair” among the online articles. When the online articles were stratified by source, ChatGPT’s answers continued to outperform content produced by academic, commercial, hospital-affiliated, and non-profit organizations’ web pages on both metrics.
Conclusion: Low health literacy remains an ongoing challenge for patients and healthcare providers. Every day, patients seek answers online for health problems such as kidney stones and are faced with content that is tedious and difficult to understand. While greater efforts will be necessary to fully address the barriers to health literacy in the United States, our study demonstrates the role that AI tools such as ChatGPT may play in tackling this goal in the future.
Written by: Matthew Carter,1 Reza Kianian,1 Sriram Eleswarapu,1 Naveen Kachroo2
1. Department of Urology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA.
2. Department of Urology, Vattikuti Urology Institute, Henry Ford Hospital, Detroit, MI.
1. Cohen RA, Adams PF. Use of the internet for health information: United States, 2009. NCHS Data Brief. 2011;(66):1-8.
2. Kirsch IS, Jungeblut A, Jenkins L, Kolstad A. Adult literacy in America: a first look at the findings of the National Adult Literacy Survey [Internet]. Washington, DC: National Center for Education Statistics, U.S. Department of Education; 1993 [cited 2023 Apr 10]. p. 1-201.
3. Weiss BD. Health literacy: a manual for clinicians [Internet]. Chicago: American Medical Association Foundation and American Medical Association; 2007 [cited 2023 Apr 10]. p. 1-51.