Abstract
Since millions seek health information online, it is vital that this information be comprehensible. Most studies use readability formulas, which ignore vocabulary, and conclude that online health information is too difficult. We developed a vocabulary-based, naïve Bayes classifier to distinguish between three difficulty levels in text. It proved 98% accurate in a 250-document evaluation. We compared our classifier with readability formulas for 90 new documents of different origins and asked representative human evaluators, an expert and a consumer, to judge each document. The average readability grade level for educational and commercial pages was 10th grade or higher, too difficult according to the current literature. In contrast, the classifier showed that 70-90% of these pages were written at an intermediate, appropriate level, indicating that vocabulary usage is frequently appropriate in text that readability formula evaluations consider too difficult. The expert judged the pages more difficult for a consumer than the consumer did.
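The abstract describes a vocabulary-based naïve Bayes classifier over three difficulty levels. The snippet below is a minimal sketch of that general approach, not the authors' implementation: it assumes scikit-learn, invented example documents, and hypothetical level labels, using bag-of-words counts as the vocabulary features.

```python
# Sketch of a vocabulary-based naive Bayes difficulty classifier
# (illustrative only; training data and labels are hypothetical).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training documents, each labeled with one of three difficulty levels.
train_docs = [
    "the doctor will check your blood pressure",
    "hypertension is often managed with lifestyle changes and medication",
    "angiotensin-converting enzyme inhibitors reduce systemic vascular resistance",
]
train_labels = ["easy", "intermediate", "difficult"]

# Bag-of-words vocabulary counts feed a multinomial naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

# Predict the difficulty level of an unseen consumer-health sentence.
print(model.predict(["ace inhibitors lower afterload in heart failure"]))
```

Unlike readability formulas such as Flesch-Kincaid, which score sentence and word length, a classifier of this kind bases its judgment on which words actually appear in the text.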
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 1409-1419 |
| Number of pages | 11 |
| Journal | Journal of the American Society for Information Science and Technology |
| Volume | 59 |
| Issue number | 9 |
| State | Published - Jul 2008 |
| Externally published | Yes |
ASJC Scopus subject areas
- Software
- Information Systems
- Human-Computer Interaction
- Computer Networks and Communications
- Artificial Intelligence