Multi-label annotation of text reports from computed tomography of the chest, abdomen, and pelvis using deep learning

Vincent M. D’Anniballe, Fakrul Islam Tushar, Khrystyna Faryna, Songyue Han, Maciej A. Mazurowski, Geoffrey D. Rubin, Joseph Y. Lo

Research output: Contribution to journal › Article › peer-review


Abstract

Background: Artificially intelligent systems for detecting abnormalities in body (chest, abdomen, and pelvis) computed tomography (CT) must be not only accurate but also able to handle the true breadth of findings that radiologists encounter. Currently, the major bottleneck in developing multi-disease classifiers is the lack of manually annotated data. The purpose of this work was to develop high-throughput multi-label annotators for body CT reports that can be applied across a variety of abnormalities, organs, and disease states, thereby mitigating the need for human annotation.

Methods: We used a dictionary approach to develop rule-based algorithms (RBA) for extracting disease labels from radiology text reports. We targeted three organ systems (lungs/pleura, liver/gallbladder, kidneys/ureters) with four diseases per system, selected based on their prevalence in our dataset. To generalize beyond pre-defined keywords, attention-guided recurrent neural networks (RNN) were trained on the RBA-extracted labels to classify reports as positive for one or more diseases or as normal for each organ system. Effects on disease classification performance were evaluated for random initialization versus pre-trained embeddings and for different training dataset sizes. The RBA was tested on a subset of 2158 manually labeled reports, with performance reported as accuracy and F-score. The RNN was tested against a set of 48,758 reports labeled by the RBA, with performance reported as area under the receiver operating characteristic curve (AUC) and 95% CIs calculated using the DeLong method.

Results: Manual validation of the RBA confirmed 91–99% accuracy across the 15 different labels. Our models extracted disease labels from 261,229 radiology reports of 112,501 unique subjects. Pre-trained models outperformed random initialization across all diseases. As the training dataset size was reduced, performance remained robust except for a few diseases with relatively few cases. Pre-trained classification AUCs exceeded 0.95 for all four disease outcomes and for normality across all three organ systems.

Conclusions: Our label-extraction pipeline encompassed a wide variety of cases and diseases in body CT reports by generalizing beyond strict rules with exceptional accuracy. The method described can be easily adapted to enable automated labeling of hospital-scale medical datasets for training image-based disease classifiers.
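The dictionary-based RBA described above can be illustrated with a minimal sketch: match per-organ keyword lists against report text and suppress findings preceded by a negation cue in the same sentence. The term dictionaries, negation cues, and function names below are hypothetical placeholders, not the paper's actual vocabularies or rules.

```python
import re

# Hypothetical keyword dictionaries per organ system; the paper's actual
# RBA vocabularies are far larger and were curated by the authors.
DISEASE_TERMS = {
    "lungs/pleura": {
        "nodule": ["nodule", "nodular opacity"],
        "effusion": ["pleural effusion"],
    },
    "liver/gallbladder": {
        "lesion": ["hepatic lesion", "liver mass"],
    },
}

# Simple negation cues appearing before a finding (e.g., "no pleural effusion").
NEGATION = re.compile(r"\b(no|without|negative for|free of)\b[^.]*$", re.I)

def label_report(text, organ_system):
    """Return the set of disease labels asserted (non-negated) in a report."""
    labels = set()
    for disease, terms in DISEASE_TERMS[organ_system].items():
        for term in terms:
            for match in re.finditer(re.escape(term), text, re.I):
                # Only inspect text before the match within the same sentence.
                prefix = text[:match.start()].rsplit(".", 1)[-1]
                if not NEGATION.search(prefix):
                    labels.add(disease)
    return labels

report = "Lungs: No pleural effusion. A 6 mm nodule in the right upper lobe."
print(label_report(report, "lungs/pleura"))  # {'nodule'}
```

Labels produced this way can then serve as weak supervision for a text classifier (the attention-guided RNN in this work), which learns to generalize beyond the fixed keyword lists.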

Original language: English (US)
Article number: 102
Journal: BMC Medical Informatics and Decision Making
Volume: 22
Issue number: 1
DOIs
State: Published - Dec 2022
Externally published: Yes

Keywords

  • Attention RNN
  • Computed tomography
  • Natural language processing
  • Report labeling
  • Rule-based algorithm
  • Weak supervision

ASJC Scopus subject areas

  • Health Policy
  • Health Informatics
