Adapting state-of-the-art deep language models to clinical information extraction systems: Potentials, challenges, and solutions
Authors
Zhou, Liyuan
Suominen, Hanna
Gedeon, Tom
Publisher
JMIR Publications Inc
Abstract
Background: Deep learning (DL) has been applied with success to problems in speech recognition, visual object
recognition, and object detection, as well as in other domains such as drug discovery and genomics. Natural language processing
has likewise achieved noticeable progress in artificial intelligence. This creates an opportunity to improve the accuracy and human-computer interaction of clinical
informatics. However, owing to differences in vocabulary and context between a clinical environment and generic English,
transplanting language models directly from up-to-date methods to real-world health care settings is not always satisfactory.
Moreover, legal restrictions on using privacy-sensitive patient records hinder progress in applying machine learning (ML)
to clinical language processing.
Objective: The aim of this study was to investigate 2 ways of adapting state-of-the-art language models to extracting patient
information from free-form clinical narratives so as to populate a handover form automatically at a nursing shift change, for proofing
and revision by hand: first, by using domain-specific word representations and second, by using transfer learning models to adapt
knowledge from general to clinical English. We described the practical problem, formulated it as the ML task of
information extraction, proposed methods for solving the task, and evaluated their performance.
Methods: First, word representations trained from different domains served as the input of a DL system for information extraction.
Second, the transfer learning model was applied as a way to adapt the knowledge learned from general text sources to the task
domain. The goal was to improve extraction performance, especially for the classes that were topically related
but did not have a sufficient number of annotated examples available for ML directly from the target domain. A total of 3 independent
datasets were generated for this task, and they were used as the training (101 patient reports), validation (100 patient reports),
and test (100 patient reports) sets in our experiments.
Results: Our system achieves state-of-the-art performance on this task. Domain-specific word representations improved the macroaveraged
F1 by 3.4%. Transferring the knowledge from general English corpora to the task-specific domain contributed a further 7.1%
improvement. The best performance in populating the handover form with 37 headings was the macroaveraged F1 of 41.6% and
F1 of 81.1% for filtering out irrelevant information. Performance differences between this system and its baseline were statistically
significant (P<.001; Wilcoxon test).
Conclusions: To our knowledge, this study is the first attempt to transfer knowledge from general-domain deep models to specific tasks in
health care and gain a significant improvement. As transfer learning shows its advantage over other methods, especially on classes
with a limited amount of training data, less expert time is needed to annotate data for ML, which may enable good results even
in resource-poor domains.
Source
JMIR Medical Informatics
Access Statement
Open Access
License Rights
Creative Commons Attribution 4.0 International License