TY - GEN
T1 - Low Resource Causal Event Detection from Biomedical Literature
AU - Liang, Zhengzhong
AU - Noriega-Atala, Enrique
AU - Morrison, Clayton
AU - Surdeanu, Mihai
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
AB - Recognizing causal precedence relations among the chemical interactions in biomedical literature is crucial to understanding the underlying biological mechanisms. However, detecting such causal relations can be hard because: (1) they are often not expressed explicitly by specific phrases but are implied by very diverse expressions in the text, and (2) annotating datasets for causal relation detection requires considerable expert knowledge and effort. In this paper, we propose a strategy to address both challenges by training neural models with in-domain pre-training and knowledge distillation. We show that, using a very limited amount of labeled data and a sufficient amount of unlabeled data, the neural models outperform previous baselines on the causal precedence detection task and are ten times faster at inference than the BERT base model.
UR - http://www.scopus.com/inward/record.url?scp=85149142155&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149142155&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85149142155
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 252
EP - 263
BT - BioNLP 2022 @ ACL 2022 - Proceedings of the 21st Workshop on Biomedical Language Processing
PB - Association for Computational Linguistics (ACL)
T2 - 21st Workshop on Biomedical Language Processing, BioNLP 2022 at the Association for Computational Linguistics Conference, ACL 2022
Y2 - 26 May 2022
ER -