TY - CONF
T1 - Lightly-supervised representation learning with global interpretability
AU - Zupon, Andrew
AU - Alexeeva, Maria
AU - Valenzuela-Escárcega, Marco A.
AU - Nagesh, Ajay
AU - Surdeanu, Mihai
N1 - Funding Information:
This work was supported by the Defense Advanced Research Projects Agency (DARPA) under the Big Mechanism program, grant W911NF-14-1-0395, and by the Bill and Melinda Gates Foundation HBGDki Initiative. Marco Valenzuela-Escárcega and Mihai Surdeanu declare a financial interest in lum.ai. This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies.
Publisher Copyright:
© 2019 Association for Computational Linguistics.
PY - 2019
Y1 - 2019
AB - We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-the-art bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.
UR - http://www.scopus.com/inward/record.url?scp=85084229748&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084229748&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85084229748
T3 - NLP@NAACL-HLT 2019 - 3rd Workshop on Structured Prediction for NLP, Proceedings
SP - 18
EP - 28
BT - NLP@NAACL-HLT 2019 - 3rd Workshop on Structured Prediction for NLP, Proceedings
PB - Association for Computational Linguistics (ACL)
T2 - 3rd Workshop on Structured Prediction for NLP, NLP@NAACL-HLT 2019
Y2 - 7 June 2019
ER -
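
Note: the abstract describes building a globally-interpretable decision list by ranking extraction patterns according to their proximity to the average entity embedding of each class. Below is a minimal illustrative sketch of that ranking step, not the authors' implementation; all names (pattern_vecs, entity_vecs, rank_patterns) and the use of cosine similarity are assumptions made for illustration.

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_patterns(pattern_vecs: dict[str, np.ndarray],
                      entity_vecs: list[np.ndarray]) -> list[tuple[str, float]]:
        """Order patterns by proximity to the mean embedding of a class's
        entities, yielding a decision list for that class."""
        centroid = np.mean(entity_vecs, axis=0)
        scored = [(p, cosine(v, centroid)) for p, v in pattern_vecs.items()]
        return sorted(scored, key=lambda s: s[1], reverse=True)

Running rank_patterns once per category and concatenating the top-ranked patterns would produce the kind of human-editable decision list the abstract refers to.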