Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder

Zheng Tang, Mihai Surdeanu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We introduce a method that transforms a rule-based relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains a RE classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and the joint learning improves the performance of both the classifier and the decoder.

Original language: English (US)
Title of host publication: TrustNLP 2021 - 1st Workshop on Trustworthy Natural Language Processing, Proceedings of the Workshop
Editors: Yada Pruksachatkun, Anil Ramakrishna, Kai-Wei Chang, Satyapriya Krishna, Jwala Dhamala, Tanaya Guha, Xiang Ren
Publisher: Association for Computational Linguistics (ACL)
Pages: 1-7
Number of pages: 7
ISBN (Electronic): 9781954085336
State: Published - 2021
Event: 1st Workshop on Trustworthy Natural Language Processing, TrustNLP 2021 - Virtual, Online
Duration: Jun 10 2021 → …

Publication series

Name: TrustNLP 2021 - 1st Workshop on Trustworthy Natural Language Processing, Proceedings of the Workshop

Conference

Conference: 1st Workshop on Trustworthy Natural Language Processing, TrustNLP 2021
City: Virtual, Online
Period: 6/10/21 → …

ASJC Scopus subject areas

  • Software
  • Language and Linguistics
  • Computational Theory and Mathematics
  • Linguistics and Language
