Exploring Text Representations for Generative Temporal Relation Extraction

Dmitriy Dligach, Timothy Miller, Steven Bethard, Guergana Savova

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

Sequence-to-sequence models are appealing because they allow both encoder and decoder to be shared across many tasks by formulating those tasks as text-to-text problems. Despite recently reported successes of such models, we find that engineering input/output representations for such text-to-text models is challenging. On the Clinical TempEval 2016 relation extraction task, the most natural choice of output representation, where relations are spelled out in simple predicate logic statements, did not lead to good performance. We explore a variety of input/output representations; the most successful one prompts one event at a time and achieves results competitive with standard pairwise temporal relation extraction systems.
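The two representation styles contrasted in the abstract can be illustrated with a minimal sketch. The event markers, relation labels, and prompt format below are hypothetical, chosen only to make the idea concrete; the paper's actual formats may differ.

```python
# Hypothetical sketch of two ways to cast temporal relation extraction
# as a text-to-text problem, as contrasted in the abstract.

sentence = "The patient was admitted after she developed a fever."

# (1) Predicate-logic-style target: all relations spelled out at once.
#     The abstract reports this natural choice performed poorly.
logic_target = "AFTER(admitted, developed)"

# (2) One-event-at-a-time prompting: mark a single event in the input
#     so the model generates only the relations involving that event.
def make_prompt(sentence: str, event: str) -> str:
    """Wrap one event mention in (assumed) <e>...</e> markers and ask for its relations."""
    return sentence.replace(event, f"<e> {event} </e>") + " relations:"

prompt = make_prompt(sentence, "admitted")
print(prompt)
# The model would then be trained to generate a short target such as
# "admitted AFTER developed" for this prompt.
```

Prompting per event keeps each target short and unambiguous, which is one plausible reason the abstract reports it outperforming the all-relations-at-once predicate-logic output.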

Original language: English (US)
Title of host publication: ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings
Editors: Tristan Naumann, Steven Bethard, Kirk Roberts, Anna Rumshisky
Publisher: Association for Computational Linguistics (ACL)
Pages: 109-113
Number of pages: 5
ISBN (Electronic): 9781955917773
State: Published - 2022
Event: 4th Workshop on Clinical Natural Language Processing, ClinicalNLP 2022 - Seattle, United States
Duration: Jul 14 2022 → …

Publication series

Name: ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings

Conference

Conference: 4th Workshop on Clinical Natural Language Processing, ClinicalNLP 2022
Country/Territory: United States
City: Seattle
Period: 7/14/22 → …

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
