Combining Extraction and Generation for Constructing Belief-Consequence Causal Links

Maria Alexeeva, Allegra A. Beal Cohen, Mihai Surdeanu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we introduce and justify a new task, causal link extraction based on beliefs, and present a qualitative analysis of the ability of a large language model, InstructGPT-3, to generate implicit consequences of beliefs. Because the language model-generated consequences are promising but not consistent, we propose directions for future work, including data collection, explicit consequence extraction using rule-based and language modeling-based approaches, and using explicitly stated consequences of beliefs to fine-tune or prompt the language model to produce outputs suitable for the task.
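
The prompting direction mentioned in the abstract, eliciting implicit consequences of an explicitly stated belief from an InstructGPT-3 model, could look roughly like the sketch below. This is a minimal illustration and not the authors' code: it assumes the legacy openai Python client (pre-1.0), an illustrative model name (text-davinci-002), and a made-up prompt template and example belief.

```python
# Minimal sketch (not the authors' setup): prompt an InstructGPT-3 model to
# generate implicit consequences of a stated belief.
# Assumes the legacy openai Python client (pre-1.0) and OPENAI_API_KEY set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def generate_consequences(belief: str, n: int = 3) -> list[str]:
    """Ask the model for plausible consequences of acting on a belief."""
    prompt = (
        "Someone holds the following belief:\n"
        f'"{belief}"\n'
        "List likely consequences of acting on this belief:\n1."
    )
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative InstructGPT-3 model name
        prompt=prompt,
        max_tokens=128,
        temperature=0.7,
        n=n,  # sample several completions to inspect consistency
    )
    return [choice["text"].strip() for choice in response["choices"]]


if __name__ == "__main__":
    # Example belief is purely illustrative.
    for consequence in generate_consequences("Planting earlier in the season increases yield"):
        print(consequence)
```

Sampling several completions per belief mirrors the kind of qualitative check the paper describes: the generated consequences can then be compared for plausibility and consistency across samples.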

Original language: English (US)
Title of host publication: Insights 2022 - 3rd Workshop on Insights from Negative Results in NLP, Proceedings of the Workshop
Editors: Shabnam Tafreshi, Joao Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, Arjun Reddy Akula
Publisher: Association for Computational Linguistics (ACL)
Pages: 159-164
Number of pages: 6
ISBN (Electronic): 9781955917407
State: Published - 2022
Event: 3rd Workshop on Insights from Negative Results in NLP, Insights 2022 - Dublin, Ireland
Duration: May 26, 2022 → …

Publication series

Name: Insights 2022 - 3rd Workshop on Insights from Negative Results in NLP, Proceedings of the Workshop

Conference

Conference: 3rd Workshop on Insights from Negative Results in NLP, Insights 2022
Country/Territory: Ireland
City: Dublin
Period: 5/26/22 → …

ASJC Scopus subject areas

  • Language and Linguistics
  • Computational Theory and Mathematics
  • Computer Science Applications
  • Software
