Do pretrained transformers infer telicity like humans?

Yiyun Zhao, Jian Gang Ngui, Lucy Hall Hartley, Steven Bethard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Pretrained transformer-based language models achieve state-of-the-art performance in many NLP tasks, but it is an open question whether the knowledge acquired by the models during pretraining resembles the linguistic knowledge of humans. We present both humans and pretrained transformers with descriptions of events, and measure their preference for telic interpretations (the event has a natural endpoint) or atelic interpretations (the event does not have a natural endpoint). To measure these preferences and determine what factors influence them, we design an English test and a novel-word test that include a variety of linguistic cues (noun phrase quantity, resultative structure, contextual information, temporal units) that bias toward certain interpretations. We find that humans’ choice of telicity interpretation is reliably influenced by theoretically-motivated cues, transformer models (BERT and RoBERTa) are influenced by some (though not all) of the cues, and transformer models often rely more heavily on temporal units than humans do.
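
The preference-measurement setup described in the abstract can be illustrated with a minimal masked-language-model probe. The sketch below is only an illustration of that general idea, not the authors' actual protocol: the sentence frame, the candidate prepositions ("in" as a telic-biased cue vs. "for" as an atelic-biased cue), and the helper name candidate_logprobs are hypothetical choices for this example.

```python
# Minimal sketch (assumed setup, not the paper's test items): compare how
# strongly a masked LM prefers "in" vs. "for" in a temporal adverbial slot.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def candidate_logprobs(frame: str, candidates: list[str]) -> dict[str, float]:
    """Return the log-probability of each single-token candidate in the [MASK] slot."""
    inputs = tokenizer(frame, return_tensors="pt")
    # Locate the masked position in the tokenized input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    return {c: log_probs[tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}

# Hypothetical frame: a higher score for "in" would suggest a telic reading,
# a higher score for "for" an atelic reading of the described event.
print(candidate_logprobs("She ate the apple [MASK] an hour.", ["in", "for"]))
```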

Original language: English (US)
Title of host publication: CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings
Editors: Arianna Bisazza, Omri Abend
Publisher: Association for Computational Linguistics (ACL)
Pages: 72-81
Number of pages: 10
ISBN (Electronic): 9781955917056
State: Published - 2021
Event: 25th Conference on Computational Natural Language Learning, CoNLL 2021 - Virtual, Online
Duration: Nov 10, 2021 - Nov 11, 2021

Publication series

Name: CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings

Conference

Conference: 25th Conference on Computational Natural Language Learning, CoNLL 2021
City: Virtual, Online
Period: 11/10/21 - 11/11/21

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Linguistics and Language
