Selectional preferences for semantic role classification

Beñat Zapirain, Eneko Agirre, Lluís Màrquez, Mihai Surdeanu

Research output: Contribution to journal › Article › peer-review


Abstract

This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of models based on WordNet and distributional-similarity SPs. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and by almost 40 F1 points out of domain. Furthermore, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17% error reduction) and out of domain (13% error reduction). Finally, we show that in an end-to-end semantic role labeling system we obtain small but statistically significant improvements, even though our modified SRC model affects only approximately 4% of the argument candidates. Our post hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.
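The abstract does not spell out the scoring details of the distributional-similarity SP models. As a rough illustration only (not the authors' implementation; the vectors, function names, and data below are toy assumptions), a similarity-based selectional preference can be sketched as the average distributional similarity between a candidate argument head and the head words previously observed with a given predicate-role pair:

```python
# Illustrative sketch of a distributional-similarity selectional preference
# score. This is NOT the paper's implementation; vectors and names are toy
# assumptions for demonstration.
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts of feature -> weight)."""
    dot = sum(w * v.get(f, 0.0) for f, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def sp_score(candidate_vec, seen_head_vecs):
    """Preference of a (predicate, role) pair for a candidate head word:
    average similarity to head words seen with that predicate-role in training."""
    if not seen_head_vecs:
        return 0.0
    return sum(cosine(candidate_vec, v) for v in seen_head_vecs) / len(seen_head_vecs)

# Toy distributional vectors (context-word weights); real models would derive
# these from corpus co-occurrence counts.
vectors = {
    "pizza":  {"eat": 5.0, "slice": 3.0, "cheese": 4.0},
    "bread":  {"eat": 4.0, "slice": 4.0, "bake": 2.0},
    "hammer": {"hit": 5.0, "nail": 4.0, "tool": 3.0},
}

# Hypothetical head words seen as the patient argument of "eat" in training data.
seen_patients_of_eat = [vectors["pizza"], vectors["bread"]]

print(sp_score(vectors["pizza"], seen_patients_of_eat))   # high: plausible patient of "eat"
print(sp_score(vectors["hammer"], seen_patients_of_eat))  # low: implausible patient of "eat"
```

In an SRC system of the kind described, such scores would typically be turned into features (e.g., the role whose SP score is highest for the candidate head), which is where sparse lexical features alone fall short.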

Original language: English (US)
Pages (from-to): 631-663
Number of pages: 33
Journal: Computational Linguistics
Volume: 39
Issue number: 3
DOIs
State: Published - Sep 2013

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Computer Science Applications
  • Artificial Intelligence
