TY - JOUR
T1 - Selectional preferences for semantic role classification
AU - Zapirain, Beñat
AU - Agirre, Eneko
AU - Màrquez, Lluís
AU - Surdeanu, Mihai
N1 - Funding Information:
Acknowledgments. This paper has been funded by research grants from CONACyT, Grant 81965, and from PAPIIT–UNAM, Grant 104408.
PY - 2013/9
Y1 - 2013/9
N2 - This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of models based on WordNet and distributional-similarity SPs. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and almost 40 F1 points out of domain. Furthermore, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17% error reduction) and out of domain (13% error reduction). Finally, we show that in an end-to-end semantic role labeling system we obtain small but statistically significant improvements, even though our modified SRC model affects only approximately 4% of the argument candidates. Our post hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.
AB - This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of models based on WordNet and distributional-similarity SPs. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and almost 40 F1 points out of domain. Furthermore, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17% error reduction) and out of domain (13% error reduction). Finally, we show that in an end-to-end semantic role labeling system we obtain small but statistically significant improvements, even though our modified SRC model affects only approximately 4% of the argument candidates. Our post hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.
UR - http://www.scopus.com/inward/record.url?scp=84881176433&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84881176433&partnerID=8YFLogxK
U2 - 10.1162/COLI_a_00145
DO - 10.1162/COLI_a_00145
M3 - Article
AN - SCOPUS:84881176433
SN - 0891-2017
VL - 39
SP - 631
EP - 663
JO - Computational Linguistics
JF - Computational Linguistics
IS - 3
ER -