TY - GEN
T1 - Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification
T2 - 21st Conference on Computational Natural Language Learning, CoNLL 2017
AU - Sharp, Rebecca
AU - Surdeanu, Mihai
AU - Jansen, Peter
AU - Valenzuela-Escárcega, Marco A.
AU - Clark, Peter
AU - Hammond, Michael
N1 - Publisher Copyright:
© 2017 Association for Computational Linguistics.
PY - 2017
Y1 - 2017
AB - For many applications of question answering (QA), being able to explain why a given model chose an answer is critical. However, the lack of labeled data for answer justifications makes learning this difficult and expensive. Here we propose an approach that uses answer ranking as distant supervision for learning how to select informative justifications, where justifications serve as inferential connections between the question and the correct answer while often containing little lexical overlap with either. We propose a neural network architecture for QA that reranks answer justifications as an intermediate (and human-interpretable) step in answer selection. Our approach is informed by a set of features designed to combine both learned representations and explicit features to capture the connection between questions, answers, and answer justifications. We show that with this end-to-end approach we are able to significantly improve upon a strong IR baseline in both justification ranking (+9% rated highly relevant) and answer selection (+6% P@1).
UR - http://www.scopus.com/inward/record.url?scp=85051507685&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85051507685&partnerID=8YFLogxK
DO - 10.18653/v1/K17-1009
M3 - Conference contribution
AN - SCOPUS:85051507685
T3 - CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings
SP - 69
EP - 79
BT - CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings
PB - Association for Computational Linguistics (ACL)
Y2 - 3 August 2017 through 4 August 2017
ER -
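
Since the record above only abstracts the method, here is a minimal, self-contained sketch of the distant-supervision idea it describes: answer-level correctness labels train a model that scores justifications, with an answer's score aggregated from its candidate justifications. Everything in the sketch (class and function names, the max-over-justifications aggregation, layer sizes, and the toy explicit features) is an illustrative assumption, not the authors' actual architecture.

```python
# A minimal sketch (not the paper's code) of answer ranking as distant
# supervision for justification reranking. All names, dimensions, and
# feature choices below are assumptions for illustration.
import torch
import torch.nn as nn


class JustificationReranker(nn.Module):
    """Scores a (question, answer, justification) triple by combining
    learned embeddings with explicit (e.g., lexical-overlap) features."""

    def __init__(self, emb_dim: int, n_explicit: int, hidden: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(3 * emb_dim + n_explicit, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q, a, j, explicit):
        # q, a, j: [n_just, emb_dim]; explicit: [n_just, n_explicit]
        return self.scorer(torch.cat([q, a, j, explicit], dim=-1)).squeeze(-1)


def answer_score(model, q_emb, a_emb, j_embs, explicit):
    """Aggregate justification scores into an answer score (max here),
    so gradients from answer-level labels select good justifications."""
    n = j_embs.size(0)
    q = q_emb.unsqueeze(0).expand(n, -1)
    a = a_emb.unsqueeze(0).expand(n, -1)
    scores = model(q, a, j_embs, explicit)
    return scores.max(), scores


# Toy pairwise max-margin step: the correct answer (via its best
# justification) should outscore an incorrect one by a margin.
if __name__ == "__main__":
    torch.manual_seed(0)
    emb_dim, n_explicit, n_just = 50, 4, 5
    model = JustificationReranker(emb_dim, n_explicit)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    q = torch.randn(emb_dim)
    pos_a, neg_a = torch.randn(emb_dim), torch.randn(emb_dim)
    pos_j, neg_j = torch.randn(n_just, emb_dim), torch.randn(n_just, emb_dim)
    pos_f, neg_f = torch.rand(n_just, n_explicit), torch.rand(n_just, n_explicit)

    pos_score, _ = answer_score(model, q, pos_a, pos_j, pos_f)
    neg_score, _ = answer_score(model, q, neg_a, neg_j, neg_f)
    loss = torch.relu(1.0 - (pos_score - neg_score))  # margin of 1
    loss.backward()
    opt.step()
```

The key design point the sketch tries to capture: only answer correctness is labeled, so a margin loss applied to the aggregated answer score is what lets justification ranking be learned without any justification-level annotation, which is the human-interpretable intermediate step the abstract highlights.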