PUBLICATIONS


NLM_NIH at SemEval-2017 Task 3: from Question Entailment to Question Similarity for Community Question Answering.


Ben Abacha A, Demner-Fushman D

In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval@ACL) 2017: 349-352.

Abstract:

This paper describes our participation in SemEval-2017 Task 3 on Community Question Answering (cQA). The Question Similarity subtask (B) aims to rank a set of related questions retrieved by a search engine according to their similarity to the original question. We adapted our feature-based system for Recognizing Question Entailment (RQE) to the question similarity task. Tested on the cQA-B-2016 test data, our RQE system outperformed the best system of the 2016 challenge on all measures, with 77.47 MAP and 80.57 Accuracy. On the cQA-B-2017 test data, the performance of all systems dropped by around 30 points. Our primary system obtained 44.62 MAP, 67.27 Accuracy, and 47.25 F1. The best cQA-B-2017 system achieved 47.22 MAP and 42.37 F1. Our system ranked sixth in MAP and third in F1 out of the 13 participating teams.
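
For context, the primary metric reported above is Mean Average Precision (MAP) over the reranked lists of related questions. The short Python sketch below illustrates how MAP is computed; the relevance labels are made up for illustration and the code is not taken from the task's official scorer.

# Minimal sketch of Mean Average Precision (MAP) for reranked related
# questions. Relevance labels below are hypothetical examples.

def average_precision(relevance):
    """Average precision for one ranked list of related questions.

    `relevance` is a list of booleans: True if the related question at
    that rank is relevant to the original question.
    """
    hits, precisions = 0, []
    for rank, is_relevant in enumerate(relevance, start=1):
        if is_relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at this rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(ranked_lists):
    """MAP over all original questions, each with its reranked list."""
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

# Two original questions, each with a reranked list of related questions
# (True = relevant to the original question).
ranked_lists = [
    [True, False, True, False],   # AP = (1/1 + 2/3) / 2 ~= 0.833
    [False, True, False, False],  # AP = (1/2) / 1 = 0.5
]
print(f"MAP = {mean_average_precision(ranked_lists):.4f}")  # ~= 0.6667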

