ConvAI3: Generating clarifying questions for open-domain dialogue systems (ClariQ)
arXiv preprint arXiv:2009.11352, 2020
This document presents a detailed description of the challenge on clarifying questions for dialogue systems (ClariQ). The challenge is organized as part of the Conversational AI challenge series (ConvAI3) at the Search Oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of a conversational system is to return an appropriate answer in response to user requests. However, some user requests may be ambiguous. In IR settings such a situation is handled mainly through diversification of the search result page, but this is much more challenging in dialogue settings with limited bandwidth. Therefore, in this challenge we provide a common evaluation framework to evaluate mixed-initiative conversations. Participants are asked to rank clarifying questions in an information-seeking conversation. The challenge is organized in two stages: in Stage 1 we evaluate submissions in an offline setting with single-turn conversations, and top participants of Stage 1 get the chance to have their models tested by human annotators.
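To make the Stage 1 ranking task concrete, below is a minimal Python sketch: given an ambiguous user request, it orders a bank of candidate clarifying questions by a simple term-overlap score. This is a toy baseline under our own assumptions, not the challenge's official method, and the example request and candidate questions are hypothetical rather than taken from the ClariQ data.

from collections import Counter
import math

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def overlap_score(request: str, question: str) -> float:
    """Score a candidate clarifying question by weighted term overlap
    with the user request (a crude stand-in for a learned ranker)."""
    req = Counter(tokenize(request))
    q = Counter(tokenize(question))
    shared = set(req) & set(q)
    if not shared:
        return 0.0
    # Normalize by the question's vector norm so verbose questions
    # are not favored simply for being long.
    dot = sum(req[t] * q[t] for t in shared)
    return dot / math.sqrt(sum(v * v for v in q.values()))

def rank_questions(request: str, candidates: list[str]) -> list[str]:
    """Return candidates sorted from most to least relevant."""
    return sorted(candidates, key=lambda c: overlap_score(request, c), reverse=True)

if __name__ == "__main__":
    request = "Tell me about jaguar"
    candidates = [
        "Are you interested in the jaguar animal or the Jaguar car brand?",
        "Would you like to know the weather forecast?",
        "Do you want the history of the Jaguar car company?",
    ]
    for question in rank_questions(request, candidates):
        print(question)

Running the sketch ranks the two jaguar-related questions above the irrelevant one; in the actual challenge such rankings are scored offline in Stage 1, with top systems then evaluated by human annotators.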