Is ChatGPT a Good Multi-Party Conversation Solver?

CH Tan, JC Gu, ZH Ling - arXiv preprint arXiv:2310.16301, 2023 - arxiv.org
Large Language Models (LLMs) have emerged as influential instruments within the realm of natural language processing; nevertheless, their capacity to handle multi-party conversations (MPCs) -- a scenario marked by the presence of multiple interlocutors involved in intricate information exchanges -- remains uncharted. In this paper, we delve into the potential of generative LLMs such as ChatGPT and GPT-4 within the context of MPCs. An empirical analysis is conducted to assess the zero-shot learning capabilities of ChatGPT and GPT-4 by subjecting them to evaluation across three MPC datasets that encompass five representative tasks. The findings reveal that ChatGPT's performance on a number of evaluated MPC tasks leaves much to be desired, whilst GPT-4's results portend a promising future. Additionally, we endeavor to bolster performance through the incorporation of MPC structures, encompassing both speaker and addressee architecture. This study provides an exhaustive evaluation and analysis of applying generative LLMs to MPCs, casting a light upon the conception and creation of increasingly effective and robust MPC agents. Concurrently, this work underscores the challenges implicit in the utilization of LLMs for MPCs, such as deciphering graphical information flows and generating stylistically consistent responses.
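As an illustration of the kind of zero-shot setup the abstract describes, the sketch below serializes a multi-party conversation with explicit speaker and addressee annotations into a single prompt and sends it to a chat model via the OpenAI Python SDK. The data fields, prompt wording, and helper names are hypothetical, not taken from the paper; the actual prompts and evaluation protocol are detailed in the paper itself.

```python
# Illustrative sketch (not the paper's actual prompts): serializing a
# multi-party conversation (MPC) with explicit speaker/addressee structure
# into a prompt for zero-shot evaluation of a chat LLM.
from openai import OpenAI

def build_mpc_prompt(utterances, query_speaker):
    """Render an MPC as text, marking who speaks to whom in each turn."""
    lines = ["The following is a multi-party conversation."]
    for u in utterances:
        # Each utterance carries hypothetical speaker, addressee, and text fields.
        lines.append(f'{u["speaker"]} (to {u["addressee"]}): {u["text"]}')
    lines.append(
        f"Generate the next utterance by {query_speaker}, and state which "
        f"participant it addresses."
    )
    return "\n".join(lines)

if __name__ == "__main__":
    conversation = [
        {"speaker": "A", "addressee": "B", "text": "Did the nightly build pass?"},
        {"speaker": "B", "addressee": "A", "text": "No, two unit tests failed."},
        {"speaker": "C", "addressee": "B", "text": "Which module were they in?"},
    ]
    prompt = build_mpc_prompt(conversation, query_speaker="B")

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

The key design point this sketch highlights is the one the abstract raises: making the speaker and addressee structure explicit in the input, rather than presenting the dialogue as an undifferentiated sequence of turns.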