Language models are multilingual chain-of-thought reasoners

F Shi, M Suzgun, M Freitag, X Wang, S Srivats… - arXiv preprint arXiv …, 2022 - arxiv.org
We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as …
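The snippet below is a minimal, hypothetical sketch of the kind of evaluation the abstract describes: a chain-of-thought exemplar is prepended to a test question, and the final numeric answer is extracted from the model's completion. The exemplar text, the German test question, and the helper names are illustrative assumptions, not taken from the MGSM release.

```python
import re

# Illustrative few-shot exemplar (hypothetical text, not from MGSM):
# a worked chain-of-thought solution shown to the model before the test question.
EXEMPLAR = (
    "Question: Roger has 5 balls. He buys 2 more cans of 3 balls each. "
    "How many balls does he have now?\n"
    "Step-by-step answer: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the chain-of-thought exemplar to a test question."""
    return f"{EXEMPLAR}\nQuestion: {question}\nStep-by-step answer:"

def extract_answer(completion: str) -> str | None:
    """Take the last number in the model's completion as its final answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return numbers[-1] if numbers else None

# Example with a hypothetical MGSM-style question translated into German.
prompt = build_cot_prompt(
    "Lisa hat 12 Äpfel und verschenkt 4 davon. Wie viele Äpfel bleiben ihr?"
)
print(prompt)
print(extract_answer("12 - 4 = 8. Die Antwort ist 8."))  # -> 8
```

Whether the model answers correctly when the question is in an underrepresented language is exactly what the benchmark measures as scale increases.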
