Mihir Kale and Abhinav Rastogi. 2020b. Text-to-text
pre-training for data-to-text tasks. In Proceedings of
the 13th International Conference on Natural Lan-
guage Generation, pages 97–102.
Mihir Kale and Scott Roy. 2020. Machine translation
pre-training for data-to-text generation – a case study
in Czech. arXiv preprint arXiv:2004.02077.
Jared Kaplan, Sam McCandlish, Tom Henighan,
Tom B Brown, Benjamin Chess, Rewon Child, Scott
Gray, Alec Radford, Jeffrey Wu, and Dario Amodei.
2020. Scaling laws for neural language models.
arXiv preprint arXiv:2001.08361.
Daniel Keysers, Nathanael Schärli, Nathan Scales,
Hylke Buisman, Daniel Furrer, Sergii Kashubin,
Nikola Momchev, Danila Sinopalnikov, Lukasz
Stafiniak, Tibor Tihon, et al. 2020. Measuring com-
positional generalization: A comprehensive method
on realistic data. In International Conference on
Learning Representations.
Najoung Kim and Tal Linzen. 2020. COGS: A compositional
generalization challenge based on semantic
interpretation. In Proceedings of the 2020 Conference
on Empirical Methods in Natural Language Processing
(EMNLP), pages 9087–9105.
Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin
Choi, and Luke Zettlemoyer. 2017. Neural AMR:
Sequence-to-sequence models for parsing and generation.
In Proceedings of the 55th Annual Meeting of
the Association for Computational Linguistics (Volume
1: Long Papers), pages 146–157.
Brenden M Lake. 2019. Compositional generalization
through meta sequence-to-sequence learning. Ad-
vances in Neural Information Processing Systems,
32:9791–9801.
Xintong Li, Symon Stevens-Guille, Aleksandre
Maskharashvili, and Michael White. 2021.
Self-training for compositional neural NLG in
task-oriented dialogue. In Proceedings of the 14th
International Conference on Natural Language
Generation, pages 87–102.
Diego Marcheggiani and Laura Perez-Beltrachini.
2018. Deep graph convolutional encoders for struc-
tured data to text generation. In Proceedings of the
11th International Conference on Natural Language
Generation, pages 1–9.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and
Ryan McDonald. 2020. On faithfulness and factu-
ality in abstractive summarization. In Proceedings
of the 58th Annual Meeting of the Association for
Computational Linguistics, pages 1906–1919.
Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019.
Step-by-step: Separating planning from realization
in neural data-to-text generation. In Proceedings of
the 2019 Conference of the North American Chap-
ter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
and Short Papers), pages 2267–2277.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
Jing Zhu. 2002. BLEU: a method for automatic
evaluation of machine translation. In Proceedings of the
40th Annual Meeting of the Association for
Computational Linguistics, pages 311–318.
Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun
Li, Jinchao Li, Michael Zeng, and Jianfeng Gao.
2020. Few-shot natural language generation for
task-oriented dialog. In Proceedings of the 2020
Conference on Empirical Methods in Natural Lan-
guage Processing: Findings, pages 172–182.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt
Gardner, Christopher Clark, Kenton Lee, and Luke
Zettlemoyer. 2018. Deep contextualized word representations.
In Proceedings of the 2018 Conference of the
North American Chapter of the Association for
Computational Linguistics: Human Language Technologies
(NAACL-HLT).
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J Liu. 2020. Exploring the lim-
its of transfer learning with a unified text-to-text
transformer. Journal of Machine Learning Research,
21(140):1–67.
Jinfeng Rao, Kartikeya Upasani, Anusha Balakrishnan,
Michael White, Anuj Kumar, and Rajen Subba.
2019. A tree-to-sequence model for neural NLG in
task-oriented dialog. In Proceedings of the 12th
International Conference on Natural Language
Generation, pages 95–100.
Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara,
Raghav Gupta, and Pranav Khaitan. 2020. Towards
scalable multi-domain conversational agents: The
schema-guided dialogue dataset. In Proceedings of
the AAAI Conference on Artificial Intelligence,
volume 34(05), pages 8689–8696.
Ehud Reiter and Robert Dale. 2000. Building Natural
Language Generation Systems. Cambridge University
Press.
Henry Scudder. 1965. Probability of error of some
adaptive pattern-recognition machines. IEEE Trans-
actions on Information Theory, 11(3):363–371.
Thibault Sellam, Dipanjan Das, and Ankur Parikh.
2020. BLEURT: Learning robust metrics for text
generation. In Proceedings of the 58th Annual Meeting
of the Association for Computational Linguistics,
pages 7881–7892.
Noam Shazeer and Mitchell Stern. 2018. Adafactor:
Adaptive learning rates with sublinear memory cost.
In International Conference on Machine Learning,
pages 4596–4604. PMLR.
Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu, and
Dietrich Klakow. 2020. Neural data-to-text genera-
tion via jointly learning the segmentation and corre-
spondence. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics,
pages 7155–7165.