The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna Clinciu, Dipanjan Das, Kaustubh D. Dhole, Wanyu Du, Esin Durmus, Ondřej Dušek, Chris Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Rubungo Andre Niyongabo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

166 Citations (Scopus)

Abstract

We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop, in which we invite the entire NLG community to participate.

Original language: English
Title of host publication: GEM 2021 - 1st Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings
Editors: Antoine Bosselut, Esin Durmus, Varun Prashant Gangal, Sebastian Gehrmann, Yacine Jernite, Laura Perez-Beltrachini, Samira Shaikh, Wei Xu
Publisher: Association for Computational Linguistics (ACL)
Pages: 96-120
Number of pages: 25
ISBN (electronic): 9781954085671
State: Published - 2021
Published externally: Yes
Event: 1st Workshop on Natural Language Generation, Evaluation, and Metrics, GEM 2021 - Virtual, Online, Thailand
Duration: 5 Aug 2021 - 6 Aug 2021

Publication series

Name: GEM 2021 - 1st Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings

Conference

Conference: 1st Workshop on Natural Language Generation, Evaluation, and Metrics, GEM 2021
Country/Territory: Thailand
City: Virtual, Online
Period: 5/08/21 - 6/08/21

