TY - CONF
T1 - Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas
AU - Mager, Manuel
AU - Oncevay, Arturo
AU - Ebrahimi, Abteen
AU - Ortega, John
AU - Rios, Annette
AU - Fan, Angela
AU - Gutierrez-Vasques, Ximena
AU - Chiruzzo, Luis
AU - Giménez-Lugo, Gustavo A.
AU - Ramos, Ricardo
AU - Meza Ruiz, Ivan Vladimir
AU - Coto-Solano, Rolando
AU - Palmer, Alexis
AU - Mager, Elisabeth
AU - Chaudhary, Vishrav
AU - Neubig, Graham
AU - Vu, Ngoc Thang
AU - Kann, Katharina
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics
PY - 2021
Y1 - 2021
AB - This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. The shared task featured two independent tracks, and participants submitted machine translation systems for up to 10 indigenous languages. Overall, 8 teams participated with a total of 214 submissions. We provided training sets consisting of data collected from various sources, as well as manually translated sentences for the development and test sets. An official baseline trained on this data was also provided. Team submissions featured a variety of architectures, including both statistical and neural models, and for the majority of languages, many teams were able to improve considerably over the baseline. The best-performing systems achieved scores 12.97 ChrF points higher than the baseline when averaged across languages.
UR - http://www.scopus.com/inward/record.url?scp=85115703569&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85115703569
T3 - Proceedings of the 1st Workshop on Natural Language Processing for Indigenous Languages of the Americas, AmericasNLP 2021
SP - 202
EP - 217
BT - Proceedings of the 1st Workshop on Natural Language Processing for Indigenous Languages of the Americas, AmericasNLP 2021
A2 - Mager, Manuel
A2 - Oncevay, Arturo
A2 - Rios, Annette
A2 - Meza Ruiz, Ivan Vladimir
A2 - Palmer, Alexis
A2 - Neubig, Graham
A2 - Kann, Katharina
PB - Association for Computational Linguistics (ACL)
T2 - 1st Workshop on Natural Language Processing for Indigenous Languages of the Americas, AmericasNLP 2021
Y2 - 11 June 2021
ER -