Revisiting Syllables in Language Modelling and their Application on Low-Resource Machine Translation

Arturo Oncevay, Kervy Dante Rivas Rojas, Liz Karen Chavez Sanchez, Roberto Zariquiey

Research output: Contribution to journal › Conference article › peer review

Abstract

Language modelling and machine translation tasks mostly use subword or character inputs, but syllables are seldom used. Syllables provide shorter sequences than characters, require less specialised extraction rules than morphemes, and their segmentation is not impacted by corpus size. In this study, we first explore the potential of syllables for open-vocabulary language modelling in 21 languages. We use rule-based syllabification methods for six languages and address the rest with hyphenation, which works as a syllabification proxy. With comparable perplexity, we show that syllables outperform characters and other subwords. Moreover, we study the importance of syllables in neural machine translation for an unrelated, low-resource language pair (Spanish–Shipibo-Konibo). In both pairwise and multilingual systems, syllables outperform unsupervised subwords and even morphological segmentation methods when translating into a highly synthetic language with a transparent orthography (Shipibo-Konibo). Finally, we perform a human evaluation and discuss limitations and opportunities.
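The abstract notes that hyphenation can serve as a proxy for syllabification when no rule-based syllabifier is available. A minimal sketch of this idea, using the Pyphen hyphenation library; this is not the paper's own code, and the language code and sample words are illustrative assumptions:

```python
# Sketch: hyphenation as a syllabification proxy (not the paper's code).
# Assumes the Pyphen library is installed (pip install pyphen);
# sample words are illustrative, not from the paper's data.
import pyphen

dic = pyphen.Pyphen(lang="es")  # Spanish hyphenation patterns

for word in ["lenguaje", "computadora", "sistema"]:
    # Insert hyphens at legal hyphenation points and split on them.
    # Hyphenation patterns approximate syllable boundaries, though they
    # may omit breaks near word edges (minimum prefix/suffix lengths).
    syllables = dic.inserted(word).split("-")
    print(word, "->", syllables)
```

In this setup, each word is replaced by its hyphenation-derived segments before training the language model or translation system, yielding sequences shorter than character-level input without language-specific morphological rules.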

Original language: English
Pages (from-to): 4258-4267
Number of pages: 10
Publication: Proceedings - International Conference on Computational Linguistics, COLING
Volume: 29
Issue number: 1
Publication status: Published - 2022
Event: 29th International Conference on Computational Linguistics, COLING 2022 - Gyeongju, Republic of Korea
Duration: 12 Oct 2022 - 17 Oct 2022
