Conference paper · Year: 2018

A comparison of character neural language model and bootstrapping for language identification in multilingual noisy texts

Abstract

This paper examines the effect of including background knowledge in the form of a character-level pre-trained neural language model (LM), and of data bootstrapping, to overcome the problem of unbalanced, limited resources. As a test case, we explore the task of language identification in mixed-language, short, non-edited texts involving an under-resourced language, namely Algerian Arabic, for which both labelled and unlabelled data are limited. We compare the performance of two traditional machine learning methods and a deep neural network (DNN) model. The results show that, overall, the DNN performs better on labelled data for the majority categories and struggles with the minority ones. While the effect of the untokenised and unlabelled data encoded as an LM differs for each category, bootstrapping improves the performance of all systems and all categories. These methods are language independent and could be generalised to other under-resourced languages for which a small labelled dataset and a larger unlabelled dataset are available.
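
As an illustration of the data bootstrapping (self-training) idea mentioned in the abstract, the sketch below iteratively adds confidently self-labelled examples from an unlabelled pool to a small labelled training set. The classifier choice (a character n-gram model from scikit-learn), the function names, and the confidence threshold are illustrative assumptions and do not reproduce the authors' implementation.

    # Minimal, hypothetical sketch of bootstrapping for language identification.
    # The character n-gram classifier stands in for a sub-word representation;
    # it is not the setup described in the paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def bootstrap(labelled_texts, labels, unlabelled_texts, rounds=3, threshold=0.9):
        """Iteratively add confidently self-labelled texts to the training set."""
        texts, tags = list(labelled_texts), list(labels)
        pool = list(unlabelled_texts)
        for _ in range(rounds):
            clf = make_pipeline(
                TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),
                LogisticRegression(max_iter=1000),
            )
            clf.fit(texts, tags)
            if not pool:
                break
            probs = clf.predict_proba(pool)
            best = probs.max(axis=1)
            preds = clf.classes_[probs.argmax(axis=1)]
            keep = best >= threshold  # only keep high-confidence predictions
            texts += [t for t, k in zip(pool, keep) if k]
            tags += [p for p, k in zip(preds, keep) if k]
            pool = [t for t, k in zip(pool, keep) if not k]
        return clf

    # Example usage with toy data (labels are illustrative):
    # clf = bootstrap(["hello there", "bonjour"], ["ENG", "FRN"], ["salut", "hi all"])
    # clf.predict(["hey"])

The key design choice in such a loop is the confidence threshold: a higher threshold adds fewer but cleaner pseudo-labels, which matters most for the minority categories discussed in the abstract.
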
Main file
SCLeM-2018_Adouane.pdf (215.21 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

cea-04572396, version 1 (10-05-2024)

Licence

Public domain

Identifiers

Cite

Wafia Adouane, Simon Dobnik, Jean-Philippe Bernardy, Nasredine Semmar. A comparison of character neural language model and bootstrapping for language identification in multilingual noisy texts. Second Workshop on Subword/Character LEvel Models (SCLeM 2018), Association for Computational Linguistics, Jun 2018, New Orleans, United States. pp.22-31, ⟨10.18653/v1/W18-1203⟩. ⟨cea-04572396⟩