On the hidden negative transfer in sequential Transfer Learning for domain adaptation from news to tweets
Abstract
Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge. Nevertheless, when transfer is performed between less related domains, it can cause negative transfer, i.e. it hurts transfer performance. In this work, we shed light on the hidden negative transfer that occurs when transferring from the news domain to the tweets domain, through quantitative and qualitative analyses. Our experiments on three NLP tasks, Part-Of-Speech tagging, Chunking, and Named Entity Recognition, reveal interesting insights.
Origin | Publisher files allowed on an open archive
---|---
License | Public domain