Conference paper, year: 2020

Investigating Self-supervised Pre-training for End-to-end Speech Translation

Abstract

Self-supervised learning from raw speech has been proven beneficial to improve automatic speech recognition (ASR). We investigate here its impact on end-to-end automatic speech translation (AST) performance. We use a contrastive predictive coding (CPC) model pre-trained from unlabeled speech as a feature extractor for a downstream AST task. We show that self-supervised pre-training is particularly efficient in low resource settings and that fine-tuning CPC models on the AST training data further improves performance. Even in higher resource settings, ensembling AST models trained with filter-bank and CPC representations leads to near state-of-the-art models without using any ASR pre-training. This might be particularly beneficial when one needs to develop a system that translates from speech in a language with poorly standardized orthography or even from speech in an unwritten language.

Index Terms: self-supervised learning from speech, automatic speech translation, end-to-end models, low resource settings.
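The sketch below illustrates the contrastive predictive coding objective mentioned in the abstract: a convolutional encoder maps raw waveform samples to latent frames, an autoregressive network summarizes them into context vectors, and an InfoNCE loss trains the context to discriminate true future latents from negatives. It is a minimal PyTorch illustration, not the authors' implementation; the module sizes, number of prediction steps, and class names are assumptions made for readability. In the paper's setup, the resulting (frozen or fine-tuned) CPC representations replace filter-bank features as input to the downstream AST encoder.

```python
# Minimal CPC sketch (illustrative; dimensions and names are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CPCModel(nn.Module):
    def __init__(self, z_dim=256, c_dim=256, n_steps=12):
        super().__init__()
        # Strided convolutions turn raw waveform samples into latent frames z_t.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, z_dim, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(z_dim, z_dim, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(z_dim, z_dim, kernel_size=4, stride=2), nn.ReLU(),
        )
        # Autoregressive network summarizes past latents into a context c_t.
        self.ar = nn.GRU(z_dim, c_dim, batch_first=True)
        # One linear predictor per future step k, mapping c_t to a guess of z_{t+k}.
        self.predictors = nn.ModuleList(
            [nn.Linear(c_dim, z_dim) for _ in range(n_steps)]
        )
        self.n_steps = n_steps

    def forward(self, wav):                       # wav: (batch, samples)
        z = self.encoder(wav.unsqueeze(1))        # (batch, z_dim, frames)
        z = z.transpose(1, 2)                     # (batch, frames, z_dim)
        c, _ = self.ar(z)                         # (batch, frames, c_dim)
        return z, c

    def cpc_loss(self, z, c):
        # InfoNCE: for each step k, the true future latent must score higher
        # than negatives taken from other time steps of the same utterance.
        batch, frames, _ = z.shape
        loss = 0.0
        for k, predictor in enumerate(self.predictors, start=1):
            if frames - k <= 0:
                break
            pred = predictor(c[:, :-k])            # (batch, frames-k, z_dim)
            target = z[:, k:]                      # (batch, frames-k, z_dim)
            # Dot-product scores between every prediction and every candidate frame.
            logits = torch.einsum("btd,bsd->bts", pred, target)
            labels = torch.arange(frames - k, device=z.device).expand(batch, -1)
            loss = loss + F.cross_entropy(
                logits.reshape(-1, frames - k), labels.reshape(-1)
            )
        return loss / self.n_steps


# Usage on dummy data (4 one-second clips at 16 kHz):
model = CPCModel()
wav = torch.randn(4, 16000)
z, c = model(wav)
model.cpc_loss(z, c).backward()
```

After pre-training on unlabeled speech, the context vectors c (rather than the loss) are what matters downstream: they are extracted as frame-level features, optionally fine-tuned together with the AST model, and fed to the speech-translation encoder in place of filter-bank features.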
Main file: Paper_Template_for_INTERSPEECH_2019-3.pdf (461.68 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02962186, version 1 (09-10-2020)

Identifiers

  • HAL Id: hal-02962186, version 1

Cite

Ha Nguyen, Fethi Bougares, Natalia Tomashenko, Yannick Estève, Laurent Besacier. Investigating Self-supervised Pre-training for End-to-end Speech Translation. Interspeech 2020, Oct 2020, Shanghai (virtual conference), China. ⟨hal-02962186⟩