Conference paper, Year: 2023

Comparing Self-Supervised Pre-Training and Semi-Supervised Training for Speech Recognition in Languages with Weak Language Models

Abstract

This paper investigates the potential of improving a hybrid automatic speech recognition model trained on 10 hours of transcribed data with 200 hours of untranscribed data in low-resource languages. First, we compare baseline methods of cross-lingual transfer with MFCC features and with features extracted by the multilingual self-supervised model XLSR-53. We then compare two approaches that can leverage the untranscribed data: semi-supervised training with LF-MMI and continued self-supervised pre-training of XLSR-53. Our results on well-resourced English broadcast data derived from MGB show that the two methods achieve 18% and 27% relative improvements over the baseline, respectively. On the low-resource South African Soap Opera dataset, the relative improvement with semi-supervised training is only 3% because of the inherently weak language model, whereas continued pre-training achieves an 8.6% relative improvement because it does not rely on any external information.
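As a rough illustration of the two baseline feature types the abstract compares (MFCCs versus representations from the multilingual self-supervised XLSR-53 model), the sketch below extracts both from a single utterance. It is a minimal sketch assuming torchaudio, HuggingFace transformers, and the public facebook/wav2vec2-large-xlsr-53 checkpoint; the file name and feature dimensions are illustrative assumptions, and the paper's hybrid LF-MMI training, semi-supervised training, and continued pre-training pipelines are not reproduced here.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Model

# Load a (hypothetical) utterance and resample it; XLSR-53 expects 16 kHz audio.
wav, sr = torchaudio.load("utterance.wav")
wav = torchaudio.functional.resample(wav, sr, 16_000)

# Baseline features: MFCCs (40 coefficients is an illustrative choice, not the paper's exact configuration).
mfcc = torchaudio.transforms.MFCC(sample_rate=16_000, n_mfcc=40)(wav)
print("MFCC features:", mfcc.shape)               # (channels, n_mfcc, frames)

# Self-supervised features: hidden states of the multilingual XLSR-53 encoder.
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53").eval()
inputs = (wav - wav.mean()) / (wav.std() + 1e-7)  # zero-mean, unit-variance input, as in the wav2vec 2.0 recipe
with torch.no_grad():
    feats = model(inputs).last_hidden_state
print("XLSR-53 features:", feats.shape)           # (1, frames, 1024) for a mono input
```

In the hybrid setup described by the abstract, such frame-level features would then feed an acoustic model trained with LF-MMI; the two untranscribed-data strategies differ in whether the extra audio is used to generate pseudo-labels for that training or to continue pre-training the XLSR-53 encoder itself.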
Main file

lamyeemui23_interspeech.pdf (222.82 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-04227249, version 1 (03-10-2023)

Identifiers

HAL Id: hal-04227249
DOI: 10.21437/interspeech.2023-1802

Cite

Léa-Marie Lam-Yee-Mui, Lucas Ondel Yang, Ondřej Klejch. Comparing Self-Supervised Pre-Training and Semi-Supervised Training for Speech Recognition in Languages with Weak Language Models. INTERSPEECH 2023, Aug 2023, Dublin, Ireland. pp.87-91, ⟨10.21437/interspeech.2023-1802⟩. ⟨hal-04227249⟩