Conference Papers, Year: 2024

A dual task learning approach to fine-tune a multilingual semantic speech encoder for Spoken Language Understanding

Abstract

Self-Supervised Learning (SSL) is widely used to efficiently represent speech for Spoken Language Understanding (SLU), gradually replacing conventional approaches. Meanwhile, textual SSL models have been proposed to encode language-agnostic semantics. The SAMU-XLSR framework employs this semantic information to enrich multilingual speech representations. A recent study investigated SAMU-XLSR's in-domain semantic enrichment by specializing it on downstream transcriptions, leading to state-of-the-art results on a challenging SLU task. Our interest lies in the loss of multilingual performance and the lack of task-specific semantic training induced by such specialization on closely related languages without any SLU involvement. We also consider SAMU-XLSR's loss of its initial cross-lingual abilities caused by a separate SLU fine-tuning stage. This paper therefore proposes a dual task learning approach to improve SAMU-XLSR's semantic enrichment while considering distant languages for multilingual and language portability experiments.
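To make the idea of dual task fine-tuning concrete, the sketch below shows one generic way a shared speech encoder can be trained jointly on a sentence-level semantic alignment objective and a token-level SLU objective. It is a minimal illustration under assumptions of our own: the DualTaskModel class, the feature and embedding dimensions, the CTC-based SLU loss, and the weighting factor alpha are hypothetical choices for demonstration and do not reproduce the actual SAMU-XLSR or SLU training setup described in the paper.

# Hypothetical sketch of dual-task fine-tuning: a shared speech encoder is
# optimized jointly on (1) a semantic alignment loss against sentence-level
# text embeddings (SAMU-XLSR-style enrichment) and (2) a token-level SLU loss
# (CTC here). All names, dimensions, and the weight `alpha` are illustrative.
import torch
import torch.nn as nn

class DualTaskModel(nn.Module):
    def __init__(self, feat_dim=1024, sem_dim=768, num_slu_tokens=100):
        super().__init__()
        # Stand-in for a pretrained speech encoder; here a small Transformer
        # over precomputed frame-level features.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Semantic head: pooled utterance embedding projected into the
        # sentence-embedding space of a text encoder.
        self.sem_proj = nn.Linear(feat_dim, sem_dim)
        # SLU head: frame-level logits for CTC over semantic tags and characters.
        self.slu_head = nn.Linear(feat_dim, num_slu_tokens)

    def forward(self, feats):
        hidden = self.encoder(feats)                 # (B, T, feat_dim)
        sem_emb = self.sem_proj(hidden.mean(dim=1))  # (B, sem_dim)
        slu_logits = self.slu_head(hidden)           # (B, T, num_slu_tokens)
        return sem_emb, slu_logits

def dual_task_loss(sem_emb, text_emb, slu_logits, targets,
                   input_lengths, target_lengths, alpha=0.5):
    # Semantic loss: maximize cosine similarity with the text embedding.
    sem_loss = 1.0 - nn.functional.cosine_similarity(sem_emb, text_emb).mean()
    # SLU loss: CTC over the frame-level logits.
    log_probs = slu_logits.log_softmax(dim=-1).transpose(0, 1)  # (T, B, C)
    ctc_loss = nn.functional.ctc_loss(log_probs, targets,
                                      input_lengths, target_lengths, blank=0)
    return alpha * sem_loss + (1.0 - alpha) * ctc_loss

if __name__ == "__main__":
    model = DualTaskModel()
    feats = torch.randn(2, 50, 1024)          # dummy speech features
    text_emb = torch.randn(2, 768)            # dummy sentence embeddings
    targets = torch.randint(1, 100, (2, 10))  # dummy SLU token targets
    sem_emb, slu_logits = model(feats)
    loss = dual_task_loss(sem_emb, text_emb, slu_logits, targets,
                          torch.full((2,), 50), torch.full((2,), 10))
    loss.backward()
    print(float(loss))

A single weighted loss lets gradients from both objectives update the shared encoder in one backward pass, which is the core idea behind combining semantic enrichment with SLU training rather than running them as separate fine-tuning stages.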
Main file: Interspeech_2024_Dual_specialization-3.pdf (464.08 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04615074, version 1 (18-06-2024)

Identifiers

  • HAL Id: hal-04615074, version 1

Cite

Gaëlle Laperrière, Sahar Ghannay, Bassam Jabaian, Yannick Estève. A dual task learning approach to fine-tune a multilingual semantic speech encoder for Spoken Language Understanding. Interspeech 2024, Sep 2024, Kos, Greece. ⟨hal-04615074⟩