
Serialized Output Training by Learned Dominance

  • Ying Shi
  • Lantian Li
  • Shi Yin
  • Dong Wang*
  • Jiqing Han*
  • *Corresponding author for this work

Affiliations

  • School of Computer Science and Technology, Harbin Institute of Technology
  • Beijing University of Posts and Telecommunications
  • Huawei Technologies Co., Ltd.
  • Tsinghua University

Research output: Contribution to journal › Conference article › peer-review

Abstract

Serialized Output Training (SOT) has achieved state-of-the-art performance in multi-talker speech recognition by sequentially decoding the speech of individual speakers. To address the challenging label-permutation issue, prior methods have relied on either Permutation Invariant Training (PIT) or the time-based First-In-First-Out (FIFO) rule. This study presents a model-based serialization strategy that incorporates an auxiliary module into the Attention Encoder-Decoder architecture; the module autonomously learns the factors that determine the order of the speech components in the output sequence for multi-talker speech. Experiments on the LibriSpeech and LibriMix databases show that our approach significantly outperforms the PIT and FIFO baselines in both 2-mix and 3-mix scenarios. Further analysis shows that the serialization module identifies dominant speech components in a mixture by factors including loudness and gender, and orders the speech components by their dominance scores.
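To illustrate the idea, the following is a minimal PyTorch sketch of how a learned dominance score could order per-speaker reference transcripts before SOT-style concatenation. It is not taken from the paper: the module name DominanceScorer, the helper serialize_targets, the pooling choice, and all shapes and token ids are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class DominanceScorer(nn.Module):
    """Hypothetical auxiliary module: predicts one dominance score per
    speaker from the shared encoder output (a sketch, not the authors'
    implementation)."""

    def __init__(self, enc_dim: int, num_speakers: int):
        super().__init__()
        self.proj = nn.Linear(enc_dim, num_speakers)

    def forward(self, enc_out: torch.Tensor) -> torch.Tensor:
        # enc_out: (batch, time, enc_dim) from the AED encoder.
        pooled = enc_out.mean(dim=1)   # average-pool over time
        return self.proj(pooled)       # (batch, num_speakers) dominance logits

def serialize_targets(scores: torch.Tensor, target_seqs, sep_token: int):
    """Order each utterance's per-speaker transcripts by descending
    dominance score and join them with a speaker-change token, SOT-style."""
    order = torch.argsort(scores, dim=-1, descending=True)
    serialized = []
    for b, perm in enumerate(order):
        merged = []
        for i, spk in enumerate(perm.tolist()):
            if i > 0:
                merged.append(sep_token)  # speaker-change symbol between speakers
            merged.extend(target_seqs[b][spk])
        serialized.append(merged)
    return serialized

# Toy usage with dummy shapes and token ids (all values illustrative).
scorer = DominanceScorer(enc_dim=256, num_speakers=2)
enc_out = torch.randn(4, 100, 256)              # (batch, time, enc_dim)
scores = scorer(enc_out)
refs = [[[5, 6], [7, 8, 9]] for _ in range(4)]  # per-speaker token-id lists
labels = serialize_targets(scores, refs, sep_token=2)
```

Under this reading, the ordering induced by the dominance scores replaces PIT's permutation search and FIFO's time-based rule: the serialized training labels follow whatever order the scorer assigns, and the scorer can be trained jointly with the recognition objective.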

Original language: English
Pages (from-to): 712-716
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
State: Published - 2024
Externally published: Yes
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sep 2024 - 5 Sep 2024

Keywords

  • dominant speech
  • multi-talker speech recognition
  • serialized output training
