Interspeech 2023 Special Session
“SLUE 2023: Low-resource Spoken Language Understanding Evaluation Challenge”
Thanks to shared datasets and benchmarks, impressive advances have been made in the field of speech processing. Historically, these benchmarks have centered on lower-level tasks such as automatic speech recognition (ASR) and speaker identification.
There is increasing demand for higher-level spoken language understanding (SLU) tasks, including end-to-end approaches, but few labeled datasets are available to tackle them. Moreover, the existing datasets tend to be relatively small and often consist of synthetic data. Recent research shows that it is possible to pre-train generic representations and then fine-tune them for various tasks with only a small amount of labeled data.
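As an illustration of that recipe, the following minimal sketch fine-tunes a pre-trained wav2vec 2.0 encoder on a handful of labeled utterances for a three-class sentiment task using the Hugging Face transformers API. The checkpoint name and the toy data are placeholders for illustration only; they are not part of the SLUE baselines.

    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

    # Pre-trained self-supervised encoder; the checkpoint name is illustrative.
    ckpt = "facebook/wav2vec2-base"
    feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(ckpt)
    model = Wav2Vec2ForSequenceClassification.from_pretrained(ckpt, num_labels=3)

    # Tiny toy batch standing in for a small labeled SLU dataset:
    # 16 kHz waveforms plus sentiment labels (0/1/2).
    waveforms = [torch.randn(16000).numpy() for _ in range(4)]
    labels = torch.tensor([0, 1, 2, 1])

    inputs = feature_extractor(waveforms, sampling_rate=16000,
                               return_tensors="pt", padding=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    model.train()
    for step in range(3):  # a few updates, just to show the loop
        outputs = model(**inputs, labels=labels)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        print(f"step {step}: loss = {outputs.loss.item():.3f}")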
For this special session, we provide the Spoken Language Understanding Evaluation (SLUE) benchmark suite. SLUE (Phase 1) includes annotation for ASR, named entity recognition (NER), and sentiment analysis, along with a toolkit containing pre-processing and fine-tuning scripts for baseline models.
While we invite general submissions on this topic, the 2nd special session in the low-resource SLU series incorporates a unique challenge, SLUE 2023, which focuses on named entity recognition using the SLUE-VoxPopuli dataset under resource constraints; see the SLUE 2023 Challenge Resources section below.
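For orientation, the SLUE NER task is scored with a micro-averaged F1 over unordered (entity tag, entity phrase) pairs. The sketch below computes that style of metric on a toy example; it is illustrative only and is not the official challenge scorer, so please use the evaluation provided with the challenge resources for submissions.

    from collections import Counter

    def ner_micro_f1(predictions, references):
        """Micro-F1 over unordered (tag, phrase) pairs, one list per utterance."""
        tp = fp = fn = 0
        for pred, ref in zip(predictions, references):
            pred_counts, ref_counts = Counter(pred), Counter(ref)
            overlap = sum((pred_counts & ref_counts).values())
            tp += overlap
            fp += sum(pred_counts.values()) - overlap
            fn += sum(ref_counts.values()) - overlap
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    # Toy example: one hypothesis vs. one reference utterance's entities.
    hyp = [[("PLACE", "dublin"), ("ORG", "european parliament")]]
    ref = [[("PLACE", "dublin"), ("ORG", "european commission")]]
    print(ner_micro_f1(hyp, ref))  # 0.5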
We also invite contributions on any relevant work on low-resource SLU problems, including, but not limited to:
Training/fine-tuning approaches using self-/semi-supervised models for SLU tasks
Comparisons between pipeline and end-to-end SLU systems
Self-/semi-supervised learning approaches focusing on SLU
Multi-task/transfer/student-teacher learning focusing on SLU tasks
Theoretical or empirical studies of low-resource SLU problems
SLUE 2023 Challenge Resources
Please check here
General Resources
For this special session, we provide support for several benchmark tasks using the new Spoken Language Understanding Evaluation (SLUE) benchmark suite (https://arxiv.org/abs/2111.10367). SLUE includes annotation for ASR, NER, and sentiment analysis. We also provide a toolkit with pre-processing and fine-tuning scripts for baseline models. Submissions are not required to use SLUE, but we offer it as a well-defined experimental setting for low-resource SLU.
SLUE Dataset: available through the SLUE Toolkit
SLUE Toolkit: https://github.com/asappresearch/slue-toolkit
SLUE Website: https://asappresearch.github.io/slue-toolkit
Note that there is no restriction on the datasets/benchmarks used for this special session. Other datasets/benchmarks we recommend include:
SUPERB (limited to SLU-related tasks)
Paper submission
Papers for the Interspeech special session must be submitted following the same schedule and procedure as regular INTERSPEECH 2023 papers. Submitted papers will undergo the same review process by anonymous and independent reviewers.
Submission URL: (TBA)
Important dates
(The SLUE-VoxPopuli v0.2 dataset has been available through the SLUE Toolkit since 2022.)
Participant registration opens: Jan. 01, 2023
Result submission deadline: Feb. 20, 2023
Result announcement (rankings): Feb. 22, 2023
Interspeech 2023 paper submission deadline: Mar. 01, 2023
Interspeech 2023 paper update deadline: Mar. 08, 2023
Interspeech 2023 @ Dublin, Ireland: Aug. 20-24, 2023