Welcome to the SLUE Benchmark¶
╓▄▄▄
▐██╫██
██▌ ██─ ╓████
▓█▀█▓ ██─ ██▌ ██▀╟██ ▄▄╗▄▄
╓██─ ▀██ ▐██ ▐██ ▐██ ██▄ ██▌ ╟██ ▄▄▄═╗▄
╙▀▀▀╙ ██▌ ██▌ ██ ██▌ ╟██ ▐██ ▀██▄██▀ ─
▀███▀ ██▌ ██ ╙███▀─
▐SLUE▌
╙▀▀▀▀
We are pleased to present the Spoken Language Understanding Evaluation (SLUE) benchmark, aimed at accomplishing the following objectives:
Track research progress on multiple SLU tasks
Facilitate the development of pre-trained representations by providing fine-tuning and evaluation sets for a variety of SLU tasks
Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use
For this benchmark, we provide:
New annotation of publicly available, natural speech data for training and evaluation
A benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks (a small metric sketch follows this list)
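To give a flavor of what the evaluation code computes, here is a minimal, self-contained sketch of one common SLUE-style metric, word error rate (WER) for ASR. It is illustrative only and is not the slue-toolkit implementation; the function name and the example strings are our own.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length (illustrative only)."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("play the next song", "play next song please"))  # 0.5

The actual SLUE tasks use task-specific metrics (e.g., F1 for named entity recognition); see the Toolkit and Paper for the official implementations.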
Refer to Toolkit and Paper for more details.
SLUE committees¶
Suwon Shon - ASAPP
Felix Wu - ASAPP
Ankita Pasad - TTIC
Chyi-Jiunn Lin - NTU
Siddhant Arora - CMU
Roshan Sharma - CMU
Wei-Lun Wu - NTU
Hung-Yi Lee - NTU
Karen Livescu - TTIC
Shinji Watanabe - CMU
Questions and issues¶
For open discussion, we use the GitHub issues page as our official Q&A board.
The SLUE logo¶
The SLUE logo does not depict a snake; it is a time-domain audio waveform drawn with ASCII characters. We believe this text-drawn waveform appropriately represents what SLUE stands for, so we used a generator to create the image and tweaked it slightly.
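For the curious, a logo like this can be produced by splitting the signal into columns, taking the peak amplitude in each column, and stacking block characters to the corresponding height. The sketch below is illustrative only and is not the generator we actually used; the function name and the synthetic test tone are our own.

import numpy as np

def waveform_to_ascii(samples, width=60, height=8):
    # Illustrative sketch (not the actual generator used for the SLUE logo)
    # Peak amplitude per column after splitting the signal into `width` chunks
    chunks = np.array_split(np.abs(samples), width)
    peaks = np.array([c.max() if c.size else 0.0 for c in chunks])
    peaks = peaks / (peaks.max() if peaks.max() > 0 else 1.0)  # normalise to [0, 1]
    levels = np.round(peaks * height).astype(int)              # bar height per column
    rows = []
    for row in range(height, 0, -1):
        rows.append("".join("█" if level >= row else " " for level in levels))
    return "\n".join(rows)

# Example: a decaying 440 Hz tone sampled at 16 kHz
t = np.linspace(0.0, 1.0, 16000)
tone = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
print(waveform_to_ascii(tone))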