Welcome to the SLUE Benchmark




                         
                        ╫█
                        █▌ ██─   
               ▓    ─ ▌   █    ╗▄
             ─ █  █  █  █  ██▄  ▌ █   ▄    
          ╙    ▌ ▌   ██  ▌  █ █   ██▀    
                    ██▀    ▌  ██    
                            SLUE
                              ▀▀



We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to:

  • Track research progress on multiple SLU tasks

  • Facilitate the development of pre-trained representations by providing fine-tuning and evaluation sets for a variety of SLU tasks

  • Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use.

For this benchmark, we provide new annotations of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train baseline models, and evaluate performance on SLUE tasks. Refer to the Toolkit and Paper pages for more details.
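
As a rough sketch of what task evaluation can look like, the example below computes word error rate (WER), a standard metric for speech recognition. The wer() helper and the toy transcripts are illustrative only and are not the SLUE toolkit's actual API or data; use the toolkit's released scripts for real evaluation.

    # Hypothetical sketch: word error rate (WER) computation of the kind a
    # benchmark suite might run. Not the SLUE toolkit's actual API.

    def wer(reference: str, hypothesis: str) -> float:
        """WER = (substitutions + deletions + insertions) / number of reference words."""
        ref, hyp = reference.split(), hypothesis.split()
        # Standard dynamic-programming edit distance over words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    if __name__ == "__main__":
        # Illustrative transcripts only; real SLUE evaluation uses the released datasets.
        pairs = [
            ("the meeting starts at noon", "the meeting start at noon"),
            ("please call the committee", "please call committee"),
        ]
        scores = [wer(r, h) for r, h in pairs]
        print(f"corpus-average WER: {sum(scores) / len(scores):.3f}")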



ASAPP · Toyota Technological Institute at Chicago · Cornell University


SLUE committee

Suwon Shon - ASAPP
Felix Wu - ASAPP
Pablo Brusco - ASAPP
Kyu J. Han - ASAPP
Karen Livescu - TTI at Chicago
Ankita Pasad - TTI at Chicago
Yoav Artzi - Cornell University

Questions and issues

For open discussion, we will use the GitHub issues page as our official Q&A board.
For other general questions, please email us at slue-committee “AT” googlegroups.com

The SLUE logo

Note that the SLUE logo is not a snake, but an audio waveform in the time domain drawn with ASCII characters. We believe this text-drawn waveform represents the SLUE project well. We used this generator to produce the waveform and slightly modified it.
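
If you are curious how a waveform can be drawn with text, the short sketch below is a minimal, hypothetical example (a decaying sine rendered with block characters); it is not the generator we used for the logo.

    # Minimal, hypothetical sketch of drawing a waveform with ASCII characters.
    # This is not the generator used for the SLUE logo; it only illustrates the idea.
    import math

    WIDTH, HEIGHT = 60, 9          # character grid for the drawing
    samples = [math.sin(2 * math.pi * 3 * x / WIDTH) * math.exp(-2 * x / WIDTH)
               for x in range(WIDTH)]                  # a decaying sine as fake audio

    grid = [[" "] * WIDTH for _ in range(HEIGHT)]
    mid = HEIGHT // 2
    for x, s in enumerate(samples):
        y = mid - round(s * mid)                       # map amplitude to a row
        y = min(max(y, 0), HEIGHT - 1)
        grid[y][x] = "█"

    print("\n".join("".join(row) for row in grid))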