Dynaboard: An evaluation-as-a-service platform for holistic next-generation benchmarking

Z Ma, K Ethayarajh, T Thrush, S Jain… - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
Abstract
We introduce Dynaboard, an evaluation-as-a-service framework for hosting benchmarks and conducting holistic model comparison, integrated with the Dynabench platform. Our platform evaluates NLP models directly instead of relying on self-reported metrics or predictions on a single dataset. Under this paradigm, models are submitted to be evaluated in the cloud, circumventing the issues of reproducibility, accessibility, and backwards compatibility that often hinder benchmarking in NLP. This allows users to interact with uploaded models in real time to assess their quality, and permits the collection of additional metrics such as memory use, throughput, and robustness, which, despite their importance to practitioners, have traditionally been absent from leaderboards. On each task, models are ranked according to the Dynascore, a novel utility-based aggregation of these statistics, which users can customize to better reflect their preferences by placing more or less weight on a particular axis of evaluation or dataset. As state-of-the-art NLP models push the limits of traditional benchmarks, Dynaboard offers a standardized solution for a more diverse and comprehensive evaluation of model quality.
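The Dynascore itself is only characterized here at a high level. As a rough illustration of how a user-customizable aggregation over evaluation axes could re-rank models, the following Python sketch uses a simple weighted average. This is an assumption made for illustration only, not the utility-based formulation defined in the paper; all model names, metric values, and weights below are hypothetical.

```python
# Illustrative sketch of a customizable aggregation over evaluation axes,
# in the spirit of (but not identical to) the Dynascore. The paper's actual
# score is utility-based; here we simply take a weighted average of
# per-axis scores assumed to be normalized to [0, 1].

def aggregate_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-axis scores under user-chosen weights."""
    total_weight = sum(weights.values())
    return sum(weights[axis] * metrics[axis] for axis in weights) / total_weight

# Hypothetical per-model metrics (higher is better on every axis).
models = {
    "model_a": {"accuracy": 0.91, "robustness": 0.78, "throughput": 0.60, "memory": 0.55},
    "model_b": {"accuracy": 0.88, "robustness": 0.82, "throughput": 0.85, "memory": 0.80},
}

# Two user preference profiles: one dominated by accuracy, one balanced.
accuracy_focused = {"accuracy": 4.0, "robustness": 1.0, "throughput": 0.5, "memory": 0.5}
balanced = {"accuracy": 1.0, "robustness": 1.0, "throughput": 1.0, "memory": 1.0}

for label, weights in [("accuracy-focused", accuracy_focused), ("balanced", balanced)]:
    ranking = sorted(models, key=lambda m: aggregate_score(models[m], weights), reverse=True)
    print(label, ranking)
```

Under the accuracy-focused weighting the hypothetical model_a ranks first, while the balanced weighting favors model_b, illustrating how re-weighting the evaluation axes can change the leaderboard ordering.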