Survey of machine-learning experimental methods at NeurIPS 2019 and ICLR 2020

Authors: Xavier Bouthillier, Gaël Varoquaux
Venue: HAL
Year: 2020

Paper: https://hal.archives-ouvertes.fr/hal-02447823/document

Abstract

How do machine-learning researchers run their empirical validation? In the context of a push for improved reproducibility and benchmarking, this question is important for developing new tools for model comparison. This document summarizes a simple survey about experimental procedures, sent to authors of papers published at two leading conferences, NeurIPS 2019 and ICLR 2020. It gives a simple picture of how hyperparameters are set, how many baselines and datasets are included, and how seeds are used.

Additional information

Github: https://github.com/bouthilx/ml-survey-2020

Blog: http://gael-varoquaux.info/science/survey-of-machine-learning-experimental-methods-at-neurips2019-and-iclr2020.html