To achieve top performance, applying deep learning to new tasks often requires new neural network architectures. However, finding such architectures is a tedious and error-prone task for human developers. The field of neural architecture search (NAS) addresses this challenge by studying ways of automatically finding optimal architectures. So far, NAS has mostly been applied to achieve good performance in terms of accuracy (or similar generalization metrics). A current research direction is to achieve not only high accuracy, but to achieve it fast: in some applications, one cannot afford to wait hours or days for a neural network to train. To this end, a recent competition, called DAWNBench, measured time-to-accuracy, i.e., how fast a neural network can reach a pre-defined accuracy (e.g., 94% on CIFAR-10). In this project, we will study how to build a NAS system that performs well with respect to time-to-accuracy by applying techniques from algorithm configuration.
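As a concrete illustration of the time-to-accuracy metric, here is a minimal Python sketch of how it could be measured for a single candidate architecture. The names `train_one_epoch`, `evaluate`, and `TARGET_ACCURACY` are hypothetical placeholders chosen for this sketch; they are not part of DAWNBench or of any specific NAS library.

```python
import time

TARGET_ACCURACY = 0.94  # e.g., 94% validation accuracy on CIFAR-10, as in DAWNBench


def time_to_accuracy(train_one_epoch, evaluate, max_epochs=100):
    """Return the wall-clock time (in seconds) needed to reach
    TARGET_ACCURACY, or None if the budget runs out first.

    `train_one_epoch` and `evaluate` are hypothetical callables
    supplied by the caller: the first runs one training epoch of the
    candidate architecture, the second returns its current validation
    accuracy as a float in [0, 1].
    """
    start = time.perf_counter()
    for _ in range(max_epochs):
        train_one_epoch()
        if evaluate() >= TARGET_ACCURACY:
            return time.perf_counter() - start
    return None  # target accuracy not reached within the epoch budget
```

A NAS system aimed at time-to-accuracy would treat this measurement, rather than final accuracy alone, as the objective that the algorithm-configuration procedure optimizes.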
We strongly recommend that students have hands-on experience with the following topics:
Contact: Marius Lindauer