Automated Machine Learning


To use machine learning (ML), users have to choose between many design options: (i) ML algorithms, (ii) pre-processing techniques, (iii) post-processing techniques, (iv) hyperparameter settings, (v) neural network architectures, and so on. These design decisions often determine whether an ML system returns random predictions or achieves state-of-the-art performance. Unfortunately, making them is a tedious and error-prone task even for ML experts, and it is therefore hard to make these decisions efficiently.

Automated machine learning (AutoML) addresses this challenge by automating the design process, so that AutoML tools help users develop new ML applications efficiently.

Hyperparameter Optimization and Bayesian Optimization

To achieve peak performance with an algorithm, choosing an appropriate hyperparameter configuration is crucial. Since hyperparameters are often not intuitive for human developers, choosing these settings is a tedious and error-prone task. Bayesian optimization is a sample-efficient approach that finds such hyperparameter configurations automatically, saving human developers tremendous amounts of development time.
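As a toy illustration (not one of our tools), the Bayesian optimization loop can be sketched with a Gaussian-process surrogate and the expected-improvement acquisition function; the one-dimensional objective below merely stands in for a real validation loss over a hyperparameter:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy stand-in for a validation loss as a function of one hyperparameter.
    return np.sin(3 * x) + 0.1 * (x - 1.5) ** 2

rng = np.random.default_rng(0)
candidates = np.linspace(0, 5, 500).reshape(-1, 1)

# Start from a few random evaluations.
X = rng.uniform(0, 5, size=(3, 1))
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    # Expected improvement over the current best (minimization).
    best = y.min()
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Evaluate the candidate with the highest expected improvement.
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

print(f"best x = {X[np.argmin(y)][0]:.3f}, best value = {y.min():.3f}")
```

The sample efficiency comes from the surrogate: instead of evaluating the expensive objective blindly, each new configuration is chosen where the model predicts the best trade-off between low predicted loss and high uncertainty.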

Neural Architecture Search

Applying deep learning to new datasets also requires finding a well-performing architecture for a deep neural network. The architecture influences not only predictive performance, but also other metrics such as inference time and memory consumption. Unfortunately, it is again not obvious to human developers how to design such deep neural networks, making the process fairly inefficient. Neural architecture search (NAS) is a paradigm for automatically determining the best architecture for a new dataset, making new applications of deep learning feasible at larger scale.
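The simplest baseline for such a search is random sampling from a hand-defined architecture space. The sketch below is purely illustrative (the search space, toy dataset, and proxy evaluation are our own assumptions, not one of our systems): it samples depth, width, and activation for a small multi-layer perceptron and scores each candidate by validation accuracy:

```python
import random
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative search space: depth, layer width and activation
# are the architectural choices.
SPACE = {
    "n_layers": [1, 2, 3],
    "width": [8, 16, 32],
    "activation": ["relu", "tanh"],
}

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def evaluate(arch):
    # Train a small MLP with the sampled architecture and return
    # validation accuracy as a cheap performance proxy.
    model = MLPClassifier(
        hidden_layer_sizes=(arch["width"],) * arch["n_layers"],
        activation=arch["activation"],
        max_iter=500,
        random_state=0,
    )
    model.fit(X_tr, y_tr)
    return model.score(X_va, y_va)

random.seed(0)
samples = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(8)]
scores = [(evaluate(a), a) for a in samples]
best_score, best_arch = max(scores, key=lambda t: t[0])
print(best_score, best_arch)
```

Real NAS methods replace this blind sampling with learned search strategies and cheaper performance estimates, since training every candidate from scratch quickly becomes infeasible at scale.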

Dynamic Algorithm Configuration

Instead of choosing the hyperparameters of an ML algorithm once, many hyperparameters have to be adapted over time. A well-known example is the learning rate of a deep neural network, which is typically decreased, and sometimes also increased, over time. So far, these dynamic hyperparameters have been controlled by human-designed heuristics, which are often not optimal for a new dataset. Therefore, we develop new approaches for dynamic algorithm configuration, which learn from data how to adjust these hyperparameters on the fly.
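Two such hand-designed heuristics for the learning rate, step decay and cosine annealing, can be written in a few lines (the constants below are illustrative defaults, not recommendations). Dynamic algorithm configuration aims to replace exactly this kind of fixed, human-designed schedule with a policy learned from data:

```python
import math

def step_decay(lr0, epoch, drop=0.5, every=30):
    # Classic hand-designed heuristic: halve the learning rate
    # every fixed number of epochs.
    return lr0 * drop ** (epoch // every)

def cosine_schedule(lr0, epoch, total=100):
    # Another common heuristic: anneal from lr0 down to 0 along a cosine.
    return 0.5 * lr0 * (1 + math.cos(math.pi * epoch / total))

lr0 = 0.1
for epoch in (0, 30, 60, 90):
    print(epoch, step_decay(lr0, epoch), round(cosine_schedule(lr0, epoch), 4))
```

Both schedules depend only on the epoch counter, not on the training state; a learned dynamic configuration policy can instead condition on observed quantities such as the current loss.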

Interpretability of AutoML 

A major risk of AutoML tools is that ML becomes an even more mysterious black box than it already was. Therefore, we also develop analysis tools that provide AutoML users with important insights, such as (i) how to use AutoML tools more efficiently or (ii) which hyperparameter decisions were important for the final performance. This helps ML developers gain a better understanding of why and how ML and AutoML work.
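As a rough sketch of the second kind of insight (a simplified stand-in, not our actual analysis tools), one can fit a surrogate model to observed configuration-performance pairs and read its feature importances as a proxy for hyperparameter importance, similar in spirit to fANOVA. In the synthetic study below, only the first of two hypothetical hyperparameters actually affects performance:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic study: 200 random "configurations" of two hyperparameters,
# with a performance score that depends only on the first one.
configs = rng.uniform(0, 1, size=(200, 2))
performance = (configs[:, 0] - 0.3) ** 2 + 0.01 * rng.normal(size=200)

# Fit a surrogate model on the observed configuration-performance pairs.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(configs, performance)

# Feature importances of the surrogate serve as a rough proxy for
# hyperparameter importance.
for name, imp in zip(["hp_1", "hp_2"], forest.feature_importances_):
    print(name, round(imp, 3))
```

A result attributing nearly all importance to `hp_1` tells the user which hyperparameter decision drove the final performance, which is exactly the kind of feedback an AutoML analysis tool should surface.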

  • Conference Contributions
    • Carl Hvarfner, Danny Stoll, Artur Souza, Marius Lindauer, Frank Hutter, Luigi Nardi
      piBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization
      10th International Conference on Learning Representations, ICLR'22, OpenReview, pp. 1-30, April 2022
    • André Biedenkapp, David Speck, Silvan Sievers, Frank Hutter, Marius Lindauer, Jendrik Seipp
      Learning Domain-Independent Policies for Open List Selection
      Proceedings of the 3rd ICAPS workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL), pp. 1-9, 2022
    • Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, René Sass, Aaron Klein, Noor Awad, Marius Lindauer, Frank Hutter
      HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO
      Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track), December 2021
    • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
      Explaining Hyperparameter Optimization via Partial Dependence Plots
      Proceedings of the international conference on Neural Information Processing Systems (NeurIPS), December 2021
    • Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka
      Regularization is all you Need: Simple Neural Nets can Excel on Tabular Data
      Proceedings of the international conference on Neural Information Processing Systems (NeurIPS), December 2021
    • Theresa Eimer, Carolin Benjamins, Marius Lindauer
      Hyperparameters in Contextual RL are Highly Situational
      NeurIPS 2021 Workshop on Ecological Theory of Reinforcement Learning, December 2021
    • Carolin Benjamins, Theresa Eimer, Frederik Schubert, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer
      CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning
      NeurIPS 2021 Workshop on Ecological Theory of Reinforcement Learning, December 2021
    • Artur Souza, Luigi Nardi, Leonardo Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter
      Bayesian Optimization with a Prior for the Optimum
      Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), September 2021
    • David Speck, André Biedenkapp, Frank Hutter, Robert Mattmüller, Marius Lindauer
      Learning Heuristic Selection with Dynamic Algorithm Configuration
Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS'21), August 2021
    • Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven Adriaensen, Frank Hutter, Marius Lindauer
      DACBench: A Benchmark Library for Dynamic Algorithm Configuration
      Proceedings of the international joint conference on artificial intelligence (IJCAI), August 2021
    • Julia Guerrero-Viu, Sven Hauns, Sergio Izquierdo, Guilherme Miotto, Simon Schrodi, Andre Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, Frank Hutter
      Bag of Baselines for Multi-objective Joint Neural Architecture Search and Hyperparameter Optimization
      Proceedings of the international workshop on Automated Machine Learning (AutoML) at ICML'21, July 2021
    • Andre Biedenkapp, Raghu Rajan, Frank Hutter, Marius Lindauer
      TempoRL: Learning When to Act
      Proceedings of the international conference on machine learning (ICML), July 2021
    • Theresa Eimer, Andre Biedenkapp, Frank Hutter, Marius Lindauer
      Self-Paced Context Evaluation for Contextual Reinforcement Learning
      Proceedings of the international conference on machine learning (ICML), July 2021
    • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
      Towards Explaining Hyperparameter Optimization via Partial Dependence Plots
      Proceedings of the international workshop on Automated Machine Learning (AutoML) at ICML'21, July 2021
    • Artur Souza, Luigi Nardi, Leonardo Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter
      Prior-guided Bayesian Optimization
      Proceedings of the Workshop on Meta-Learning (NeurIPS), pp. 1-19, December 2020
    • Berend Denkena, Marc Dittrich, Marius Lindauer, Julia Mainka, Lukas Stürenburg
      Using AutoML to Optimize Shape Error Prediction in Milling Processes
      Proceedings of 20th Machining Innovations Conference for Aerospace Industry (MIC), December 2020
    • Gresa Shala, Andre Biedenkapp, Noor Awad, Steven Adriaensen, Marius Lindauer, Frank Hutter
      Learning Step-Size Adaptation in CMA-ES
      Proceedings of the Sixteenth International Conference on Parallel Problem Solving from Nature ({PPSN}'20), September 2020
    • Theresa Eimer, Andre Biedenkapp, Frank Hutter, Marius Lindauer
      Towards Self-Paced Context Evaluations for Contextual Reinforcement Learning
      Workshop on Inductive Biases, Invariances and Generalization in Reinforcement Learning (BIG@ICML'20), July 2020
    • David Speck, André Biedenkapp, Frank Hutter, Robert Mattmüller, Marius Lindauer
      Learning Heuristic Selection with Dynamic Algorithm Configuration
      Proceedings of international workshop on Bridging the Gap Between AI Planning and Reinforcement Learning at ICAPS, June 2020
    • Andre Biedenkapp, H. Furkan Bozkurt, Theresa Eimer, Frank Hutter, Marius Lindauer
      Algorithm Control: Foundation of a New Meta-Algorithmic Framework
      Proceedings of the European Conference on Artificial Intelligence (ECAI), 2020
M. Lindauer, M. Feurer, K. Eggensperger, A. Biedenkapp, F. Hutter
      Towards Assessing the Impact of Bayesian Optimization's Own Hyperparameters
IJCAI 2019 DSO Workshop, August 2019
M. Feurer, K. Eggensperger, S. Falkner, M. Lindauer, F. Hutter
      Practical Automated Machine Learning for the AutoML Challenge 2018
      ICML 2018 AutoML Workshop, July 2018
K. Eggensperger, M. Lindauer, F. Hutter
      Neural Networks for Predicting Algorithm Runtime Distributions
      Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’18), pp. 1442-1448, July 2018
A. Biedenkapp, J. Marben, M. Lindauer, F. Hutter
      CAVE: Configuration Assessment, Visualization and Evaluation
      Proceedings of the International Conference on Learning and Intelligent Optimization (LION'18), June 2018
  • Journals
    • Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter
      Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning
Journal of Machine Learning Research (JMLR), Vol. 23, No. 261, pp. 1-61, October 2022
    • Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, Frank Hutter
      SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization
      Journal of Machine Learning Research (JMLR) -- MLOSS, Vol. 23, No. 54, pp. 1-9, January 2022
    • Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer
      Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
      Journal of Artificial Intelligence Research (JAIR), 2022
    • Lucas Zimmer, Marius Lindauer, Frank Hutter
      Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL
      IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE, Vol. 43, No. 9, pp. 3079 - 3090, August 2021
Zhengying Liu, Adrien Pavao, Zhen Xu, Sergio Escalera, Fabio Ferreira, Isabelle Guyon, Sirui Hong, Frank Hutter, Rongrong Ji, Julio Jacques Junior, Ge Li, Marius Lindauer, Zhipeng Luo, Meysam Madadi, Thomas Nierhoff, Kangning Niu, Chunguang Pan, Danny Stoll, Sebastien Treguer, Wang Jin, Peng Wang, Chenglin Wu, Xiong Youcheng, Arber Zela, Yang Zhang
      Winning solutions and post-challenge analyses of the ChaLearn AutoDL challenge 2019
      IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE, Vol. 43, No. 9, pp. 3108 - 3125, August 2021
    • Marius Lindauer and Frank Hutter
      Best Practices for Scientific Research on Neural Architecture Search
      Journal of Machine Learning Research, Vol. 21, pp. 1-18, December 2020
  • Book Chapters
Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, Matthias Urban, Michael Burkart, Max Dippel, Marius Lindauer, Frank Hutter
      Towards Automatically-Tuned Deep Neural Networks
AutoML: Methods, Systems, Challenges, Springer, pp. 141-156, December 2018, edited by Frank Hutter, Lars Kotthoff, Joaquin Vanschoren
  • Technical Reports
    • René Sass, Eddie Bergman, André Biedenkapp, Frank Hutter, Marius Lindauer
      DeepCAVE: An Interactive Analysis Tool for Automated Machine Learning
      Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML@ICML’22), p. 6, June 2022
    • Steven Adriaensen, André Biedenkapp, Gresa Shala, Noor Awad, Theresa Eimer, Marius Lindauer, Frank Hutter
      Automated Dynamic Algorithm Configuration
      ArXiv, May 2022
    • Katharina Eggensperger, Kai Haase, Philipp Müller, Marius Lindauer, Frank Hutter
      Neural Model-based Optimization with Right-Censored Observations
      CoRR, ArXiv, September 2020
M. Lindauer, K. Eggensperger, M. Feurer, A. Biedenkapp, J. Marben, P. Müller, F. Hutter
      BOAH: A Tool Suite for Multi-Fidelity Bayesian Optimization & Analysis of Hyperparameters
      arXiv:1908.06756 [cs.LG], August 2019