General Game Playing

TNT members involved in this project:
Jun.-Prof. Dr.-Ing. Alexander Dockhorn

Many methods for autonomous game playing are tailored to a single game. In the context of general game playing, agents are developed that not only play a single game successfully, but perform well across multiple games. Research in this area allows us to test the capabilities of the developed algorithms under a variety of conditions. Due to their diversity and accessibility, games offer an ideal benchmark environment for machine learning algorithms and present them with numerous interesting challenges, helping us to better understand their inner workings and further improve their performance.

General game playing can be achieved using different approaches. In particular, heuristic search algorithms such as Monte Carlo Tree Search have been shown to perform well across a variety of tasks when provided with a model of their environment (e.g. the rules of a game). Alternatively, general game playing can be enabled by transfer learning, in which partial models of a developed solution are transferred between applications to build on previously learned knowledge and thus accelerate the re-learning process.
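
As a rough illustration of the search-based approach, the sketch below shows a generic Monte Carlo Tree Search loop written against a minimal forward-model interface. The names GameState, Node, and mcts are hypothetical and not part of any framework mentioned above; the point is that any game exposing legal_actions, next_state, is_terminal, and reward can be played by the same agent without game-specific code.

```python
import math
import random

# Hypothetical forward-model interface: any game implementing these calls
# can be searched by the generic agent below.
class GameState:
    def legal_actions(self):       # list of available moves
        raise NotImplementedError
    def next_state(self, action):  # forward model: simulate one move
        raise NotImplementedError
    def is_terminal(self):
        raise NotImplementedError
    def reward(self):              # payoff from the searching player's view
        raise NotImplementedError

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb1(self, c=1.4):
        # Upper confidence bound: balances exploitation and exploration
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend via UCB1 until reaching a leaf or terminal state
        node = root
        while node.children and not node.state.is_terminal():
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child per legal action the first time a leaf is reached
        if not node.state.is_terminal() and not node.children:
            node.children = [Node(node.state.next_state(a), node, a)
                             for a in node.state.legal_actions()]
            node = random.choice(node.children)
        # 3. Simulation: random rollout using only the forward model
        state = node.state
        while not state.is_terminal():
            state = state.next_state(random.choice(state.legal_actions()))
        reward = state.reward()
        # 4. Backpropagation: update statistics along the path to the root
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most visited action at the root
    return max(root.children, key=lambda n: n.visits).action
```

This is a minimal sketch under the stated interface assumptions; practical agents additionally handle limited time budgets, opponent modelling, and (as in the publications below) state abstraction to cope with large strategy-game state spaces.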

Publications
  • Alexander Dockhorn, Rudolf Kruse
    Balancing Exploration and Exploitation in Forward Model Learning
    Advances in Intelligent Systems Research and Innovation, Springer International Publishing, pp. 1-19, Cham, 2022, edited by Sgurev, Vassil; Jotsov, Vladimir; Kacprzyk, Janusz
  • Linjie Xu, Jorge Hurtado-Grueso, Dominic Jeurissen, Diego Perez-Liebana, Alexander Dockhorn
    Elastic Monte Carlo Tree Search State Abstraction for Strategy Game Playing
    2022 IEEE Conference on Games (CoG), IEEE, 2022
  • Linjie Xu, Diego Perez-Liebana, Alexander Dockhorn
    Towards Applicable State Abstractions: a Preview in Strategy Games
    The Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM) - RL as a Model of Agency, pp. 1-7, 2022