While it has been shown that dynamically controlling an algorithm's hyperparameters can significantly increase performance, most algorithms are not inherently compatible with DAC. As a result, DAC benchmarks are complicated to construct and differ considerably in their interfaces. To make it easier both to compare DAC methods and to construct new benchmarks, we designed DACBench as a library and a unified interface for DAC benchmarks.
DACBench can be installed from our GitHub repository.
There you'll also find examples and demonstrations of how to apply DAC methods to our benchmarks.
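To give a feel for what a unified benchmark interface looks like, here is a minimal sketch of a gym-style `reset`/`step` interaction loop under a random policy. The `MockDACEnv` class below is a hypothetical stand-in, not DACBench's actual API; see the examples in the repository for real usage.

```python
import random

class MockDACEnv:
    """Hypothetical stand-in for a DAC benchmark environment:
    a gym-style reset/step loop over a fixed-length episode
    with a discrete action (hyperparameter) space."""

    def __init__(self, n_steps=10, n_actions=3, seed=0):
        self.n_steps = n_steps
        self.n_actions = n_actions
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t  # observation: current step index

    def step(self, action):
        # Reward is higher the closer the chosen action is to a hidden target.
        target = self.rng.randrange(self.n_actions)
        reward = -abs(action - target)
        self.t += 1
        done = self.t >= self.n_steps
        return self.t, reward, done, {}

def run_episode(env, policy_rng):
    """Roll out one episode under a random policy, return total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        action = policy_rng.randrange(env.n_actions)
        obs, reward, done, info = env.step(action)
        total += reward
    return total

env = MockDACEnv(seed=42)
returns = [run_episode(env, random.Random(s)) for s in range(3)]
print(returns)
```

Because every benchmark exposes the same loop, any DAC method written against this interface can be evaluated on all benchmarks without per-benchmark glue code.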
If you're interested in contributing a benchmark yourself, please have a look at our contribution guide and don't hesitate to contact us if you have any questions.
For comparison's sake, we provide data from all static policies and a random baseline for each version of DACBench. Each baseline is run on 10 different seeds for 1000 episodes each.