by Lenart Treven, Jonas Hübotter, Bhavya Sukhija, Florian Dörfler, Andreas Krause
Abstract:
Reinforcement learning algorithms typically consider discrete-time dynamics, even though the underlying systems are often continuous in time. In this paper, we introduce a model-based reinforcement learning algorithm that represents continuous-time dynamics using nonlinear ordinary differential equations (ODEs). We capture epistemic uncertainty using well-calibrated probabilistic models, and use the optimistic principle for exploration. Our regret bounds surface the importance of the measurement selection strategy (MSS), since in continuous time we must decide not only how to explore, but also when to observe the underlying system. Our analysis demonstrates that the regret is sublinear when modeling ODEs with Gaussian processes (GPs) for common choices of MSS, such as equidistant sampling. Additionally, we propose an adaptive, data-dependent, practical MSS that, when combined with GP dynamics, also achieves sublinear regret with significantly fewer samples. We showcase the benefits of continuous-time modeling over its discrete-time counterpart, as well as our proposed adaptive MSS over standard baselines, on several applications.
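To make the role of the MSS concrete, here is a minimal, illustrative sketch (not the authors' implementation): a toy GP model of one-dimensional ODE dynamics x_dot = f(x), compared under an equidistant MSS and a hypothetical uncertainty-triggered adaptive MSS that only records an observation when the GP's predictive standard deviation exceeds a threshold. The names `GPDynamics`, `f_true`, `sigma_thresh`, and `spacing` are assumptions introduced for this example.

```python
import numpy as np


def rbf_kernel(a, b, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between 1-D arrays a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)


class GPDynamics:
    """GP posterior over the unknown scalar vector field f in x_dot = f(x)."""

    def __init__(self, noise=1e-3):
        self.X = np.empty(0)
        self.y = np.empty(0)
        self.noise = noise

    def add(self, x, xdot):
        self.X = np.append(self.X, x)
        self.y = np.append(self.y, xdot)

    def predict(self, xq):
        xq = np.atleast_1d(xq)
        if self.X.size == 0:
            return np.zeros_like(xq), np.ones_like(xq)
        K = rbf_kernel(self.X, self.X) + self.noise * np.eye(self.X.size)
        Ks = rbf_kernel(xq, self.X)
        mean = Ks @ np.linalg.solve(K, self.y)
        var = rbf_kernel(xq, xq).diagonal() - np.einsum(
            "ij,ji->i", Ks, np.linalg.solve(K, Ks.T)
        )
        return mean, np.sqrt(np.maximum(var, 1e-12))


def f_true(x):
    # Hypothetical ground-truth dynamics, unknown to the learner.
    return -np.sin(x)


def rollout(mss, horizon=10.0, dt=0.01, sigma_thresh=0.3, spacing=1.0):
    """Simulate the true ODE and record measurements according to the chosen MSS."""
    gp, x, t, next_eq, n_obs = GPDynamics(), 1.0, 0.0, 0.0, 0
    while t < horizon:
        if mss == "equidistant" and t >= next_eq:
            gp.add(x, f_true(x))
            n_obs += 1
            next_eq += spacing
        elif mss == "adaptive":
            _, std = gp.predict(x)
            if std[0] > sigma_thresh:  # measure only when the model is uncertain
                gp.add(x, f_true(x))
                n_obs += 1
        x += dt * f_true(x)  # forward-Euler integration of the true system
        t += dt
    return n_obs


print("equidistant observations:", rollout("equidistant"))
print("adaptive observations:   ", rollout("adaptive"))
```

In this toy rollout the adaptive rule stops sampling once the GP is confident along the visited trajectory, whereas the equidistant MSS keeps measuring at a fixed rate, which is the qualitative trade-off the abstract refers to; the paper's actual algorithm, guarantees, and MSS definitions are in the reference below.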
Reference:
Efficient Exploration in Continuous-time Model-based Reinforcement Learning. L. Treven, J. Hübotter, B. Sukhija, F. Dörfler, A. Krause. In Proc. Neural Information Processing Systems (NeurIPS), 2023.
Bibtex Entry:
@inproceedings{treven2023efficient,
  title={Efficient Exploration in Continuous-time Model-based Reinforcement Learning},
  author={Treven, Lenart and H{\"u}botter, Jonas and Sukhija, Bhavya and D{\"o}rfler, Florian and Krause, Andreas},
  booktitle = {Proc. Neural Information Processing Systems (NeurIPS)},
  year={2023},
  month={December},
  pdf = {https://arxiv.org/abs/2310.19848}
}