Abstract:
Training Reinforcement Learning (RL) agents online in high-stakes applications is often prohibitive due to the risk associated with exploration. Thus, the agent can only use data previously collected by safe policies. While previous work considers optimizing the average performance using offline data, we focus on optimizing a risk-averse criterion. In particular, we present the Offline Risk-Averse Actor-Critic (O-RAAC), a model-free RL algorithm that is able to learn risk-averse policies in a fully offline setting. We show that O-RAAC learns policies with higher risk-averse performance than risk-neutral approaches in different robot control tasks. Furthermore, considering risk-averse criteria guarantees distributional robustness of the average performance with respect to particular distribution shifts. We demonstrate empirically that in the presence of natural distribution shifts, O-RAAC learns policies with good average performance.
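A standard risk-averse criterion in this setting is the Conditional Value-at-Risk (CVaR): instead of the mean return, one optimizes the expected return over the worst α-fraction of outcomes. The following is a minimal illustrative sketch (the function name and sample data are hypothetical, not from the paper) showing why a CVaR objective can prefer a lower-mean but safer policy:

```python
import numpy as np

def empirical_cvar(returns, alpha):
    """Empirical CVaR at level alpha: the mean of the worst
    alpha-fraction of sampled returns (lower tail)."""
    sorted_returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(sorted_returns))))
    return sorted_returns[:k].mean()

# Illustrative episode returns for two policies:
safe = [9.0, 10.0, 11.0, 10.0]    # modest mean, low variance
risky = [0.0, 19.0, 20.0, 21.0]   # higher mean, rare catastrophic outcome

# A risk-neutral agent (mean) prefers the risky policy;
# a CVaR-optimizing agent prefers the safe one.
print(np.mean(safe), np.mean(risky))                        # 10.0 15.0
print(empirical_cvar(safe, 0.25), empirical_cvar(risky, 0.25))  # 9.0 0.0
```

Here CVaR at α = 0.25 scores the safe policy at 9.0 and the risky one at 0.0, even though the risky policy has the higher mean return.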
Reference:
Risk-Averse Offline Reinforcement Learning. N. A. Urpí, S. Curi, A. Krause. In Proc. International Conference on Learning Representations (ICLR), 2021.
Bibtex Entry:
@inproceedings{urpi2021riskaverse,
	abstract = {Training Reinforcement Learning (RL) agents online in high-stakes applications is often prohibitive due to the risk associated with exploration. Thus, the agent can only use data previously collected by safe policies. While previous work considers optimizing the average performance using offline data, we focus on optimizing a risk-averse criterion. In particular, we present the Offline Risk-Averse Actor-Critic (O-RAAC), a model-free RL algorithm that is able to learn risk-averse policies in a fully offline setting. We show that O-RAAC learns policies with higher risk-averse performance than risk-neutral approaches in different robot control tasks. Furthermore, considering risk-averse criteria guarantees distributional robustness of the average performance with respect to particular distribution shifts. We demonstrate empirically that in the presence of natural distribution shifts, O-RAAC learns policies with good average performance.},
	author = {N\'uria Armengol Urp\'i and Sebastian Curi and Andreas Krause},
	booktitle = {Proc. International Conference on Learning Representations (ICLR)},
	month = {May},
	title = {Risk-Averse Offline Reinforcement Learning},
	year = {2021}}