A unified view of entropy-regularized Markov decision processes

Date:

21 May 2020

Author:

Hrvoje Stojic

Abstract

We propose a general framework for entropy-regularized average-reward reinforcement learning in Markov decision processes (MDPs). Our approach is based on extending the linear-programming formulation of policy optimization in MDPs to accommodate convex regularization functions. Our key result is showing that using the conditional entropy of the joint state-action distributions as regularization yields a dual optimization problem closely resembling the Bellman optimality equations. This result enables us to formalize a number of state-of-the-art entropy-regularized reinforcement learning algorithms as approximate variants of Mirror Descent or Dual Averaging, and thus to argue about the convergence properties of these methods. In particular, we show that the exact version of the TRPO algorithm of Schulman et al. (2015) actually converges to the optimal policy, while the entropy-regularized policy gradient methods of Mnih et al. (2016) may fail to converge to a fixed point. Finally, we illustrate empirically the effects of using various regularization techniques on learning performance in a simple reinforcement learning setup.
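The abstract's key point is that conditional-entropy regularization turns the Bellman optimality equations into a "soft" variant, where the hard max over actions becomes a log-sum-exp and the optimal policy a softmax over action values. The following is a minimal illustrative sketch of that idea on a made-up toy MDP; the transition kernel, rewards, temperature `tau`, and the use of discounting (the paper works in the average-reward setting) are all simplifying assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Sketch of "soft" value iteration: entropy regularization replaces the
# Bellman max with a log-sum-exp. Toy 2-state, 2-action MDP; all numbers
# here are illustrative choices, not from the paper.

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
tau = 0.1      # entropy-regularization temperature (assumed value)
gamma = 0.9    # discount factor; the paper uses average reward, but
               # discounting keeps this sketch short

# Random transition kernel P[s, a, s'] and rewards r[s, a]
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
r = rng.uniform(size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    Q = r + gamma * (P @ V)                            # Q[s, a]
    V_new = tau * np.log(np.exp(Q / tau).sum(axis=1))  # soft Bellman backup
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# The entropy-regularized optimal policy is a softmax of the Q-values
pi = np.exp((Q - V[:, None]) / tau)
pi /= pi.sum(axis=1, keepdims=True)
print(pi)
```

As `tau` shrinks toward zero, the log-sum-exp approaches the hard max and the softmax policy approaches the greedy one, recovering unregularized value iteration.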


Notes


  • The arXiv preprint can be found here

  • Gergely Neu is a Research Assistant Professor, AI group, DTIC, Universitat Pompeu Fabra. His personal website can be found here.

Related seminars

Leveraging replication in active learning

Mickael Binois - INRIA Sophia Antipolis - Méditerranée

2024/06/24

From data to confident decisions

Ilija Bogunovic - University College London

2024/06/13

Preference learning with Gaussian processes

Dario Azzimonti - IDSIA

2024/05/23

Optimal experiment design in Markov chains

Mojmír Mutný - ETH Zurich

2024/03/28
