VIREL: A variational inference framework for reinforcement learning

M Fellows, A Mahajan, TGJ Rudner… - Advances in neural …, 2019 - proceedings.neurips.cc
Abstract
Applying probabilistic models to reinforcement learning (RL) enables the use of powerful optimisation tools, such as variational inference, in RL. However, existing inference frameworks and their algorithms pose significant challenges for learning optimal policies, e.g., the lack of mode-capturing behaviour in pseudo-likelihood methods, difficulties learning deterministic policies in approaches based on maximum entropy RL, and a lack of analysis when function approximators are used. We propose VIREL, a theoretically grounded probabilistic inference framework for RL that utilises a parametrised action-value function to summarise the future dynamics of the underlying MDP, generalising existing approaches. VIREL also benefits from a mode-seeking form of KL divergence, the ability to learn deterministic optimal policies naturally from inference, and the ability to optimise value functions and policies in separate, iterative steps. By applying variational expectation-maximisation to VIREL, we show that the actor-critic algorithm can be reduced to expectation-maximisation, with policy improvement equivalent to an E-step and policy evaluation to an M-step. We then derive a family of actor-critic methods from VIREL, including a scheme for adaptive exploration. Finally, we demonstrate that actor-critic algorithms from this family outperform state-of-the-art methods based on soft value functions in several domains.
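
As a rough illustration of the E-step/M-step correspondence described in the abstract, the toy Python sketch below alternates policy improvement toward a Boltzmann distribution over a tabular critic (an E-step-like update) with policy evaluation of that critic under the current policy (an M-step-like update). The two-state MDP, temperature schedule, and learning rate are hypothetical and chosen only for illustration; this is a generic EM-style actor-critic loop, not the paper's exact VIREL objective, and it does not reproduce VIREL's adaptive-exploration scheme.

import numpy as np

# Toy dynamics and rewards (hypothetical): P[s, a] -> next state, R[s, a] -> reward.
n_states, n_actions, gamma = 2, 2, 0.9
P = np.array([[0, 1], [1, 0]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])

Q = np.zeros((n_states, n_actions))        # critic (action-value function)
pi = np.full((n_states, n_actions), 0.5)   # variational policy
beta, lr = 1.0, 0.1                        # temperature and critic step size

for it in range(200):
    # E-step (policy improvement): move the variational policy toward a
    # Boltzmann distribution induced by the current critic.
    logits = Q / max(beta, 1e-8)
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)

    # M-step (policy evaluation): fit the critic to the current policy
    # with an expected Bellman update.
    for s in range(n_states):
        for a in range(n_actions):
            s_next = P[s, a]
            v_next = np.dot(pi[s_next], Q[s_next])
            Q[s, a] += lr * (R[s, a] + gamma * v_next - Q[s, a])

    # Annealing the temperature pushes the policy toward a deterministic
    # (greedy) optimum, echoing the deterministic-policy property noted above.
    beta *= 0.99

print("Greedy policy per state:", Q.argmax(axis=1))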