Preprint, Working Paper. Year: 2025

An analysis of distributional reinforcement learning with Gaussian mixtures

Abstract

Distributional Reinforcement Learning (DRL) aims to optimize a risk measure of the return by representing its full distribution. However, finding such a representation is challenging, as it requires a tractable estimation of the risk measure, a tractable loss, and a representation with sufficient approximation power. Although Gaussian mixtures (GMs) are powerful statistical models that can address these challenges, only a few papers have investigated this approach, and most of them use the L2 norm as a tractable metric between GMs. In this paper, we provide new theoretical results on previously unstudied metrics. We show that the L2 metric is not suitable and propose alternative metrics: a mixture-specific optimal transport (MW) distance and a maximum mean discrepancy (MMD) distance. Focusing on temporal-difference (TD) learning, we prove a convergence result for a related dynamic programming algorithm under the MW metric. Leveraging natural multivariate GM representations, we also highlight the potential of MW in multi-objective RL. Our approach is illustrated on environments from the Arcade Learning Environment benchmark and shows promising empirical results.
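To make the MW distance concrete, here is a minimal sketch (not the authors' code) under the assumption that MW follows the standard mixture-Wasserstein construction: a discrete optimal transport problem between mixture components, with the squared Wasserstein-2 distance between the component Gaussians as ground cost. The function names `mw2` and `w2_gaussian_1d`, and the example mixtures, are illustrative choices, not taken from the paper.

```python
# Illustrative sketch: a mixture-Wasserstein-type distance between two
# one-dimensional Gaussian mixtures, assuming the standard construction
# (discrete OT over mixture weights, Gaussian W2^2 as ground cost).
import numpy as np
from scipy.optimize import linprog


def w2_gaussian_1d(m1, s1, m2, s2):
    """Squared Wasserstein-2 distance between two univariate Gaussians."""
    return (m1 - m2) ** 2 + (s1 - s2) ** 2


def mw2(w1, mu1, sig1, w2, mu2, sig2):
    """Squared MW distance between two 1-D GMs via a small linear program."""
    k1, k2 = len(w1), len(w2)
    # Ground cost: pairwise squared W2 between mixture components.
    cost = np.array([[w2_gaussian_1d(mu1[i], sig1[i], mu2[j], sig2[j])
                      for j in range(k2)] for i in range(k1)])
    # Marginal constraints: the transport plan's rows sum to the weights
    # of the first mixture, its columns to the weights of the second.
    A_eq, b_eq = [], []
    for i in range(k1):
        row = np.zeros(k1 * k2)
        row[i * k2:(i + 1) * k2] = 1.0
        A_eq.append(row)
        b_eq.append(w1[i])
    for j in range(k2):
        col = np.zeros(k1 * k2)
        col[j::k2] = 1.0
        A_eq.append(col)
        b_eq.append(w2[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun


# Example: two mixtures of two Gaussians each (weights, means, std devs).
d2 = mw2([0.5, 0.5], [0.0, 3.0], [1.0, 0.5],
         [0.3, 0.7], [0.2, 2.5], [0.8, 0.6])
print(f"squared MW distance: {d2:.4f}")
```

For univariate Gaussians the ground cost has the closed form (m1 - m2)^2 + (s1 - s2)^2, which keeps the linear program cheap; in the multivariate case the analogous closed form involves the component covariance matrices through the Bures metric.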
Main file
4Hal-coltDRL.pdf
Origin: Files produced by the author(s)

Dates and versions

hal-04941480, version 1 (11-02-2025)

Identifiers

  • HAL Id: hal-04941480, version 1

Cite

Mathis Antonetti, Henrique Donâncio, Florence Forbes. An analysis of distributional reinforcement learning with Gaussian mixtures. 2025. ⟨hal-04941480⟩