
Operator-Theoretic Foundations and Policy Gradient Methods for General MDPs with Unbounded Costs

Mar 18, 2026 · arXiv
Abhishek Gupta, Aditya Mahajan

Abstract

Markov decision processes (MDPs) are viewed as the optimization of an objective function defined over certain linear operators on general function spaces. Using the well-established perturbation theory of linear operators, this viewpoint allows one to identify derivatives of the objective function, viewed as a function of these linear operators. This leads to generalizations of many well-known results in reinforcement learning to MDPs with general state and action spaces. Prior results of this type were established only in the finite-state, finite-action setting and in settings with certain linear function approximations. The framework also leads to new low-complexity PPO-type reinforcement learning algorithms for general state and action space MDPs.
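A minimal sketch of the kind of operator identity the abstract alludes to, in standard discounted-MDP notation (the symbols \mu, P_\pi, c_\pi and the discount \gamma are our assumptions, not taken from the paper): the objective can be written as

    J(\pi) \;=\; \bigl\langle \mu,\; (I - \gamma P_\pi)^{-1} c_\pi \bigr\rangle ,

and perturbing the transition operator P_\pi in a direction \Delta, then expanding the resolvent via a Neumann series, gives

    \bigl(I - \gamma (P_\pi + \varepsilon \Delta)\bigr)^{-1}
      \;=\; (I - \gamma P_\pi)^{-1}
      \;+\; \varepsilon\, \gamma\, (I - \gamma P_\pi)^{-1} \Delta\, (I - \gamma P_\pi)^{-1}
      \;+\; O(\varepsilon^2) ,

so the directional derivative of J recovers a policy-gradient-type expression, \gamma \langle \mu, (I - \gamma P_\pi)^{-1} \Delta (I - \gamma P_\pi)^{-1} c_\pi \rangle, plus an analogous term from the dependence of c_\pi on the policy. This is the classical perturbation argument in the finite case; the paper's contribution concerns making such derivatives rigorous on general function spaces with unbounded costs.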
