Partial Policy Gradients for RL in LLMs

β˜†β˜†β˜†β˜†β˜†Mar 6, 2026arxiv β†’

Puneet Mathur, Branislav Kveton, Subhojyoti Mukherjee, Viet Dac Lai

Abstract

Reinforcement learning is a framework for learning to act sequentially in an unknown environment. We propose a natural approach to modeling policy structure in policy gradients. The key idea is to optimize for a subset of future rewards: smaller subsets represent simpler policies, which can be learned more reliably because their empirical gradient estimates are more accurate. Our approach allows for modeling and comparing different policy classes, including full planning, greedy, K-step lookahead, and segment policies. We evaluate these policies empirically on multiple persona-alignment conversational problems. Different policies excel in different problems, reflecting their distinct characteristics and highlighting the importance of the policy classes we study.
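The core idea of the abstract — estimating the policy gradient from only the next K future rewards instead of the full return — can be illustrated with a minimal REINFORCE-style sketch. This is an assumption-laden toy based only on the abstract, not the paper's actual algorithm; the function names and the use of undiscounted scalar rewards are illustrative choices.

```python
import numpy as np

def k_step_returns(rewards, K, gamma=1.0):
    """Return-to-go at each step, truncated to the next K rewards.

    Smaller K gives a lower-variance but more myopic gradient target;
    K = len(rewards) recovers the full-planning return. (Illustrative
    sketch, not the paper's implementation.)
    """
    T = len(rewards)
    G = np.zeros(T)
    for t in range(T):
        window = rewards[t:t + K]  # only the next K rewards count
        G[t] = sum(gamma**i * r for i, r in enumerate(window))
    return G

def reinforce_grad(logp_grads, rewards, K):
    """Gradient estimate: sum_t (grad log pi(a_t|s_t)) * G_t^(K)."""
    G = k_step_returns(rewards, K)
    return sum(g * Gt for g, Gt in zip(logp_grads, G))
```

With K = 1 this reduces to a greedy (immediate-reward) policy gradient, while larger K interpolates toward full planning, matching the policy classes named in the abstract.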
