On the Direction of RLVR Updates for LLM Reasoning: Identification and Exploitation

Mar 23, 2026 · arXiv
Kexin Huang, Haoming Meng, Junkang Wu, Jinda Lu, Chiyu Ma, Ziqian Chen, +7 more

Abstract

Reinforcement learning with verifiable rewards (RLVR) has substantially improved the reasoning capabilities of large language models. While existing analyses identify that RLVR-induced changes are sparse, they focus primarily on the magnitude of these updates, largely overlooking their direction. In this work, we argue that the direction of updates is a more critical lens for understanding RLVR's effects, and that it can be captured by the signed, token-level log-probability difference $\Delta \log p$ between the base and final RLVR models. Through statistical analysis and token-replacement interventions, we demonstrate that $\Delta \log p$ identifies sparse yet reasoning-critical updates more effectively than magnitude-based metrics (e.g., divergence or entropy). Building on this insight, we propose two practical applications: (1) a test-time extrapolation method that amplifies the policy along the learned $\Delta \log p$ direction to improve reasoning accuracy without further training; (2) a training-time reweighting method that focuses learning on low-probability (corresponding to higher $\Delta \log p$) tokens, which improves reasoning performance across models and benchmarks. Our work establishes the direction of change as a key principle for analyzing and improving RLVR.
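
The abstract names three computable quantities: the signed token-level $\Delta \log p$, decode-time extrapolation along that direction, and a loss reweighting toward low-probability tokens. The sketch below illustrates one plausible reading of each, assuming two HuggingFace-style causal LMs that share a tokenizer. The sign convention ($\log p_{\text{rlvr}} - \log p_{\text{base}}$), the extrapolation coefficient alpha, and the $(1-p)^\beta$ weight are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of the abstract's three quantities, assuming HuggingFace-style
# causal LMs. Names, alpha, and the weight form are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def token_logprobs(model, input_ids):
    """Log-probability the model assigns to each realized token in a sequence."""
    logits = model(input_ids).logits                # (B, T, V)
    logp = F.log_softmax(logits[:, :-1], dim=-1)    # position t predicts token t+1
    return logp.gather(-1, input_ids[:, 1:, None]).squeeze(-1)  # (B, T-1)

@torch.no_grad()
def delta_logp(base_model, rlvr_model, input_ids):
    """Signed, token-level direction of the RLVR update.
    Assumed sign convention: Δlog p = log p_rlvr - log p_base."""
    return token_logprobs(rlvr_model, input_ids) - token_logprobs(base_model, input_ids)

@torch.no_grad()
def extrapolated_logits(base_model, rlvr_model, input_ids, alpha=1.5):
    """Test-time extrapolation: step past the RLVR model along Δlog p.
    alpha = 1 recovers the RLVR policy; alpha > 1 amplifies the update."""
    lp_base = F.log_softmax(base_model(input_ids).logits[:, -1], dim=-1)
    lp_rlvr = F.log_softmax(rlvr_model(input_ids).logits[:, -1], dim=-1)
    return lp_base + alpha * (lp_rlvr - lp_base)    # softmax-renormalize when sampling

def reweighted_nll(model, input_ids, beta=1.0):
    """Training-time reweighting sketch: upweight low-probability tokens, which
    the abstract links to higher Δlog p. The (1-p)^beta weight is an assumption."""
    logits = model(input_ids).logits
    logp = F.log_softmax(logits[:, :-1], dim=-1)
    tok_logp = logp.gather(-1, input_ids[:, 1:, None]).squeeze(-1)  # (B, T-1)
    weights = (1.0 - tok_logp.exp()).detach() ** beta  # low p -> weight near 1
    return -(weights * tok_logp).mean()
```

The `.detach()` on the weights keeps the reweighting from leaking gradients through the weighting term itself, so the loss still optimizes log-likelihood, just with per-token emphasis shifted toward tokens the model currently assigns low probability.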
