
Evaluating Counterfactual Strategic Reasoning in Large Language Models

Mar 19, 2026 · arXiv
Dimitrios Georgousis, Maria Lymperaiou, Angeliki Dimitriou, Giorgos Filandrianos, Giorgos Stamou

Abstract

We evaluate Large Language Models (LLMs) in repeated game-theoretic settings to assess whether strategic performance reflects genuine reasoning or reliance on memorized patterns. We consider two canonical games, the Prisoner's Dilemma (PD) and Rock-Paper-Scissors (RPS), on which we introduce counterfactual variants that alter payoff structures and action labels, breaking familiar symmetries and dominance relations. Our multi-metric evaluation framework compares default and counterfactual instantiations, revealing LLM limitations in incentive sensitivity, structural generalization, and strategic reasoning within counterfactual environments.
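To make the idea of a counterfactual variant concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of a default Prisoner's Dilemma payoff matrix alongside a variant with relabeled actions and remapped payoffs, under which the usual dominance relation no longer holds. The specific labels and payoff remapping are illustrative assumptions, not the paper's actual construction.

```python
# Default PD: keys = (player 1 action, player 2 action),
# values = (player 1 payoff, player 2 payoff).
DEFAULT_PD = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"):    (0, 5),
    ("Defect",    "Cooperate"): (5, 0),
    ("Defect",    "Defect"):    (1, 1),
}

def relabel_and_reweight(game, label_map, payoff_map):
    """Build a counterfactual variant: rename actions and remap payoffs."""
    return {
        (label_map[a], label_map[b]): (payoff_map[u], payoff_map[v])
        for (a, b), (u, v) in game.items()
    }

# Illustrative counterfactual: neutral action labels and an inverted payoff
# ordering, so "Defect"-style behavior is no longer dominant.
COUNTERFACTUAL_PD = relabel_and_reweight(
    DEFAULT_PD,
    label_map={"Cooperate": "Alpha", "Defect": "Beta"},
    payoff_map={0: 5, 1: 3, 3: 1, 5: 0},
)

def best_response(game, opponent_action, player=0):
    """The player's payoff-maximizing action against a fixed opponent action."""
    actions = {a for a, _ in game}
    if player == 0:
        return max(actions, key=lambda a: game[(a, opponent_action)][0])
    return max(actions, key=lambda b: game[(opponent_action, b)][1])

print(best_response(DEFAULT_PD, "Cooperate"))        # Defect dominates in the default game
print(best_response(COUNTERFACTUAL_PD, "Alpha"))     # the cooperation-analog is now the best response
```

A memorization-driven model might still play the "defect"-analog in the counterfactual game, even though the remapped incentives reward the opposite choice; this is the kind of incentive insensitivity the evaluation probes.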
