
Language Model Maps for Prompt-Response Distributions via Log-Likelihood Vectors

Mar 19, 2026 · arXiv
Yusuke Takase, Momose Oyama, Hidetoshi Shimodaira

Abstract

We propose a method that represents language models by log-likelihood vectors over prompt-response pairs and constructs model maps for comparing their conditional distributions. In this space, distances between models approximate the KL divergence between the corresponding conditional distributions. Experiments on a large collection of publicly available language models show that the maps capture meaningful global structure, including relationships to model attributes and task performance. The method also captures systematic shifts induced by prompt modifications and their approximate additive compositionality, suggesting a way to analyze and predict the effects of composite prompt operations. We further introduce pointwise mutual information (PMI) vectors to reduce the influence of unconditional distributions; in some cases, PMI-based model maps better reflect training-data-related differences. Overall, the framework supports the analysis of input-dependent model behavior.
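The core construction described above can be sketched in a few lines. The snippet below uses synthetic data and illustrative names (`L`, `Lc`, `D` are not from the paper): each model is represented by a vector of log-likelihoods over prompt-response pairs, the vectors are centered per pair, and squared Euclidean distances between them serve as a proxy for the KL divergence between the models' conditional distributions.

```python
import numpy as np

# Illustrative sketch with synthetic data:
# L[i, t] = log p_i(response_t | prompt_t) for model i on pair t.
rng = np.random.default_rng(0)
n_models, n_texts = 4, 1000
L = rng.normal(size=(n_models, n_texts))

# Center each coordinate across models to remove per-pair offsets
# (e.g. the intrinsic difficulty of each prompt-response pair).
Lc = L - L.mean(axis=0, keepdims=True)

# Mean squared difference between log-likelihood vectors acts as a
# proxy for divergence between the conditional distributions.
D = np.square(Lc[:, None, :] - Lc[None, :, :]).sum(axis=-1) / n_texts
```

A 2-D "model map" could then be obtained by applying a standard embedding method (e.g. classical MDS or PCA) to `Lc`; the PMI variant mentioned in the abstract would instead use vectors of `log p(response | prompt) - log p(response)` to discount the unconditional distribution.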
