
Sensitivity of LLMs' Explanations to the Training Randomness: Context, Class & Task Dependencies

Mar 9, 2026 · arXiv

Romain Loncour, Jérémie Bogaert, François-Xavier Standaert

Abstract

Transformer models are now a cornerstone of natural language processing, yet explaining their decisions remains a challenge. It was recently shown that the same model trained on the same data with different random seeds can yield very different explanations. In this paper, we investigate how the (syntactic) context, the classes to be learned, and the tasks influence this sensitivity of explanations to randomness. We show that all three have a statistically significant impact: smallest for the (syntactic) context, medium for the classes, and largest for the tasks.
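The measurement protocol the abstract describes (train the same model on the same data under different seeds, then compare the resulting explanations) can be sketched minimally. The sketch below is an illustrative assumption, not the paper's actual setup: it uses a tiny logistic-regression classifier instead of a transformer, treats absolute weights as the "explanation", and compares explanations across two seeds via Pearson correlation.

```python
import numpy as np

def train_logreg(X, y, seed, steps=500, lr=0.1):
    """Train a tiny logistic-regression classifier from a seed-dependent init.

    Stands in for the paper's transformer fine-tuning; the seed controls
    only the weight initialization here.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=1.0, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)          # gradient step
    return w

# Toy data: 40 samples, 5 features, labels determined by the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] > 0).astype(float)

# "Explanations" here are just |weights| (a stand-in for token attributions);
# retrain under two seeds and compare.
e1 = np.abs(train_logreg(X, y, seed=1))
e2 = np.abs(train_logreg(X, y, seed=2))
corr = np.corrcoef(e1, e2)[0, 1]
print(f"explanation correlation across seeds: {corr:.3f}")
```

Because this toy objective is convex, both seeds converge to similar weights and the correlation stays high; the paper's point is that for non-convex transformer training the analogous correlation can be much lower, and varies with context, class, and task.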
