
Exclusive Self Attention

Mar 10, 2026 · arXiv

Shuangfei Zhai

Abstract

We introduce exclusive self attention (XSA), a simple modification of self attention (SA) that improves the Transformer's sequence-modeling performance. The key idea is to constrain attention to capture only information orthogonal to a token's own value vector (thus excluding information from the token's own position), encouraging better context modeling. On standard language modeling benchmarks, XSA consistently outperforms SA across model sizes up to 2.7B parameters, with increasingly large gains as sequence length grows.
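One plausible reading of the abstract's mechanism is: compute standard attention, then remove from each token's output the component parallel to that token's own value vector. The sketch below (NumPy, single head, no masking) illustrates that interpretation only; the function name, the projection step, and the epsilon stabilizer are assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def exclusive_self_attention(q, k, v):
    """Sketch of XSA as interpreted from the abstract: standard scaled
    dot-product attention, followed by projecting each token's output
    onto the orthogonal complement of its own value vector."""
    d = q.shape[-1]
    out = softmax(q @ k.T / np.sqrt(d)) @ v  # standard attention output
    # Unit vectors along each token's own value vector (eps avoids /0).
    v_hat = v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)
    # Subtract the component of out parallel to v_hat, row by row.
    return out - (out * v_hat).sum(-1, keepdims=True) * v_hat
```

After the projection, each row of the output is (numerically) orthogonal to the corresponding row of `v`, which is the stated constraint: the attended representation carries no component along the token's own value.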
