
Multi-Aspect Knowledge Distillation for Language Model with Low-rank Factorization

Apr 3, 2026 · arXiv
Zihe Liu, Yulong Mao, Jinan Xu, Xinrui Peng, Kaiyu Huang

Abstract

Knowledge distillation is an effective technique for compressing pre-trained language models. However, existing methods focus only on the distribution of knowledge across layers, which can lose fine-grained information during alignment. To address this issue, we introduce Multi-aspect Knowledge Distillation (MaKD), which mimics the self-attention and feed-forward modules in greater depth to capture rich linguistic knowledge from different aspects. Experimental results demonstrate that MaKD achieves competitive performance against strong baselines under the same parameter-storage budget. In addition, our method also performs well when distilling auto-regressive models.
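
Since the abstract does not give the exact formulation, the following is a minimal PyTorch sketch of what a multi-aspect objective along these lines might look like: an attention-map term plus a feed-forward-output term, with student weights compressed via low-rank (SVD) factorization to meet a storage budget. All function names, tensor shapes, and the weighting `alpha` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def low_rank_factorize(weight: torch.Tensor, rank: int):
    # Truncated SVD: W (m x n) ≈ A (m x r) @ B (r x n), cutting storage
    # from m*n to r*(m+n) parameters when r is small.
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    sqrt_s = S[:rank].sqrt()
    return U[:, :rank] * sqrt_s, sqrt_s[:, None] * Vh[:rank]

def multi_aspect_distill_loss(teacher_attn, student_attn,
                              teacher_ffn, student_ffn, alpha=0.5):
    # Attention aspect: KL divergence between teacher and student
    # attention distributions, shape (batch, heads, seq, seq).
    attn_loss = F.kl_div(student_attn.clamp_min(1e-12).log(),
                         teacher_attn, reduction="batchmean")
    # Feed-forward aspect: MSE between FFN outputs (batch, seq, hidden);
    # assumes the student is projected to the teacher's hidden size first.
    ffn_loss = F.mse_loss(student_ffn, teacher_ffn)
    return alpha * attn_loss + (1.0 - alpha) * ffn_loss

if __name__ == "__main__":
    # Hypothetical example: factorize a 768x3072 FFN weight to rank 64.
    W = torch.randn(768, 3072)
    A, B = low_rank_factorize(W, rank=64)
    print("storage ratio:", (A.numel() + B.numel()) / W.numel())
```

The two loss terms correspond to the "aspects" the abstract names (self-attention and feed-forward modules); how the paper actually weights or aligns them is not specified here.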
