Visualization of Machine Learning Models through Their Spatial and Temporal Listeners

Mar 29, 2026 · arXiv
Siyu Wu, Lei Shi, Lei Xia, Cenyang Wu, Zipeng Liu, Yingchaojie Feng, +2 more

Abstract

Model visualization (ModelVis) has emerged as a major research direction, yet existing taxonomies are largely organized by data or tasks, making it difficult to treat models as first-class analysis objects. We present a model-centric two-stage framework that employs abstract listeners to capture spatial and temporal model behaviors, and then connects the translated model behavior data to the classical InfoVis pipeline. To apply the framework at scale, we build a retrieval-augmented human--large language model (LLM) extraction workflow and curate a corpus of 128 VIS/VAST ModelVis papers with 331 coded figures. Our analysis reveals a dominant result-centric focus: visualizing model outcomes, quantitative/nominal data types, statistical charts, and performance evaluation. Citation-weighted trends further indicate that less frequent model-mechanism-oriented studies have disproportionately high impact, yet have received less attention in recent work. Overall, the framework offers a general approach for comparing existing ModelVis systems and guiding future designs.
