Delta1 with LLM: symbolic and neural integration for credible and explainable reasoning

Mar 13, 2026 · arXiv

Yang Xu, Jun Liu, Shuwei Chen, Chris Nugent, Hailing Guo

Abstract

Neuro-symbolic reasoning increasingly demands frameworks that unite the formal rigor of logic with the interpretability of large language models (LLMs). We introduce an end-to-end, explainability-by-construction pipeline integrating the Automated Theorem Generator Delta1, based on the full triangular standard contradiction (FTSC), with LLMs. Delta1 deterministically constructs minimal unsatisfiable clause sets and complete theorems in polynomial time, ensuring both soundness and minimality by construction. The LLM layer verbalizes each theorem and proof trace into coherent natural-language explanations and actionable insights. Empirical studies across healthcare, compliance, and regulatory domains show that Delta1 with LLM enables interpretable, auditable, and domain-aligned reasoning. This work advances the convergence of logic, language, and learning, positioning constructive theorem generation as a principled foundation for neuro-symbolic explainable AI.
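To make the pipeline shape concrete, here is a minimal, illustrative sketch (not the authors' implementation): a toy unsatisfiable propositional clause set is checked symbolically by exhaustive assignment enumeration, and the symbolic result is formatted into a prompt for a downstream LLM explanation layer. All names (`CLAUSES`, `is_unsatisfiable`, `explanation_prompt`) are hypothetical and chosen only for this example; Delta1's actual FTSC-based construction and the paper's LLM interface are not reproduced here.

```python
from itertools import product

# Clauses are frozensets of signed literals: ("P", True) means P, ("P", False) means ¬P.
# The tiny clause set {P, ¬P ∨ Q, ¬Q} is unsatisfiable.
CLAUSES = [
    frozenset({("P", True)}),
    frozenset({("P", False), ("Q", True)}),
    frozenset({("Q", False)}),
]

def is_unsatisfiable(clauses):
    """Brute-force check: return True if no truth assignment satisfies every clause."""
    atoms = sorted({atom for clause in clauses for atom, _ in clause})
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(any(assignment[a] == sign for a, sign in clause) for clause in clauses):
            return False  # found a satisfying assignment
    return True

def explanation_prompt(clauses):
    """Format the symbolic result as a prompt for an LLM explanation layer (hypothetical)."""
    rendered = []
    for clause in clauses:
        rendered.append(" ∨ ".join(a if sign else f"¬{a}" for a, sign in sorted(clause)))
    status = "unsatisfiable" if is_unsatisfiable(clauses) else "satisfiable"
    return (
        f"Explain in plain language why the following clause set is {status}:\n"
        + "\n".join(rendered)
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to an LLM client to produce the
    # natural-language explanation; that call is deliberately omitted here.
    print(explanation_prompt(CLAUSES))
```

In the paper's framing, the symbolic stage (here, the brute-force check) guarantees correctness of the logical verdict, while the language stage only verbalizes an already-verified artifact, which is what makes the resulting explanation auditable.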
