SoLA: Solver-Layer Adaption of LLM for Better Logic Reasoning

Y. Zhang, H.-L. Zhen, Z. Pei, Y. Lian, L. Yin, M. Yuan, and B. Yu. arXiv preprint arXiv:2402.11903, 2024. arxiv.org
Considering the challenges that large language models (LLMs) face in logical reasoning, prior efforts have sought to transform problem-solving through tool learning. While progress has been made on small-scale problems, solving industrial cases remains difficult due to their large scale and intricate expressions. In this paper, we propose a novel solver-layer adaptation (SoLA) method, in which we introduce a solver as a new layer of the LLM to differentiably guide solutions towards satisfiability. In SoLA, the LLM aims to comprehend the search space described in natural language and identify local solutions of the highest quality, while the solver layer focuses solely on constraints not satisfied by the initial solution. Leveraging MaxSAT as a bridge, we define forward and backward transfer gradients, enabling the final model to converge to a satisfying solution or to prove unsatisfiability. Backdoor theory guarantees that SoLA can obtain accurate solutions within a polynomial number of loops. We evaluate SoLA on various datasets and empirically demonstrate that it consistently outperforms existing symbolic solvers (including Z3 and Kissat) and tool-learning methods in efficiency on large-scale problem-solving.
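The abstract only names the key mechanism (an LLM proposal refined by a differentiable solver layer acting on the unsatisfied constraints), so the following is a minimal illustrative sketch, not the authors' implementation. It assumes a standard probabilistic relaxation of clause satisfaction as the differentiable signal, uses a random soft assignment as a stand-in for the LLM's natural-language-derived initial guess, and restricts gradient updates to the clauses the current rounding violates, mirroring the abstract's division of labor; the `refine` function, CNF encoding, and hyperparameters are all assumptions.

```python
# Minimal sketch of a SoLA-style refinement loop. Everything below is an
# illustrative assumption: the paper's actual solver layer, MaxSAT bridge,
# and forward/backward transfer gradients are only named in the abstract.

import random

# CNF in DIMACS-style signed integers: [1, -2] means (x1 OR NOT x2).
CLAUSES = [[1, -2], [-1, 3], [2, 3], [-3, -2]]
NUM_VARS = 3

def lit_prob(lit, x):
    """Probability that literal `lit` is true under soft assignment x."""
    v = x[abs(lit) - 1]
    return v if lit > 0 else 1.0 - v

def unsatisfied(clauses, x):
    """Clauses violated by rounding x to a Boolean assignment."""
    return [c for c in clauses
            if not any((x[abs(l) - 1] > 0.5) == (l > 0) for l in c)]

def refine(clauses, x, lr=0.5, steps=200):
    """Gradient-style refinement restricted, as the abstract describes, to
    the constraints the current (rounded) solution leaves unsatisfied."""
    for _ in range(steps):
        hard = unsatisfied(clauses, x)
        if not hard:
            return x, True  # rounded assignment satisfies every clause
        grad = [0.0] * len(x)
        for clause in hard:
            # Loss for a clause is its unsat probability prod_j (1 - p_j);
            # differentiate with the product rule, literal by literal.
            for j, lit in enumerate(clause):
                rest = 1.0
                for k, other in enumerate(clause):
                    if k != j:
                        rest *= 1.0 - lit_prob(other, x)
                i = abs(lit) - 1
                grad[i] += -rest if lit > 0 else rest
        # Descend and clamp the soft assignment back into [0, 1].
        x = [min(1.0, max(0.0, xi - lr * g)) for xi, g in zip(x, grad)]
    return x, False

random.seed(0)
x0 = [random.random() for _ in range(NUM_VARS)]  # stand-in for the LLM guess
x, solved = refine(CLAUSES, x0)
print("satisfied:", solved, "assignment:", [v > 0.5 for v in x])
```

On this toy formula the loop reaches a satisfying assignment within a few iterations. The point is only to make the abstract's mechanism concrete: the initial guess already satisfies most clauses, and gradient updates driven by the handful of violated ones repair the rest.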