Consistency-Constrained Large Language Models for Reliable Legal Reasoning


Lyrissa Lidsky

Abstract

Large language models (LLMs) exhibit strong general reasoning capabilities but remain vulnerable to instability and hallucinations, particularly under small variations in user prompts. This work introduces a consistency-constrained framework that integrates counterfactual alignment signals into both training and decoding, reducing divergence between predictions on semantically equivalent inputs. Experiments across multiple reasoning benchmarks show that the proposed model significantly improves stability and lowers hallucination rates without sacrificing task accuracy. These findings demonstrate that consistency can be explicitly shaped as an intrinsic property of LLMs, enabling more predictable and reliable reasoning behavior.
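The abstract does not spell out the training objective, but one plausible instantiation of the described idea is a consistency penalty that minimizes the divergence between the model's output distributions on an input and on a semantically equivalent paraphrase. The sketch below is an assumption, not the paper's method: the names consistency_loss, training_step, alpha, and the batch layout are all hypothetical, and a symmetric KL term is only one of several divergences the authors could be using.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig: torch.Tensor,
                     logits_para: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between output distributions for an
    input and a paraphrase. Assumes both tensors have the same shape
    (batch, seq_len, vocab), e.g. via padding or answer-token alignment."""
    log_p = F.log_softmax(logits_orig, dim=-1)
    log_q = F.log_softmax(logits_para, dim=-1)
    # F.kl_div(input, target, log_target=True) computes KL(target || input).
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

def training_step(model, batch, alpha: float = 0.1):
    """Hypothetical combined objective: task loss on the original prompt
    plus an alpha-weighted consistency penalty against its paraphrase."""
    out_orig = model(**batch["original"])      # assumed batch structure
    out_para = model(**batch["paraphrase"])
    task_loss = out_orig.loss
    cons = consistency_loss(out_orig.logits, out_para.logits)
    return task_loss + alpha * cons
```

On the decoding side, the abstract's "consistency-constrained decoding" could plausibly mean generating candidates for several paraphrases of a prompt and selecting the answer on which they agree, though the paper's exact mechanism is not stated here.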

Article Details

How to Cite
Lidsky, L. (2025). Consistency-Constrained Large Language Models for Reliable Legal Reasoning. Journal of Computer Science and Software Applications, 5(12). Retrieved from https://mfacademia.org/index.php/jcssa/article/view/252