RetrySQL: text-to-SQL training with retry data for self-correcting query generation

arXiv — cs.LG · Monday, November 17, 2025 at 5:00:00 AM
arXiv:2507.02529v2 (announce type: replace)

Abstract: The text-to-SQL task is an active challenge in Natural Language Processing. Many existing solutions focus on using black-box language models extended with specialized components within customized end-to-end text-to-SQL pipelines. While these solutions use both closed-source proprietary language models and coding-oriented open-source models, there is a lack of research regarding SQL-specific generative models. At the same time, recent advancements in self-correcting generation strategies show promise for improving the capabilities of existing architectures. The application of these concepts to the text-to-SQL task remains unexplored. In this paper, we introduce RetrySQL, a new approach to training text-to-SQL generation models. We prepare reasoning steps for reference SQL queries and then corrupt them to create retry data that contains both incorrect and corrected steps, separated by a special token. We continuously pre-train an open-source coding model with this data and demonstrate that retry steps yield an improvement of up to 4 percentage points in both overall and challenging execution accuracy metrics, compared to pre-training without retry data. Additionally, we confirm that supervised fine-tuning with LoRA is ineffective for learning from retry data and that full-parameter pre-training is necessary for that task. We show that the model learns the self-correcting behavior and that the increase in downstream accuracy metrics results from this additional skill. Finally, we incorporate RetrySQL-trained models into the full text-to-SQL pipeline and show that they are competitive in terms of execution accuracy with proprietary models that contain orders of magnitude more parameters. RetrySQL demonstrates that self-correction can be learned in the text-to-SQL task and provides a novel way of improving generation accuracy for SQL-oriented language models.
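The abstract does not name the special token or describe the corruption procedure, but the data construction it outlines can be sketched. Below is a minimal, hypothetical Python sketch assuming a placeholder `[BACK]` token and a toy corruption function; the paper's actual token, corruption strategy, and formatting may differ.

```python
import random

RETRY_TOKEN = "[BACK]"  # hypothetical name; the abstract only says "a special token"

def corrupt_step(step: str) -> str:
    """Toy corruption: mangle one token in the reasoning step.
    A real pipeline would likely use schema- or model-aware perturbations."""
    tokens = step.split()
    i = random.randrange(len(tokens))
    tokens[i] = tokens[i] + "_wrong"
    return " ".join(tokens)

def make_retry_example(reasoning_steps: list[str], sql: str, p_corrupt: float = 0.3) -> str:
    """Interleave corrupted steps with their corrections, separated by the
    retry token, then append the reference SQL query."""
    out = []
    for step in reasoning_steps:
        if random.random() < p_corrupt:
            out.append(corrupt_step(step))  # incorrect step first ...
            out.append(RETRY_TOKEN)         # ... then the retry marker ...
        out.append(step)                    # ... then the corrected step
    out.append(sql)
    return "\n".join(out)

if __name__ == "__main__":
    steps = [
        "Identify the relevant table: orders",
        "Filter rows where status = 'shipped'",
        "Aggregate with COUNT(*)",
    ]
    print(make_retry_example(steps, "SELECT COUNT(*) FROM orders WHERE status = 'shipped';"))
```

Pre-training on sequences of this shape is what would let the model emit the retry token at inference time and continue with a corrected step, which is the self-correcting behavior the abstract reports.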


Recommended Readings
X-VMamba: Explainable Vision Mamba
Positive · Artificial Intelligence
The X-VMamba model introduces a controllability-based interpretability framework for State Space Models (SSMs), particularly the Mamba architecture. This framework aims to clarify how Vision SSMs process spatial information, which has been a challenge due to the absence of transparent mechanisms. The proposed methods include a Jacobian-based approach for any SSM architecture and a Gramian-based method for diagonal SSMs, both designed to enhance understanding of internal state dynamics while maintaining computational efficiency.
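The blurb does not give X-VMamba's exact formulation. As a point of reference, here is a short sketch of the classical infinite-horizon controllability Gramian for a stable diagonal discrete-time SSM, the kind of quantity a Gramian-based interpretability method for diagonal SSMs would presumably build on; the function name and usage are illustrative assumptions.

```python
import numpy as np

def controllability_gramian_diag(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Closed-form infinite-horizon controllability Gramian for the stable
    diagonal discrete-time SSM  x_{k+1} = diag(a) x_k + b u_k:
    W[i, j] = b_i * b_j / (1 - a_i * a_j)."""
    assert np.all(np.abs(a) < 1.0), "stability required for the closed form"
    return np.outer(b, b) / (1.0 - np.outer(a, a))

if __name__ == "__main__":
    a = np.array([0.9, 0.5, 0.1])  # per-channel decay rates
    b = np.array([1.0, 1.0, 1.0])
    W = controllability_gramian_diag(a, b)
    # Diagonal entries score how strongly the input can drive each state
    # dimension, a rough per-channel attribution signal.
    print(np.diag(W))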
LoRaCompass: Robust Reinforcement Learning to Efficiently Search for a LoRa Tag
Positive · Artificial Intelligence
The Long-Range (LoRa) protocol is increasingly used in tags for mentally incapacitated persons (MIPs) to prevent them from going missing. A new study introduces LoRaCompass, a reinforcement learning model aimed at efficiently locating these LoRa tags in unknown environments. This model addresses challenges such as domain shift and signal fluctuation, which can lead to significant localization errors, by learning robust spatial representations from received signal strength indicators (RSSI).
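LoRaCompass itself is a learned RL policy whose details the blurb does not give. To make the signal-fluctuation challenge concrete, here is a toy hill-climbing baseline (not the paper's method) over noisy RSSI on a 1-D line; the path-loss model and all names are illustrative assumptions.

```python
import math
import random

TAG_POS = 17  # hidden tag location in this toy world

def rssi(pos: int) -> float:
    """Noisy signal strength: log-distance path loss plus Gaussian fluctuation."""
    d = max(abs(pos - TAG_POS), 1)
    return -10.0 * math.log10(d) + random.gauss(0.0, 1.0)

def greedy_search(start: int = 0, patience: int = 5) -> int:
    """Hill-climb on smoothed RSSI; averaging several readings per step
    mitigates the fluctuation that causes large localization errors."""
    pos, best, stalled = start, -1e9, 0
    while stalled < patience:
        readings = {p: sum(rssi(p) for _ in range(8)) / 8 for p in (pos - 1, pos + 1)}
        nxt = max(readings, key=readings.get)
        if readings[nxt] > best:
            best, pos, stalled = readings[nxt], nxt, 0
        else:
            stalled += 1
    return pos

if __name__ == "__main__":
    print("stopped near", greedy_search())
```

A baseline like this degrades badly under domain shift (different path-loss exponents, multipath), which is the gap a learned, robust spatial representation is meant to close.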
destroR: Attacking Transfer Models with Obfuscous Examples to Discard Perplexity
Neutral · Artificial Intelligence
The paper titled 'destroR: Attacking Transfer Models with Obfuscous Examples to Discard Perplexity' discusses advancements in machine learning and neural networks, particularly in natural language processing. It highlights the vulnerabilities of machine learning models and proposes a novel adversarial attack strategy that generates ambiguous inputs to confuse these models. The research aims to enhance the robustness of machine learning systems by developing adversarial instances with maximum perplexity.
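destroR's actual attack and scoring model are not described in the blurb. The general idea of searching for perplexity-maximizing variants of an input can be sketched as follows; the greedy substitution loop, the stand-in bigram scorer, and the substitute table are all illustrative assumptions, not the paper's method.

```python
import math
import random

def perplexity(text: str, bigram_logp) -> float:
    """Perplexity under a caller-supplied bigram log-probability function."""
    words = text.split()
    if len(words) < 2:
        return 1.0
    logp = sum(bigram_logp(a, b) for a, b in zip(words, words[1:]))
    return math.exp(-logp / (len(words) - 1))

def toy_bigram_logp(a: str, b: str) -> float:
    # Stand-in scorer; a real attack would query a surrogate language model.
    return -1.0 if a[-1:] == b[:1] else -3.0

def attack(text: str, substitutes: dict, steps: int = 50) -> str:
    """Greedy random substitution: keep a swap whenever it raises perplexity."""
    words = text.split()
    best = perplexity(text, toy_bigram_logp)
    for _ in range(steps):
        i = random.randrange(len(words))
        if words[i] not in substitutes:
            continue
        trial = words.copy()
        trial[i] = random.choice(substitutes[words[i]])
        ppl = perplexity(" ".join(trial), toy_bigram_logp)
        if ppl > best:
            words, best = trial, ppl
    return " ".join(words)

if __name__ == "__main__":
    subs = {"movie": ["film", "flick"], "great": ["grand", "swell"]}
    print(attack("the movie was great overall", subs))
```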
NP-LoRA: Null Space Projection Unifies Subject and Style in LoRA Fusion
Positive · Artificial Intelligence
The article introduces NP-LoRA, a novel framework for Low-Rank Adaptation (LoRA) fusion that addresses the issue of interference in existing methods. Traditional weight-based merging often leads to one LoRA dominating another, resulting in degraded fidelity. NP-LoRA utilizes a projection-based approach to maintain subspace separation, thereby enhancing the quality of fusion by preventing structural interference among principal directions.
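The core idea the blurb describes, projecting one LoRA update away from the principal subspace of another before merging, can be sketched directly. A minimal NumPy sketch under that reading; the rank choice, function names, and merge rule are assumptions, and the exact NP-LoRA formulation may differ.

```python
import numpy as np

def principal_subspace(delta_w: np.ndarray, rank: int) -> np.ndarray:
    """Top-`rank` left singular vectors of a LoRA weight update."""
    u, _, _ = np.linalg.svd(delta_w, full_matrices=False)
    return u[:, :rank]

def null_space_merge(dw_subject: np.ndarray, dw_style: np.ndarray, rank: int = 4) -> np.ndarray:
    """Keep the subject update intact; project the style update onto the
    orthogonal complement of the subject's principal directions, so the
    two no longer interfere along those directions."""
    u = principal_subspace(dw_subject, rank)
    proj_null = np.eye(u.shape[0]) - u @ u.T  # projector onto the null space of span(U)
    return dw_subject + proj_null @ dw_style

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, r = 64, 4
    dw1 = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))  # low-rank "subject" update
    dw2 = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))  # low-rank "style" update
    merged = null_space_merge(dw1, dw2, rank=r)
    u = principal_subspace(dw1, r)
    # The style component of the merged update has (numerically) zero energy
    # along the subject's principal directions:
    print(np.linalg.norm(u.T @ (merged - dw1)))
```

The printed norm is ~0 by construction, which is the subspace-separation property that prevents one adapter from dominating the other in a naive weight sum.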