[2604.04869] Optimizing LLM Prompt Engineering with DSPy Based Declarative Learning
Computer Science > Machine Learning
arXiv:2604.04869 (cs)
[Submitted on 6 Apr 2026]

Title: Optimizing LLM Prompt Engineering with DSPy Based Declarative Learning
Authors: Shiek Ruksana, Sailesh Kiran Kurra, Thipparthi Sanjay Baradwaj

Abstract: Large Language Models (LLMs) have shown strong performance across a wide range of natural language processing tasks; however, their effectiveness is highly dependent on prompt design, structure, and embedded reasoning signals. Conventional prompt engineering methods largely rely on heuristic trial-and-error processes, which limits scalability, reproducibility, and generalization across tasks. DSPy, a declarative framework for optimizing text-processing pipelines, offers an alternative approach by enabling automated, modular, and learnable prompt construction for LLM-based applications. This paper presents a systematic study of DSPy-based declarative learning for prompt optimization, with emphasis on prompt synthesis, correction, calibration, and adaptive reasoning control. We introduce a unified DSPy LLM architecture that combines symbolic planning, gradient-free optimization, and automated module rewriting to reduce hallucinations, improve factual grounding, and avoid unnecessary prompt complexity. Experimental evaluations conducted on reasoning tasks, retrieval-augmented generation, and multi-...
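To make the abstract's notion of "declarative, learnable prompt construction" concrete, the following is a toy sketch of the underlying idea: a task is declared by its input and output fields (a "signature"), and the prompt string is derived mechanically from that declaration rather than hand-written. This is an illustrative simplification only; all class and function names here are hypothetical, and it is not DSPy's actual API.

```python
# Toy illustration of declarative prompt construction (NOT the real DSPy API).
# A task is declared by its fields; the prompt text is generated from the
# declaration, so an optimizer could later rewrite instructions or field
# ordering without touching any hand-written template.

class Signature:
    """Declares a task by its input fields, output fields, and instructions."""
    def __init__(self, inputs, outputs, instructions=""):
        self.inputs = inputs          # names of fields the caller supplies
        self.outputs = outputs        # names of fields the LM should produce
        self.instructions = instructions

def build_prompt(sig, **values):
    """Render a concrete prompt from the signature plus input values."""
    lines = []
    if sig.instructions:
        lines.append(sig.instructions)
    for name in sig.inputs:
        lines.append(f"{name.capitalize()}: {values[name]}")
    for name in sig.outputs:
        lines.append(f"{name.capitalize()}:")  # left blank for the LM to fill
    return "\n".join(lines)

# Declare a question-answering task once; prompts follow from the declaration.
qa = Signature(inputs=["question"], outputs=["answer"],
               instructions="Answer the question concisely.")
print(build_prompt(qa, question="What does DSPy optimize?"))
```

In this framing, a prompt optimizer operates on the `Signature` (e.g., mutating `instructions`) and re-renders prompts, which is what makes the pipeline learnable rather than hand-tuned.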