[2502.05310] Oracular Programming: A Modular Foundation for Building LLM-Enabled Software
Summary
The paper introduces 'oracular programming,' a paradigm that integrates traditional computations with LLMs to enhance software reliability and modularity.
Why It Matters
As large language models (LLMs) become integral to software development, understanding how to effectively integrate them with traditional programming is crucial. This paradigm addresses challenges in building reliable software by allowing for modular composition and dynamic decision-making, which can significantly improve the development process and outcomes in AI applications.
Key Takeaways
- Oracular programming separates core logic from search logic for better modularity.
- It allows for dynamic decision-making using LLMs based on user-provided examples.
- The approach enhances reliability and scalability in software development.
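The first takeaway, separating core logic from search logic, can be illustrated with a minimal sketch. This is not the paper's actual API; the names `Oracle`, `choose`, and `stub_oracle` are hypothetical, and a trivial stub stands in for the LLM. The core program states *what* must be decided at a choice point, while the oracle decides *how*:

```python
from typing import Callable, Sequence

# An oracle maps a question and candidate options to one chosen option.
# In the paradigm, an LLM would play this role; here a stub does.
Oracle = Callable[[str, Sequence[str]], str]

def summarize(document: str, choose: Oracle) -> str:
    """Core logic: a fixed strategy with one unresolved choice point."""
    style = choose("Pick a summary style for this document.",
                   ["one-liner", "bullet points"])
    if style == "one-liner":
        return document.split(".")[0] + "."
    return "\n".join("- " + s.strip()
                     for s in document.split(".") if s.strip())

def stub_oracle(prompt: str, options: Sequence[str]) -> str:
    """Search logic: stands in for an LLM; always picks the first option."""
    return options[0]

print(summarize("Oracular programming separates concerns. It uses oracles.",
                stub_oracle))
# → Oracular programming separates concerns.
```

Because `summarize` never mentions any particular model or search procedure, the oracle can be swapped, tuned, or replaced with a real LLM call without touching the core strategy.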
Paper Details
Subject: Computer Science > Programming Languages (arXiv:2502.05310 [cs])
Submitted on 7 Feb 2025 (v1); last revised 24 Feb 2026 (this version, v4)
Authors: Jonathan Laurent, André Platzer
Abstract
Large Language Models can solve a wide range of tasks from just a few examples, but they remain difficult to steer and lack a capability essential for building reliable software at scale: the modular composition of computations under enforceable contracts. As a result, they are typically embedded in larger software pipelines that use domain-specific knowledge to decompose tasks and improve reliability through validation and search. Yet the complexity of writing, tuning, and maintaining such pipelines has so far limited their sophistication. We propose oracular programming: a foundational paradigm for integrating traditional, explicit computations with inductive oracles such as LLMs. It rests on two directing principles: the full separation of core and search logic, and the treatment of few-shot examples as grounded and evolvable program components. Within this paradigm, experts express high-level problem-solving strategies as programs with unresolved choice points. These choice points are resolved at runtime by LLMs, which generalize from user-provided...
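The abstract's second principle, treating few-shot examples as grounded and evolvable program components, can also be sketched. Again, this is an illustrative construction rather than the paper's API: `ChoicePoint` and `majority_demo_oracle` are hypothetical names, and the oracle is a stub that consults the demonstrations directly instead of prompting an LLM. The point is that demonstrations live alongside the choice point as ordinary data, so they can be versioned and evolved like any other part of the program:

```python
from dataclasses import dataclass, field

@dataclass
class ChoicePoint:
    """An unresolved decision in the core program, bundled with its
    few-shot demonstrations as a first-class, editable component."""
    question: str
    options: list
    demonstrations: list = field(default_factory=list)  # (input, answer) pairs

    def resolve(self, oracle):
        # The oracle (an LLM in the paradigm; a stub here) generalizes
        # from the attached demonstrations to pick an option.
        return oracle(self.question, self.options, self.demonstrations)

def majority_demo_oracle(question, options, demos):
    """Stub oracle: picks the option appearing most often in the demos."""
    counts = {o: sum(1 for _, ans in demos if ans == o) for o in options}
    return max(options, key=lambda o: counts[o])

cp = ChoicePoint(
    question="Classify the sentiment.",
    options=["positive", "negative"],
    demonstrations=[("great film", "positive"), ("loved it", "positive"),
                    ("boring", "negative")],
)
print(cp.resolve(majority_demo_oracle))
# → positive
```

Because the demonstrations are plain data attached to the choice point, adding, removing, or correcting an example changes the program's runtime behavior without rewriting the core strategy.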