[2603.01494] Inference-Time Safety For Code LLMs Via Retrieval-Augmented Revision


Computer Science > Software Engineering
arXiv:2603.01494 (cs) [Submitted on 2 Mar 2026]

Title: Inference-Time Safety For Code LLMs Via Retrieval-Augmented Revision
Authors: Manisha Mukherjee, Vincent J. Hellendoorn

Abstract: Large Language Models (LLMs) are increasingly deployed for code generation in high-stakes software development, yet their limited transparency in security reasoning and brittleness to evolving vulnerability patterns raise critical trustworthiness concerns. Models trained on static datasets cannot readily adapt to newly discovered vulnerabilities or changing security standards without retraining, leading to the repeated generation of unsafe code. We present a principled approach to trustworthy code generation by design that operates as an inference-time safety mechanism. Our approach employs retrieval-augmented generation to surface relevant security risks in generated code and retrieve related security discussions from a curated Stack Overflow knowledge base, which are then used to guide an LLM during code revision. This design emphasizes three aspects relevant to trustworthiness: (1) interpretability, through transparent safety interventions grounded in expert community explanations; (2) robustness, by allowing adaptation to evolving security practices without model retraining; a...
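The abstract describes a generate-retrieve-revise loop: surface security risks in generated code, retrieve related discussions from a curated Stack Overflow knowledge base, and feed both to an LLM for revision. The paper's actual retriever, corpus, and prompts are not given in the abstract; the sketch below is a purely illustrative stand-in, with a toy three-entry knowledge base and a keyword-overlap retriever in place of whatever the authors use.

```python
import re

# Toy stand-in for the curated Stack Overflow knowledge base:
# (topic keywords, security discussion). Entries are illustrative only.
KNOWLEDGE_BASE = [
    ({"subprocess", "shell"},
     "Avoid shell=True; pass argument lists to prevent command injection."),
    ({"sql", "execute", "query"},
     "Use parameterized queries instead of string formatting to prevent SQL injection."),
    ({"pickle", "loads"},
     "Never unpickle untrusted data; prefer JSON for untrusted input."),
]

def tokenize(code: str) -> set[str]:
    """Lowercased word tokens of a code snippet (toy lexical retriever)."""
    return set(re.findall(r"[a-z_]+", code.lower()))

def retrieve(code: str, k: int = 2) -> list[str]:
    """Return up to k discussions whose keywords overlap the code's tokens."""
    tokens = tokenize(code)
    scored = [(len(kw & tokens), doc) for kw, doc in KNOWLEDGE_BASE]
    scored = [(s, doc) for s, doc in scored if s > 0]
    scored.sort(key=lambda p: -p[0])
    return [doc for _, doc in scored[:k]]

def build_revision_prompt(code: str) -> str:
    """Assemble a prompt asking an LLM to revise code under retrieved guidance."""
    guidance = retrieve(code)
    bullets = "\n".join(f"- {g}" for g in guidance) or "- (no relevant guidance found)"
    return (
        "Revise the following code to address these security concerns:\n"
        f"{bullets}\n\nCode:\n{code}\n"
    )

unsafe = 'cursor.execute("SELECT * FROM users WHERE name = \'%s\'" % name)'
print(build_revision_prompt(unsafe))
```

In the paper's setting the final prompt would be sent to an LLM for revision; the key inference-time property is that updating the knowledge base (not the model weights) is enough to cover newly discovered vulnerability patterns.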

Originally published on March 03, 2026. Curated by AI News.
