[2602.23407] Learning to Generate Secure Code via Token-Level Rewards


Computer Science > Cryptography and Security — arXiv:2602.23407 (cs) — [Submitted on 26 Feb 2026]

Title: Learning to Generate Secure Code via Token-Level Rewards

Authors: Jiazheng Quan, Xiaodong Li, Bin Wang, Guo An, Like Liu, Degen Huang, Lin Liu, Chengbin Hou

Abstract: Large language models (LLMs) have demonstrated strong capabilities in code generation, yet they remain prone to producing security vulnerabilities. Existing approaches commonly suffer from two key limitations: the scarcity of high-quality security data and coarse-grained reinforcement learning reward signals. To address these challenges, we propose Vul2Safe, a new secure code generation framework that leverages LLM self-reflection to construct high-confidence repair pairs from real-world vulnerabilities, and further generates diverse implicit prompts to build the PrimeVul+ dataset. Meanwhile, we introduce SRCode, a novel training framework that pioneers the use of token-level rewards in reinforcement learning for code security, enabling the model to continuously attend to and reinforce critical fine-grained security patterns during training. Compared with traditional instance-level reward schemes, our approach allows for more precise optimization of local security implementations. Extensive experiments show that PrimeVul+ and SRCode substantially reduce se...
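The abstract contrasts instance-level rewards (one scalar spread over the whole generated sample) with token-level rewards that concentrate the learning signal on security-critical tokens. SRCode's actual reward design is not described here, so the sketch below is purely illustrative: the `CRITICAL` token set, the weights, and the token list are hypothetical, chosen only to show how a per-token reward vector would differ from a uniform one before entering a policy-gradient update.

```python
# Illustrative contrast between instance-level and token-level reward
# assignment for a generated code sequence. The CRITICAL set and the
# weights (2.0 / 0.1) are hypothetical, not taken from the paper.

# Hypothetical tokens that realize a security-relevant pattern.
CRITICAL = frozenset({"parameterized_query", "escape", "sanitize"})

def instance_level_rewards(tokens, passed_security_check):
    """Instance-level scheme: every token gets the same scalar reward."""
    r = 1.0 if passed_security_check else -1.0
    return [r] * len(tokens)

def token_level_rewards(tokens, passed_security_check):
    """Token-level scheme: security-critical tokens are up-weighted,
    so the policy gradient concentrates on the local security fix."""
    base = 1.0 if passed_security_check else -1.0
    return [base * (2.0 if t in CRITICAL else 0.1) for t in tokens]

if __name__ == "__main__":
    tokens = ["cursor", ".", "execute", "(", "parameterized_query", ")"]
    print(instance_level_rewards(tokens, True))  # uniform signal
    print(token_level_rewards(tokens, True))     # peaked at the safe call
```

Under an instance-level scheme, the tokens that actually implement the fix receive no more credit than punctuation; the token-level vector makes the gradient contribution of the security-relevant token dominate, which is the precision advantage the abstract claims.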

Originally published on March 02, 2026. Curated by AI News.

Related Articles

LLMs

[D] We reimplemented Claude Code entirely in Python — open source, works with local models

Hey everyone, We just released Claw Code Agent — a full Python reimplementation of the Claude Code agent architecture, based on the rever...

Reddit - Machine Learning · 1 min ·
LLMs

[D] Production gaps in context-window compression for AI agent memory

I've been working on AI memory infrastructure and recently spent a few weeks reading through the source code of an open-source context-win...

Reddit - Machine Learning · 1 min ·
LLMs

How Claude Web tried to break out its container, provided all files on the system, scanned the networks, etc

Originally wasn't going to write about this - on one hand thought it's prolly already known, on the other hand I didn't feel like it was ...

Reddit - Artificial Intelligence · 1 min ·
LLMs

Combining the robot operating system with LLMs for natural-language control

Over the past few decades, robotics researchers have developed a wide range of increasingly advanced robots that can autonomously complet...

Reddit - Artificial Intelligence · 1 min ·
