[2506.18777] Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs

arXiv - Machine Learning · 4 min read

Summary

The paper introduces Programming by Backprop (PBB), a novel training method for large language models (LLMs) that allows them to learn procedural knowledge from declarative instructions, enhancing sample efficiency.

Why It Matters

As LLMs increasingly integrate into various applications, understanding how to effectively finetune them is crucial. PBB offers a promising approach to improve their learning efficiency and reliability, which can significantly impact data curation and AI safety practices.

Key Takeaways

  • PBB allows LLMs to learn from declarative instructions, enhancing procedural knowledge acquisition.
  • One instruction can replace up to 100 execution examples, improving sample efficiency.
  • The method separates learning how instructions map to behaviour from internalising new instructions.
  • PBB shows potential benefits over traditional homogeneous data training methods.
  • The findings have important implications for data curation and AI safety.
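The two-stage separation in the takeaways above can be illustrated as a data-curriculum construction. This is a minimal, hypothetical sketch (the function names and dataset format are illustrative assumptions, not the authors' code): stage 1 pairs instructions with worked execution examples to teach the instruction-to-behaviour mapping, while stage 2 presents new instructions declaratively, with no demonstrations, so the behaviour is "programmed" into the weights by backprop alone.

```python
# Hypothetical sketch of a PBB-style finetuning curriculum.
# Dataset entries use a simple prompt/completion format (an assumption,
# not the paper's actual data schema).

def make_stage1(paired_tasks):
    """Stage 1: instructions WITH execution examples.

    Teaches the model how declarative instructions map to behaviour.
    `paired_tasks` is a list of (instruction, [(input, output), ...]).
    """
    data = []
    for instruction, examples in paired_tasks:
        for inp, out in examples:
            data.append({"prompt": f"{instruction}\nInput: {inp}\nOutput:",
                         "completion": f" {out}"})
    return data

def make_stage2(new_instructions):
    """Stage 2: declarative instructions ONLY, no demonstrations.

    The model must internalise these as reusable behaviours.
    """
    return [{"prompt": inst, "completion": ""} for inst in new_instructions]

# Toy usage: one skill taught from demonstrations, one programmed declaratively.
stage1 = make_stage1([
    ("Reverse the input string.", [("abc", "cba"), ("hello", "olleh")]),
])
stage2 = make_stage2([
    "Uppercase the input string.",  # no examples: learned by backprop alone
])
```

At evaluation time, the interesting test is whether the model executes the stage-2 instruction correctly despite never having seen a worked example of it.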

Computer Science > Artificial Intelligence · arXiv:2506.18777 (cs)

[Submitted on 23 Jun 2025 (v1), last revised 24 Feb 2026 (this version, v2)]

Title: Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs

Authors: Jonathan Cook, Silvia Sapora, Arash Ahmadian, Akbir Khan, Tim Rocktaschel, Jakob Foerster, Laura Ruis

Abstract: Large language models (LLMs) are typically trained to acquire behaviours from demonstrations or experience, yet much of their training data is declarative: instructions, rules, and descriptions that specify behaviours without showing how to execute them. We introduce Programming by Backprop (PBB): a training regime that enables LLMs to acquire procedural knowledge (i.e., reusable behaviours) from declarative instructions encountered during training. With PBB, instructions in training data provide an opportunity to 'program' specific behaviours into model weights. The core principle underpinning PBB is the separation of learning how instructions map to behaviour from internalising new instructions. We devise two distinct PBB curricula that leverage this principle. Through controlled experiments across two domains (algorithmic execution from Python source code and text generation from context-free grammars), we demonstrate the benefit of these curricula ...
