[2602.15781] Neural Scaling Laws for Boosted Jet Tagging

arXiv - Machine Learning

Summary

The paper investigates neural scaling laws for boosted jet tagging in high energy physics, quantifying how compute, model capacity, and dataset size jointly drive performance toward an asymptotic limit.

Why It Matters

Understanding neural scaling laws is crucial for optimizing machine learning applications in high energy physics, where training compute remains orders of magnitude below that of industry foundation models. This research quantifies how increased compute improves performance and where that improvement saturates, which is vital for planning data analysis techniques in the field.

Key Takeaways

  • Scaling compute, through joint increases in model capacity and dataset size, is the primary driver of performance in machine learning models.
  • Data repetition, common in high energy physics where simulation is expensive, yields a quantifiable effective gain in dataset size.
  • More expressive input features can raise the asymptotic performance limit at fixed dataset size.
  • Compute-optimal scaling laws are derived for boosted jet classification on the public JetClass dataset.
  • The study characterizes the asymptotic performance limits of machine learning in high energy physics and shows they can be consistently approached with increased compute.

High Energy Physics - Experiment · arXiv:2602.15781 (hep-ex) · Submitted on 17 Feb 2026

Title: Neural Scaling Laws for Boosted Jet Tagging

Authors: Matthias Vigl, Nicole Hartman, Michael Kagan, Lukas Heinrich

Abstract: The success of Large Language Models (LLMs) has established that scaling compute, through joint increases in model capacity and dataset size, is the primary driver of performance in modern machine learning. While machine learning has long been an integral component of High Energy Physics (HEP) data analysis workflows, the compute used to train state-of-the-art HEP models remains orders of magnitude below that of industry foundation models. With scaling laws only beginning to be studied in the field, we investigate neural scaling laws for boosted jet classification using the public JetClass dataset. We derive compute optimal scaling laws and identify an effective performance limit that can be consistently approached through increased compute. We study how data repetition, common in HEP where simulation is expensive, modifies the scaling yielding a quantifiable effective dataset size gain. We then study how the scaling coefficients and asymptotic performance limits vary with the choice of input features and particle multiplicity, demonstrating that increased compute reliably drives performance toward an asymptotic limit, and ...
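The kind of analysis described above can be sketched as a curve fit: measure the loss at several compute budgets, fit a saturating power law, and read off the asymptotic performance limit. Below is a minimal illustration with synthetic data; the functional form, units, and parameter values are assumptions for illustration, not the paper's fitted results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed saturating power law: loss(C) = L_inf + a * C**(-b),
# where C is training compute and L_inf is the asymptotic limit.
# This functional form is an illustrative assumption, not the
# paper's exact parameterization.
def scaling_law(compute, l_inf, a, b):
    return l_inf + a * compute ** (-b)

# Synthetic "measurements" at increasing compute budgets
# (C in arbitrary units), generated from assumed true parameters.
compute = np.logspace(0, 5, 12)
true_l_inf, true_a, true_b = 0.05, 1.0, 0.25
rng = np.random.default_rng(0)
loss = scaling_law(compute, true_l_inf, true_a, true_b)
loss = loss * (1.0 + 0.01 * rng.standard_normal(compute.size))

# Fit the law and read off L_inf: the performance floor that
# additional compute approaches but never crosses.
params, _ = curve_fit(scaling_law, compute, loss, p0=(0.1, 1.0, 0.3))
l_inf_hat, a_hat, b_hat = params
print(f"fitted L_inf={l_inf_hat:.3f}, a={a_hat:.2f}, b={b_hat:.3f}")
```

In practice the fitted exponent `b` and the limit `L_inf` are what vary with input features and particle multiplicity in the paper's study; the fit itself is standard nonlinear least squares.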
