[2509.26335] TrackCore-F: Deploying Transformer-Based Subatomic Particle Tracking on FPGAs

arXiv - Machine Learning

Summary

The paper presents TrackCore-F, a methodology for deploying Transformer-based subatomic particle tracking models on FPGAs, and reports the main deployment challenges along with preliminary results.

Why It Matters

This research is significant because it joins an advanced ML architecture with specialised hardware: running Transformer-based particle tracking on FPGAs targets the online or pseudo-online latencies that high-energy physics experiments demand. The findings could improve data processing in experimental physics and inform future research and tooling.

Key Takeaways

  • Transformers are increasingly used for complex tasks in high-energy physics.
  • Deploying these models on FPGAs can achieve online or pseudo-online inference latencies.
  • Challenges include FPGA resource and model size constraints and the need for effective, ideally automated, partitioning strategies (a toy sketch follows this list).
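
The partitioning challenge from the last point can be made concrete with a small sketch. The code below is purely illustrative and is not the TrackCore-F tooling: it splits a toy PyTorch Transformer encoder into stages under an assumed per-device parameter budget, which stands in for a real FPGA resource estimate. All dimensions and the 120,000-parameter budget are hypothetical.

```python
# Illustrative layer-wise partitioning of a toy Transformer encoder into
# independently deployable stages. Hypothetical sizes; not TrackCore-F code.
import torch
import torch.nn as nn

d_model, n_heads, n_layers = 64, 4, 6  # assumed toy dimensions

layers = [
    nn.TransformerEncoderLayer(
        d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
    )
    for _ in range(n_layers)
]

def partition(layers, max_params_per_part):
    """Greedily group layers so each partition stays under a parameter
    budget (a crude stand-in for an FPGA resource estimate)."""
    parts, current, count = [], [], 0
    for layer in layers:
        n = sum(p.numel() for p in layer.parameters())
        if current and count + n > max_params_per_part:
            parts.append(nn.Sequential(*current))
            current, count = [], 0
        current.append(layer)
        count += n
    if current:
        parts.append(nn.Sequential(*current))
    return parts

parts = partition(layers, max_params_per_part=120_000)  # assumed budget
x = torch.randn(1, 16, d_model)  # (batch, hits per candidate, features)
with torch.no_grad():
    for stage in parts:  # stages would run as a pipeline across devices
        x = stage(x)
print(f"{len(parts)} partitions, output shape {tuple(x.shape)}")
```

A greedy split like this ignores inter-stage bandwidth and attention structure; the paper's point is precisely that a meaningful, automated partitioning strategy has to account for such constraints.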

High Energy Physics - Experiment, arXiv:2509.26335 (hep-ex)

[Submitted on 30 Sep 2025 (v1), last revised 16 Feb 2026 (this version, v2)]

Title: TrackCore-F: Deploying Transformer-Based Subatomic Particle Tracking on FPGAs
Authors: Arjan Blankestijn, Uraz Odyurt, Amirreza Yousefzadeh

Abstract: The Transformer Machine Learning (ML) architecture has gained considerable momentum in recent years. In particular, computational High-Energy Physics tasks such as jet tagging and particle track reconstruction (tracking) have either achieved proper solutions or reached considerable milestones using Transformers. At the same time, specialised hardware accelerators, especially FPGAs, are an effective means of achieving online or pseudo-online latencies. Development and integration of Transformer-based ML on FPGAs is still ongoing, and support from current tools is very limited or non-existent. FPGA resources also present a significant constraint: considering model size alone, smaller models can be deployed directly, while larger models must be partitioned in a meaningful and, ideally, automated way. We aim to develop methodologies and tools for monolithic or partitioned Transformer synthesis, specifically targeting inference. Our primary use-case involves two machine learning...
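
To see why "model size alone" already forces partitioning, a back-of-the-envelope check against on-chip memory is useful. The numbers below are assumptions for illustration (a mid-range device with roughly 38 Mbit of block RAM and 8-bit quantized weights), not figures reported in the paper.

```python
# Back-of-the-envelope check: do a model's quantized weights fit in on-chip
# BRAM? All device and model numbers are illustrative assumptions.
def fits_on_chip(n_params: int, bits_per_weight: int, bram_mbit: float):
    weight_mbit = n_params * bits_per_weight / 1e6
    return weight_mbit, weight_mbit <= bram_mbit

BRAM_MBIT = 38.0  # assumed budget, on the order of a mid-range FPGA

for n_params in (500_000, 5_000_000, 50_000_000):
    mbit, ok = fits_on_chip(n_params, bits_per_weight=8, bram_mbit=BRAM_MBIT)
    verdict = "monolithic deployment" if ok else "needs partitioning or off-chip weights"
    print(f"{n_params:>10,} params @ 8 bit -> {mbit:7.1f} Mbit: {verdict}")
```

Even at 8-bit precision, a model in the tens of millions of parameters overshoots typical on-chip capacity by an order of magnitude, which is why the paper treats automated partitioning as a first-class concern rather than an optimization.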
