[2602.19330] CTS-Bench: Benchmarking Graph Coarsening Trade-offs for GNNs in Clock Tree Synthesis

Summary

The paper introduces CTS-Bench, a benchmark suite for evaluating graph coarsening trade-offs in Graph Neural Networks (GNNs) for Clock Tree Synthesis (CTS), highlighting accuracy-efficiency challenges.

Why It Matters

As GNNs gain traction in Electronic Design Automation, understanding the effects of graph coarsening on performance is crucial for optimizing clock distribution models. CTS-Bench offers a systematic approach to evaluate these trade-offs, which is significant for advancing GNN applications in physical design.

Key Takeaways

  • CTS-Bench provides a comprehensive benchmark for assessing GNN performance in CTS tasks.
  • Graph coarsening can significantly reduce memory usage and training time but may compromise prediction accuracy.
  • The study reveals a critical trade-off between computational efficiency and structural information retention.
  • Generic graph clustering techniques can degrade CTS learning objectives even when global graph statistics appear unchanged.
  • CTS-Bench facilitates the development of optimized GNN architectures under realistic design constraints.

Computer Science > Machine Learning · arXiv:2602.19330 (cs) · Submitted on 22 Feb 2026

Title: CTS-Bench: Benchmarking Graph Coarsening Trade-offs for GNNs in Clock Tree Synthesis
Authors: Barsat Khadka, Kawsher Roxy, Md Rubel Ahmed

Abstract: Graph Neural Networks (GNNs) are increasingly explored for physical design analysis in Electronic Design Automation, particularly for modeling Clock Tree Synthesis (CTS) behavior such as clock skew and buffering complexity. However, practical deployment remains limited by the prohibitive memory and runtime cost of operating on raw gate-level netlists. Graph coarsening is commonly used to improve scalability, yet its impact on CTS-critical learning objectives is not well characterized. This paper introduces CTS-Bench, a benchmark suite for systematically evaluating the trade-offs between graph coarsening, prediction accuracy, and computational efficiency in GNN-based CTS analysis. CTS-Bench consists of 4,860 converged physical design solutions spanning five architectures and provides paired raw gate-level and clustered graph representations derived from post-placement designs. Using clock skew prediction as a representative CTS task, we demonstrate a clear accuracy-efficiency trade-off. While graph coarsening reduces GPU memory usage by up to 17.2x and accelerates...
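The coarsening step at the heart of this trade-off — merging clusters of gates into super-nodes so a GNN processes a much smaller graph — can be sketched as follows. This is a minimal illustration with hypothetical names, not the paper's actual pipeline; the intra-cluster edges that get dropped here are exactly the local structure whose loss can hurt CTS-critical predictions.

```python
def coarsen(edges, cluster_of):
    """Collapse each cluster of nodes into a single super-node.

    edges: iterable of (u, v) pairs in the raw gate-level graph.
    cluster_of: dict mapping each node to its cluster id.
    Returns the coarsened edge set (intra-cluster edges dropped)
    and the number of raw nodes absorbed into each super-node.
    """
    coarse_edges = set()
    for u, v in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu != cv:  # edges inside a cluster become self-loops; drop them
            coarse_edges.add((min(cu, cv), max(cu, cv)))
    sizes = {}
    for node, c in cluster_of.items():
        sizes[c] = sizes.get(c, 0) + 1
    return coarse_edges, sizes

# Tiny chain of 6 gates merged into 3 clusters of 2 gates each.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
cluster_of = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
coarse_edges, sizes = coarsen(edges, cluster_of)
# coarse_edges == {(0, 1), (1, 2)}; sizes == {0: 2, 1: 2, 2: 2}
```

Note that the coarsened graph has 3 nodes and 2 edges instead of 6 and 5, but the three intra-cluster edges are gone — a toy version of the accuracy-efficiency trade-off the benchmark measures.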
