[2602.15239] Size Transferability of Graph Transformers with Convolutional Positional Encodings



Summary

This paper explores the size transferability of Graph Transformers (GTs) with convolutional positional encodings, demonstrating their ability to generalize from small to larger graphs while maintaining efficiency in real-world applications.

Why It Matters

Understanding the transferability of Graph Transformers is crucial for advancing machine learning applications in graph-structured data. This research provides theoretical insights and practical implications for training GTs effectively, which can enhance performance in various domains, including robotics and data science.

Key Takeaways

  • Graph Transformers can generalize from small to larger graphs under specific conditions.
  • The study establishes a theoretical link between GTs and Manifold Neural Networks.
  • Extensive experiments validate the scalable behavior of GTs compared to traditional GNNs.
  • GTs show practical efficiency in real-world scenarios, such as shortest path estimations.
  • This research opens new avenues for efficient training of GTs in large-scale applications.

Computer Science > Machine Learning

arXiv:2602.15239 (cs) [Submitted on 16 Feb 2026]

Title: Size Transferability of Graph Transformers with Convolutional Positional Encodings

Authors: Javier Porras-Valenzuela, Zhiyang Wang, Alejandro Ribeiro

Abstract: Transformers have achieved remarkable success across domains, motivating the rise of Graph Transformers (GTs) as attention-based architectures for graph-structured data. A key design choice in GTs is the use of Graph Neural Network (GNN)-based positional encodings to incorporate structural information. In this work, we study GTs through the lens of manifold limit models for graph sequences and establish a theoretical connection between GTs with GNN positional encodings and Manifold Neural Networks (MNNs). Building on transferability results for GNNs under manifold convergence, we show that GTs inherit transferability guarantees from their positional encodings. In particular, GTs trained on small graphs provably generalize to larger graphs under mild assumptions. We complement our theory with extensive experiments on standard graph benchmarks, demonstrating that GTs exhibit scalable behavior on par with GNNs. To further show the efficiency in a real-world scenario, we implement GTs for shortest path distance estimation over terrains to bette...
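To make the architecture in the abstract concrete: a minimal, illustrative sketch of a GT layer with a convolutional (GNN-style) positional encoding. This is not the authors' implementation; the polynomial graph filter, identity attention projections, and all function names here are simplifying assumptions chosen to show why the construction is size-agnostic (every operation is defined for an n-node graph with arbitrary n).

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetric normalization S = D^{-1/2} A D^{-1/2} (zero-degree nodes handled safely).
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def convolutional_pe(A, X, weights):
    # Polynomial graph convolution sum_k w_k S^k X: a simple GNN-style
    # positional encoding (weights here are fixed, not learned).
    S = normalized_adjacency(A)
    pe = np.zeros_like(X)
    Z = X.copy()
    for w in weights:
        pe += w * Z
        Z = S @ Z
    return pe

def softmax(M):
    # Row-wise softmax with max-subtraction for numerical stability.
    M = M - M.max(axis=-1, keepdims=True)
    E = np.exp(M)
    return E / E.sum(axis=-1, keepdims=True)

def attention_layer(H):
    # Single-head self-attention with identity Q/K/V projections (illustrative).
    d = H.shape[1]
    scores = softmax(H @ H.T / np.sqrt(d))
    return scores @ H

def graph_transformer_layer(A, X, pe_weights=(1.0, 0.5, 0.25)):
    # Inject structural information via the convolutional PE, then attend.
    H = X + convolutional_pe(A, X, pe_weights)
    return attention_layer(H)
```

Because neither the graph filter nor the attention depends on a fixed node count, the same parameters apply unchanged to a 10-node or a 10,000-node graph, which is the setting in which the paper's transferability guarantees are stated.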

