[2512.12182] TA-KAND: Two-stage Attention Triple Enhancement and U-KAN based Diffusion For Few-shot Knowledge Graph Completion

arXiv - Machine Learning · 3 min read

Summary

The paper presents TA-KAND, a framework for few-shot knowledge graph completion that combines a two-stage attention triple-enhancement mechanism with a U-KAN based diffusion model to address the heterogeneity of real-world knowledge.

Why It Matters

Knowledge graphs are essential for applications like intelligent question answering and recommendation systems. This research tackles a limitation of existing methods by modeling the distributional characteristics of positive and negative triple samples, an aspect that is crucial for improving the accuracy and efficiency of knowledge graph completion.

Key Takeaways

  • Introduces TA-KAND, a framework for enhancing knowledge graph completion.
  • Utilizes a two-stage attention mechanism to improve sample distribution handling.
  • Demonstrates significant performance improvements on two public benchmark datasets.
  • Addresses the challenges posed by heterogeneous knowledge in real-world applications.
  • Highlights the importance of few-shot learning in knowledge graph contexts.

Computer Science > Artificial Intelligence

arXiv:2512.12182 (cs) [Submitted on 13 Dec 2025 (v1), last revised 13 Feb 2026 (this version, v2)]

Title: TA-KAND: Two-stage Attention Triple Enhancement and U-KAN based Diffusion For Few-shot Knowledge Graph Completion

Authors: Xinyu Gao

Abstract: Knowledge graphs have become fundamental infrastructure for applications such as intelligent question answering and recommender systems due to their expressive representation. Nevertheless, real-world knowledge is heterogeneous, leading to a pronounced long-tailed distribution over relations. Previous studies are mainly based on metric matching or meta-learning; however, they often overlook the distributional characteristics of positive and negative triple samples. In this paper, we propose a few-shot knowledge graph completion framework that integrates a two-stage attention triple enhancer with a U-KAN based diffusion model. Extensive experiments on two public datasets show significant advantages of our method.

Subjects: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

Cite as: arXiv:2512.12182 [cs.AI] (arXiv:2512.12182v2 for this version), https://doi.org/10.48550/arXiv.2512.12182
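The abstract leaves the mechanism at a high level. As a rough illustration of the few-shot setting the paper targets (not the authors' actual model), the sketch below builds an attention-weighted relation prototype from a 3-shot support set and scores query triples against it. The TransE-style translation, the attention scheme, and all variable names here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (hypothetical)

# Hypothetical 3-shot support set for a rare relation:
# (head, tail) entity-embedding pairs.
support_h = rng.normal(size=(3, d))
support_t = rng.normal(size=(3, d))

# Represent each support pair by its tail-minus-head translation
# (TransE-style assumption, not the paper's encoder).
support_rel = support_t - support_h          # shape (3, d)

# Attention over support pairs: weight each pair by similarity to the
# mean, so outlier support triples contribute less to the prototype.
center = support_rel.mean(axis=0)
logits = support_rel @ center
weights = np.exp(logits - logits.max())
weights /= weights.sum()                     # softmax weights
prototype = weights @ support_rel            # relation prototype, shape (d,)

def score(h_q, t_q):
    """Score a query triple: higher = better match to the relation."""
    return -np.linalg.norm((t_q - h_q) - prototype)

# A query tail built from the prototype outscores a random tail.
h_q = rng.normal(size=d)
good_t = h_q + prototype
bad_t = rng.normal(size=d)
```

In this toy setup the scoring function is a simple negative distance; the paper's contribution, per the abstract, is enhancing the support triples (two-stage attention) and generating better-distributed samples (U-KAN diffusion) before any such scoring is done.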

Related Articles

[2512.23994] PhyAVBench: A Challenging Audio Physics-Sensitivity Benchmark for Physically Grounded Text-to-Audio-Video Generation
Machine Learning · arXiv - AI · 4 min

[2512.10785] Developing and Evaluating a Large Language Model-Based Automated Feedback System Grounded in Evidence-Centered Design for Supporting Physics Problem Solving
LLMs · arXiv - AI · 4 min

[2510.13870] Unlocking the Potential of Diffusion Language Models through Template Infilling
LLMs · arXiv - AI · 3 min

[2507.22418] Aleatoric Uncertainty Medical Image Segmentation Estimation via Flow Matching
Machine Learning · arXiv - AI · 4 min