[2602.20877] E-MMKGR: A Unified Multimodal Knowledge Graph Framework for E-commerce Applications

arXiv - AI · 3 min read

Summary

The paper presents E-MMKGR, a unified framework for multimodal knowledge graphs tailored for e-commerce, enhancing recommendation systems and product search through improved item representation.

Why It Matters

As e-commerce continues to grow, effective recommendation systems are crucial for user engagement and sales. E-MMKGR addresses limitations in existing multimodal systems, providing a more flexible and effective approach that can adapt to various tasks, thus enhancing user experience and operational efficiency in e-commerce applications.

Key Takeaways

  • E-MMKGR improves collaborative filtering by utilizing a multimodal knowledge graph.
  • The framework enhances item representation through GNN-based propagation (see the sketch after this list).
  • Experiments show significant performance improvements in recommendation and product search.
  • The approach allows for greater extensibility and generalization across tasks.
  • Real-world applications demonstrate its effectiveness in e-commerce settings.
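
The paper itself includes no code here, but the idea of GNN-based propagation over a multimodal knowledge graph can be illustrated with a minimal toy sketch. Everything below is a hypothetical assumption for illustration (the node names, edge types like has_image, and the simple averaging update); it is not the authors' implementation, and the toy update ignores edge types, which a real model would not.

```python
import numpy as np

# Hypothetical multimodal KG: (head, relation, tail) triples linking
# items to modality nodes. These identifiers are illustrative only.
triples = [
    ("item:1", "has_image",  "img:1"),
    ("item:1", "has_text",   "txt:1"),
    ("item:2", "has_image",  "img:2"),
    ("item:1", "also_bought", "item:2"),
]

rng = np.random.default_rng(0)
dim = 8
nodes = {n for h, _, t in triples for n in (h, t)}
emb = {n: rng.normal(size=dim) for n in nodes}  # initial node embeddings

def propagate(emb, triples):
    """One toy GNN layer: each node averages its neighbors' embeddings
    (in both edge directions, ignoring relation type for simplicity)
    and mixes the result with its own state."""
    agg = {n: [] for n in emb}
    for h, _, t in triples:
        agg[h].append(emb[t])
        agg[t].append(emb[h])
    out = {}
    for n, neigh in agg.items():
        m = np.mean(neigh, axis=0) if neigh else np.zeros(dim)
        out[n] = 0.5 * emb[n] + 0.5 * m  # simple residual-style mix
    return out

# Two rounds of propagation blend image, text, and co-purchase signals
# into each item's representation.
for _ in range(2):
    emb = propagate(emb, triples)
print(emb["item:1"][:4])
```

After a couple of layers, an item's embedding carries signal from its attached image and text nodes as well as from co-purchased items, which is the intuition behind a "unified" representation usable across tasks.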

Computer Science > Information Retrieval
arXiv:2602.20877 (cs) [Submitted on 24 Feb 2026]

Title: E-MMKGR: A Unified Multimodal Knowledge Graph Framework for E-commerce Applications
Authors: Jiwoo Kang, Yeon-Chang Lee

Abstract: Multimodal recommender systems (MMRSs) enhance collaborative filtering by leveraging item-side modalities, but their reliance on a fixed set of modalities and task-specific objectives limits both modality extensibility and task generalization. We propose E-MMKGR, a framework that constructs an e-commerce-specific Multimodal Knowledge Graph (E-MMKG) and learns unified item representations through GNN-based propagation and KG-oriented optimization. These representations provide a shared semantic foundation applicable to diverse tasks. Experiments on real-world Amazon datasets show improvements of up to 10.18% in Recall@10 for recommendation and up to 21.72% over vector-based retrieval for product search, demonstrating the effectiveness and extensibility of our approach.

Subjects: Information Retrieval (cs.IR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.20877 [cs.IR] (or arXiv:2602.20877v1 [cs.IR] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.20877
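
The abstract names KG-oriented optimization and reports Recall@10 gains without detailing either. The sketch below is a rough illustration only: it assumes a TransE-style margin loss as the KG objective (a common choice, not confirmed for E-MMKGR) and cosine-similarity top-k retrieval for the product-search metric; all function names and data are synthetic.

```python
import numpy as np

def transe_margin_loss(h, r, t, t_neg, margin=1.0):
    """TransE-style objective: score(h, r, t) = ||h + r - t||.
    Pushes true triples to score below corrupted ones by `margin`.
    Whether E-MMKGR uses this exact score is an assumption."""
    pos = np.linalg.norm(h + r - t)
    neg = np.linalg.norm(h + r - t_neg)
    return max(0.0, margin + pos - neg)

def recall_at_k(scores, relevant, k=10):
    """scores: similarity of each candidate item to the query.
    relevant: set of ground-truth item indices for the query."""
    topk = np.argsort(-scores)[:k]
    return len(set(topk.tolist()) & relevant) / len(relevant)

rng = np.random.default_rng(1)

# Toy KG loss on random embeddings, just to exercise the function.
h, r, t, t_neg = (rng.normal(size=16) for _ in range(4))
print(transe_margin_loss(h, r, t, t_neg))

# Toy product search: rank candidates by cosine similarity to a query
# embedding, then measure Recall@10. All data here is synthetic.
items = rng.normal(size=(100, 16))
items /= np.linalg.norm(items, axis=1, keepdims=True)
query = items[3] + 0.1 * rng.normal(size=16)  # query near item 3
query /= np.linalg.norm(query)
print(recall_at_k(items @ query, relevant={3}, k=10))
```

Under this reading, the headline numbers mean the learned embeddings rank relevant items inside the top 10 more often than the baselines do, for both recommendation and search.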

Related Articles

LLMs

[R] Is autoresearch really better than classic hyperparameter tuning?

We did experiments comparing Optuna & autoresearch. Autoresearch converges faster, is more cost-efficient, and even generalizes bette...

Reddit - Machine Learning · 1 min
NLP

Automate iOS devices through XCUITest with Droidrun.

Automate iOS apps with XCUITest and Droidrun using just natural language. You send the command to Droidrun, and the agent starts the task...

Reddit - Artificial Intelligence · 1 min
Machine Learning

[P] Trained a small BERT on 276K Kubernetes YAMLs using tree positional encoding instead of sequential

I trained a BERT-style transformer on 276K Kubernetes YAML files, replacing standard positional encoding with learned tree coordinates (d...

Reddit - Machine Learning · 1 min
Machine Learning

I'm building a multi-model graph database in pure Rust with Cypher, SQL, Gremlin, and native GNN, aiming for extreme speed and performance

Hi guys, I'm a PhD student in Applied AI and I've been building an embeddable graph database engine from scratch in Rust. I'd love feedba...

Reddit - Artificial Intelligence · 1 min
