[2602.22368] EyeLayer: Integrating Human Attention Patterns into LLM-Based Code Summarization

arXiv - AI · 4 min read

Summary

The paper presents EyeLayer, a novel module that integrates human attention patterns into LLM-based code summarization, enhancing model performance by leveraging eye-tracking data.

Why It Matters

This research addresses the challenge of improving code summarization by incorporating human cognitive patterns, which could lead to more effective software comprehension tools. By bridging human expertise with AI, it opens avenues for better collaboration between developers and AI systems, ultimately enhancing software maintenance and understanding.

Key Takeaways

  • EyeLayer uses human eye-gaze patterns to improve LLM-based code summarization.
  • The module redistributes token embeddings based on learned attention parameters.
  • EyeLayer outperforms traditional fine-tuning methods, achieving gains of up to 13.17% on the BLEU-4 metric.
  • The approach demonstrates the potential of integrating human cognitive data into AI models.
  • Results suggest that human gaze patterns provide valuable signals for enhancing LLM performance.

Computer Science > Software Engineering
arXiv:2602.22368 (cs) [Submitted on 25 Feb 2026]

Title: EyeLayer: Integrating Human Attention Patterns into LLM-Based Code Summarization
Authors: Jiahao Zhang, Yifan Zhang, Kevin Leach, Yu Huang

Abstract: Code summarization is the task of generating natural language descriptions of source code, which is critical for software comprehension and maintenance. While large language models (LLMs) have achieved remarkable progress on this task, an open question remains: can human expertise in code understanding further guide and enhance these models? We propose EyeLayer, a lightweight attention-augmentation module that incorporates human eye-gaze patterns, as a proxy of human expertise, into LLM-based code summarization. EyeLayer models human attention during code reading via a Multimodal Gaussian Mixture, redistributing token embeddings based on learned parameters $(\mu_i, \sigma_i^2)$ that capture where and how intensively developers focus. This design enables learning generalizable attention priors from eye-tracking data and incorporating them into LLMs seamlessly, without disturbing existing representations. We evaluate EyeLayer across diverse model families (i.e., LLaMA-3.2, Qwen3, and CodeBERT) covering different scales and architectures. EyeLayer consistently outpe...
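To make the abstract's mechanism concrete, here is a minimal sketch of the core idea: a Gaussian mixture over token positions, parameterized by $(\mu_i, \sigma_i^2)$ and mixture weights, produces per-token attention weights that rescale the token embeddings. This is an illustrative reconstruction, not the paper's implementation; the function name, the NumPy formulation, and the hard-coded example parameters are all assumptions (in EyeLayer the mixture parameters are learned from eye-tracking data).

```python
import numpy as np

def gaussian_mixture_attention(embeddings, mus, sigmas, weights):
    """Reweight token embeddings with a Gaussian mixture over positions.

    embeddings: (T, D) array of token embeddings.
    mus, sigmas, weights: (K,) mixture parameters. Hypothetical stand-ins
    for the learned gaze-attention priors described in the abstract.
    """
    T = embeddings.shape[0]
    pos = np.arange(T)[:, None]  # (T, 1) token positions
    # Density of each mixture component at each position: (T, K)
    dens = np.exp(-0.5 * ((pos - mus) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    # Combine components and normalize into an attention distribution: (T,)
    attn = (dens * weights).sum(axis=1)
    attn = attn / attn.sum()
    # Scale each token's embedding by its gaze-derived attention weight
    return embeddings * attn[:, None]

# Toy usage: 8 tokens with 4-dim embeddings, a 2-component mixture
emb = np.random.default_rng(0).standard_normal((8, 4))
out = gaussian_mixture_attention(
    emb,
    mus=np.array([2.0, 6.0]),
    sigmas=np.array([1.0, 1.5]),
    weights=np.array([0.6, 0.4]),
)
```

The sketch multiplies embeddings by scalar weights; the actual module is described as redistributing embeddings "without disturbing existing representations", which likely involves a residual or gating design not shown here.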

Related Articles

  • [2603.18532] Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds (arXiv - Machine Learning, 4 min)
  • [2603.12702] FGTR: Fine-Grained Multi-Table Retrieval via Hierarchical LLM Reasoning (arXiv - Machine Learning, 4 min)
  • [2603.12681] Colluding LoRA: A Compositional Vulnerability in LLM Safety Alignment (arXiv - Machine Learning, 3 min)
  • [2602.06098] A Theoretical Analysis of Test-Driven LLM Code Generation (arXiv - Machine Learning, 3 min)
