[2602.15362] Automated Multi-Source Debugging and Natural Language Error Explanation for Dashboard Applications

arXiv - AI · 3 min read

Summary

This paper presents a novel system for Automated Multi-Source Debugging and Natural Language Error Explanation, aimed at improving user experience and reducing resolution time in complex web dashboard applications.

Why It Matters

As modern applications increasingly rely on microservices, debugging becomes more challenging. This research addresses the gap in existing tools that fail to provide clear error explanations, enhancing both developer efficiency and user satisfaction.

Key Takeaways

  • Introduces a system that correlates error data from multiple sources.
  • Utilizes Large Language Models to generate user-friendly error explanations.
  • Aims to reduce Mean Time to Resolution for support engineers.
  • Enhances user experience by translating cryptic error codes into actionable insights.
  • Addresses the limitations of current monitoring tools in debugging complex applications.
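The correlation step described above can be sketched as grouping error records from each source by a shared request identifier. This is a minimal illustration, not the paper's implementation; the record schema (a `request_id` and `message` field per event) is an assumption made for the example.

```python
from collections import defaultdict

def correlate_errors(browser_events, api_events, server_logs):
    """Group error records from three sources into per-request incidents.

    Each record is assumed to be a dict with "request_id" and "message"
    keys; these field names are illustrative, not the paper's schema.
    """
    timeline = defaultdict(list)
    for source, events in (("browser", browser_events),
                           ("api", api_events),
                           ("server", server_logs)):
        for event in events:
            timeline[event["request_id"]].append((source, event["message"]))
    return dict(timeline)

# A request that failed at every layer is grouped into one incident:
incident = correlate_errors(
    [{"request_id": "r1", "message": "Uncaught TypeError in chart widget"}],
    [{"request_id": "r1", "message": "HTTP 500 from /api/metrics"}],
    [{"request_id": "r1", "message": "NullPointerException in MetricsService"}],
)
```

Grouping by a request-scoped key is what lets a single "Something went wrong" in the browser be traced back to the API and server events that caused it.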

Computer Science > Software Engineering · arXiv:2602.15362 (cs) · Submitted on 17 Feb 2026

Title: Automated Multi-Source Debugging and Natural Language Error Explanation for Dashboard Applications
Authors: Devendra Tata, Mona Rajhans

Abstract: Modern web dashboards and enterprise applications increasingly rely on complex, distributed microservices architectures. While these architectures offer scalability, they introduce significant challenges in debugging and observability. When failures occur, they often surface to the end user as opaque error messages such as "Something went wrong", masking an underlying root cause that may reside in browser-side exceptions, API contract violations, or server-side logic failures. Existing monitoring tools capture these events in isolation but fail to correlate them effectively or to provide intelligible explanations to non-technical users. This paper proposes a novel system for Automated Multi-Source Debugging and Natural Language Error Explanation. The proposed framework automatically collects and correlates error data from disparate sources (browser, API, and server logs), validates API contracts in real time, and utilizes Large Language Models to generate natural language explanations. This approach significantly reduces Mean Time to Resolution for support en...
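The abstract's final step, turning a correlated incident into a natural language explanation, amounts to assembling the cross-source evidence into a single LLM prompt. The sketch below shows only that prompt-assembly step; the paper does not publish its exact prompt, so the wording and the `(source, message)` event format here are assumptions.

```python
def build_explanation_prompt(incident_events):
    """Assemble a prompt asking an LLM to explain a correlated incident.

    incident_events is a list of (source, message) pairs gathered from
    browser, API, and server logs; the format is illustrative.
    """
    lines = [f"[{source}] {message}" for source, message in incident_events]
    return (
        "The following error events were correlated for one user request.\n"
        "Explain the likely root cause in one short paragraph that a\n"
        "non-technical support engineer can act on:\n\n" + "\n".join(lines)
    )

# Example: feed a correlated incident into the prompt builder.
prompt = build_explanation_prompt([
    ("api", "HTTP 500 from /api/metrics"),
    ("server", "NullPointerException in MetricsService"),
])
```

Presenting all layers of the failure in one prompt is what lets the model connect a cryptic front-end error code to its actual server-side cause.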
