[2505.19764] Multi-View Encoders for Performance Prediction in LLM-Based Agentic Workflows
Computer Science > Machine Learning
arXiv:2505.19764 (cs)
[Submitted on 26 May 2025 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Multi-View Encoders for Performance Prediction in LLM-Based Agentic Workflows
Authors: Patara Trirat, Wonyong Jeong, Sung Ju Hwang

Abstract: Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks, but optimizing LLM-based agentic systems remains challenging due to the vast search space of agent configurations, prompting strategies, and communication patterns. Existing approaches often rely on heuristic-based tuning or exhaustive evaluation, which can be computationally expensive and suboptimal. This paper proposes Agentic Predictor, a lightweight predictor for efficient agentic workflow evaluation. Agentic Predictor is equipped with a multi-view workflow encoding technique that applies multi-view representation learning to agentic systems, incorporating code architecture, textual prompts, and interaction graph features. To achieve high predictive accuracy while significantly reducing the number of workflow evaluations required to train the predictor, Agentic Predictor employs cross-domain unsupervised pretraining. By learning to approximate task success rates, Agentic Predictor enables fast and accurate selection of optim...
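The multi-view encoding idea in the abstract can be sketched in miniature: embed each view of a workflow (code, prompt text, interaction graph) separately, fuse the embeddings, and score them with a head that approximates task success rate. Everything below is a hypothetical illustration, not the paper's architecture; the toy encoders and the fixed, untrained scoring weights are assumptions standing in for learned models.

```python
import math

DIM = 8  # embedding size per view (assumed for illustration)

def embed_code(source: str) -> list[float]:
    # Toy bag-of-characters embedding standing in for a learned code encoder.
    vec = [0.0] * DIM
    for ch in source:
        vec[ord(ch) % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def embed_prompt(prompt: str) -> list[float]:
    # Placeholder text encoder; a real system would use a language model.
    return embed_code(prompt.lower())

def embed_graph(edges: list[tuple[int, int]], n_agents: int) -> list[float]:
    # Degree histogram as a crude stand-in for a graph neural network
    # over the agent interaction graph.
    deg = [0] * n_agents
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    vec = [0.0] * DIM
    for d in deg:
        vec[d % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

# Untrained scoring head; in practice these weights would be learned
# from evaluated workflows (and pretrained across domains).
WEIGHTS = [0.1] * (3 * DIM)

def predict_success(code: str, prompt: str,
                    edges: list[tuple[int, int]], n_agents: int) -> float:
    # Fuse the three views by concatenation, then map the score to (0, 1)
    # with a sigmoid so it can be read as an estimated success rate.
    fused = embed_code(code) + embed_prompt(prompt) + embed_graph(edges, n_agents)
    score = sum(w * x for w, x in zip(WEIGHTS, fused))
    return 1.0 / (1.0 + math.exp(-score))
```

A cheap predictor like this is only useful once trained, but the structure shows why multi-view fusion helps: two workflows with identical code can still differ in prompts or communication topology, and each view contributes a separate signal.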