[2604.04131] Profile-Then-Reason: Bounded Semantic Complexity for Tool-Augmented Language Agents


arXiv - AI


Computer Science > Artificial Intelligence · arXiv:2604.04131 (cs) · [Submitted on 5 Apr 2026]

Title: Profile-Then-Reason: Bounded Semantic Complexity for Tool-Augmented Language Agents
Authors: Paulo Akira F. Enabe

Abstract: Large language model agents that use external tools are often implemented through reactive execution, in which reasoning is repeatedly recomputed after each observation, increasing latency and sensitivity to error propagation. This work introduces Profile-Then-Reason (PTR), a bounded execution framework for structured tool-augmented reasoning, in which a language model first synthesizes an explicit workflow, deterministic or guarded operators execute that workflow, a verifier evaluates the resulting trace, and repair is invoked only when the original workflow is no longer reliable. A mathematical formulation is developed in which the full pipeline is expressed as a composition of profile, routing, execution, verification, repair, and reasoning operators; under bounded repair, the number of language-model calls is restricted to two in the nominal case and three in the worst case. Experiments against a ReAct baseline on six benchmarks and four language models show that PTR achieves the pairwise exact-match advantage in 16 of 24 configurations. The results indicate that PTR is particularly eff...
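The control flow the abstract describes can be sketched as a short program. This is a minimal illustration, not the paper's implementation: every function name (`ptr_pipeline`, `llm_call`, `execute`, `verify`) and the prompt strings are hypothetical, and only the structure — one planning call, deterministic execution, a single repair attempt, one reasoning call — is taken from the abstract.

```python
# Hypothetical sketch of the Profile-Then-Reason (PTR) control flow as
# summarized in the abstract: an LM call synthesizes an explicit workflow,
# deterministic operators execute it, a verifier checks the trace, and
# repair runs at most once, so total LM calls are bounded at two in the
# nominal case and three in the worst case.

def ptr_pipeline(task, llm_call, execute, verify):
    lm_calls = 0

    # LM call 1: profile the task and synthesize an explicit workflow.
    workflow = llm_call(f"Plan a tool workflow for: {task}")
    lm_calls += 1

    # Deterministic / guarded operators execute the workflow (no LM calls).
    trace = execute(workflow)

    # Verifier evaluates the trace; repair is invoked only on failure.
    if not verify(trace):
        workflow = llm_call(f"Repair the workflow given this failed trace: {trace}")
        lm_calls += 1
        trace = execute(workflow)

    # Final LM call: reason over the executed trace to produce the answer.
    answer = llm_call(f"Answer the task from this trace: {trace}")
    lm_calls += 1

    assert lm_calls <= 3  # bounded-repair guarantee stated in the abstract
    return answer, lm_calls
```

Contrast this with a ReAct-style loop, where the LM is re-invoked after every observation: here the call count is fixed by the pipeline shape rather than by the length of the tool interaction.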

Originally published on April 07, 2026. Curated by AI News.

Related Articles

I tested the same prompt across multiple AI models… the differences surprised me
I’ve been experimenting with different AI models lately (ChatGPT, Claude, etc.), and I tried something simple: Using the exact same promp...
Reddit - Artificial Intelligence · 1 min

Anthropic gave Claude $100 to go shopping, here’s what the AI ended up buying
Anthropic’s AI experiment showed Claude independently handled 186 deals worth over $4,000, but results varied by model capability, with u...
AI Tools & Products · 5 min

CoreWeave (CRWV) Partners with Anthropic to Provide Infrastructure for Claude AI Models
CoreWeave Inc. (NASDAQ:CRWV) is one of the best technology stocks to buy for the next decade. On April 20, CoreWeave announced a multi-ye...
AI Tools & Products · 2 min

[2604.01650] AromaGen: Interactive Generation of Rich Olfactory Experiences with Multimodal Language Models
Abstract page for arXiv paper 2604.01650: AromaGen: Interactive Generation of Rich Olfactory Experiences with Multimodal Language Models
arXiv - AI · 4 min