[2603.00846] Tiny-Critic RAG: Empowering Agentic Fallback with Parameter-Efficient Small Language Models
Computer Science > Information Retrieval
arXiv:2603.00846 (cs)
[Submitted on 1 Mar 2026]

Title: Tiny-Critic RAG: Empowering Agentic Fallback with Parameter-Efficient Small Language Models
Authors: Yichao Wu, Penghao Liang, Yafei Xiang, Mengwei Yuan, Jianan Liu, Jing Yang, Xianyou Li, Weiran Yan

Abstract: Retrieval-Augmented Generation (RAG) grounds Large Language Models (LLMs) to mitigate factual hallucinations. Recent paradigms shift from static pipelines to Modular and Agentic RAG frameworks, granting models autonomy for multi-hop reasoning or self-correction. However, current reflective RAG relies heavily on massive LLMs as universal evaluators. In high-throughput systems, executing complete forward passes through billion-parameter models merely for binary routing introduces severe computational redundancy. Furthermore, in autonomous agent scenarios, inaccurate retrieval causes models to expend excessive tokens on spurious reasoning and redundant tool calls, inflating Time-to-First-Token (TTFT) and costs. We propose Tiny-Critic RAG, decoupling evaluation by deploying a parameter-efficient Small Language Model (SLM) via Low-Rank Adaptation (LoRA). Acting as a deterministic gatekeeper, Tiny-Critic employs constrained decoding and non-thinking inference modes for ultra-low latency binary routing. ...
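The core routing idea the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's implementation): constrained decoding masks the critic's output vocabulary down to two verdict tokens, so the binary routing decision reduces to comparing two logits from a single forward pass. The token ids, route names, and the `constrained_route` helper are all invented for illustration.

```python
import math

# Hypothetical vocabulary ids for the two allowed verdict tokens
# (e.g. "RELEVANT" / "IRRELEVANT" in a real tokenizer).
YES_ID, NO_ID = 7, 13

def constrained_route(logits):
    """Binary routing via constrained decoding.

    Every vocabulary entry except the two verdict tokens is effectively
    masked to -inf, so only logits[YES_ID] and logits[NO_ID] compete.
    A two-way softmax over the surviving logits yields a routing
    confidence; the argmax selects the route deterministically.
    """
    yes, no = logits[YES_ID], logits[NO_ID]
    p_yes = 1.0 / (1.0 + math.exp(no - yes))  # softmax over two logits
    route = "accept_context" if p_yes >= 0.5 else "fallback"
    return route, p_yes

# Toy logits standing in for one forward pass of the SLM critic:
# even if an unrelated token (id 20) has the highest raw score, the
# constrained decision only ever picks one of the two routes.
logits = [0.0] * 32
logits[20] = 9.0   # irrelevant token, ignored by the mask
logits[YES_ID] = 2.0
logits[NO_ID] = -1.0
print(constrained_route(logits))
```

Because the decision is a single pass plus an argmax over two values, the gatekeeper adds negligible latency compared with invoking a billion-parameter evaluator for the same yes/no judgment.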