[2603.20730] Reasoning Topology Matters: Network-of-Thought for Complex Reasoning Tasks
Computer Science > Computation and Language

arXiv:2603.20730 (cs) [Submitted on 21 Mar 2026]

Title: Reasoning Topology Matters: Network-of-Thought for Complex Reasoning Tasks
Authors: Fan Huang

Abstract: Existing prompting paradigms structure LLM reasoning in limited topologies: Chain-of-Thought (CoT) produces linear traces, while Tree-of-Thought (ToT) performs branching search. Yet complex reasoning often requires merging intermediate results, revisiting hypotheses, and integrating evidence from multiple sources. We propose Network-of-Thought (NoT), a framework that models reasoning as a directed graph with typed nodes and edges, guided by a heuristic-based controller policy. Across four benchmarks (GSM8K, Game of 24, HotpotQA, ProofWriter) and three models (GPT-4o-mini, Llama-3.3-70B-Instruct, Qwen2.5-72B-Instruct), we investigate when network topology outperforms chain or tree structures, whether LLM-generated heuristics can guide graph-based reasoning search, and the computation-accuracy tradeoff across topologies, evaluating each method on accuracy, topology simplicity, and token efficiency. Our results show that CoT remains effective for sequential tasks with GPT-4o-mini (89.5% on GSM8K), while NoT surpasses ToT on multi-hop reasoning (91.0% vs. 88.0% on HotpotQA with LLM-as-Judge). With 72B open-source models, NoT achi...
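The abstract describes reasoning as a directed graph with typed nodes and edges, where a node may merge results from multiple predecessors (unlike a tree) and a heuristic controller decides what to expand next. The paper's actual data structures and controller are not shown on this page; the following is a minimal illustrative sketch under those stated ideas, with all class, field, and type names (`Node`, `ReasoningGraph`, `select_next`, the "hypothesis"/"evidence"/"merge" labels) being assumptions for illustration only.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a typed reasoning graph, loosely following the
# abstract's description of Network-of-Thought. Node and edge type names
# are illustrative assumptions, not the paper's taxonomy.

@dataclass
class Node:
    node_id: int
    node_type: str          # e.g. "hypothesis", "evidence", "merge"
    content: str
    score: float = 0.0      # heuristic value a controller might assign

@dataclass
class ReasoningGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (src, dst, edge_type)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, src: int, dst: int, edge_type: str) -> None:
        # Unlike a chain or tree, a node may have several parents,
        # modeling the "merging intermediate results" the abstract mentions.
        self.edges.append((src, dst, edge_type))

def select_next(graph: ReasoningGraph) -> Node:
    """Toy controller policy: expand the highest-scoring node."""
    return max(graph.nodes.values(), key=lambda n: n.score)

g = ReasoningGraph()
g.add_node(Node(0, "hypothesis", "Assume x = 4", score=0.6))
g.add_node(Node(1, "evidence", "Doc A supports x = 4", score=0.9))
g.add_node(Node(2, "merge", "Combine hypothesis with evidence", score=0.3))
g.add_edge(0, 2, "supports")
g.add_edge(1, 2, "supports")   # two parents: a DAG, not a tree
print(select_next(g).node_id)  # → 1
```

The two edges into the "merge" node are the structural point: a tree search cannot represent a node that integrates evidence from two separate branches, whereas a directed graph can.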