[2602.12670] SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks
Summary
The paper introduces SkillsBench, a benchmark that evaluates how much agent Skills actually help across 86 tasks in 11 domains, revealing wide variation in the performance gains they deliver.
Why It Matters
As AI agents become more prevalent, understanding how to effectively measure and enhance their skills is crucial. SkillsBench provides a structured approach to evaluate agent performance, which can inform future developments in AI applications across various fields.
Key Takeaways
- SkillsBench benchmarks 86 tasks across 11 domains to evaluate agent skills.
- Curated skills improve pass rates by an average of 16.2 percentage points, though gains vary widely by domain (+4.5pp for Software Engineering up to +51.9pp for Healthcare).
- Self-generated skills provide no benefit, indicating that the gains depend on curated knowledge.
- Focused skills outperform comprehensive documentation, suggesting efficiency in skill design.
- Smaller models with skills can achieve results comparable to larger models without skills.
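The three-condition protocol behind these takeaways (no Skills, curated Skills, self-generated Skills, each scored by a deterministic verifier) can be sketched as a small harness. This is a minimal illustration only: `run_agent`, `verify`, and the task list are hypothetical placeholders, not the benchmark's actual API.

```python
# Illustrative sketch of a SkillsBench-style evaluation loop.
# All names here (run_agent, verify) are hypothetical placeholders,
# not the paper's actual code.

CONDITIONS = ["no_skills", "curated_skills", "self_generated_skills"]

def run_agent(task, condition):
    # Placeholder: a real harness would invoke an LLM agent here,
    # injecting the Skill package appropriate to the condition.
    return {"task": task, "condition": condition}

def verify(task, result):
    # Placeholder deterministic verifier: returns True iff the
    # trajectory's final artifact passes the task's checks.
    return True

def pass_rates(tasks):
    """Pass rate per condition, as a fraction in [0, 1]."""
    rates = {}
    for cond in CONDITIONS:
        passed = sum(verify(t, run_agent(t, cond)) for t in tasks)
        rates[cond] = passed / len(tasks)
    return rates

def pp_delta(rates, baseline="no_skills", treatment="curated_skills"):
    """Improvement in percentage points (e.g. the paper's +16.2pp)."""
    return 100 * (rates[treatment] - rates[baseline])
```

With a real agent and verifier plugged in, `pp_delta` recovers the paper's headline metric: curated-Skill pass rate minus the no-Skill baseline, in percentage points.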
Computer Science > Artificial Intelligence — arXiv:2602.12670 (cs), submitted on 13 Feb 2026.

Authors: Xiangyi Li, Wenbo Chen, Yimin Liu, Shenghan Zheng, Xiaokun Chen, Yifeng He, Yubo Li, Bingran You, Haotian Shen, Jiankai Sun, Shuyi Wang, Qunhong Zeng, Di Wang, Xuandong Zhao, Yuanli Wang, Roey Ben Chaim, Zonglin Di, Yipeng Gao, Junwei He, Yizhuo He, Liqiang Jing, Luyang Kong, Xin Lan, Jiachen Li, Songlin Li, Yijiang Li, Yueqian Lin, Xinyi Liu, Xuanqing Liu, Haoran Lyu, Ze Ma, Bowei Wang, Runhui Wang, Tianyu Wang, Wengao Ye, Yue Zhang, Hanwen Xing, Yiqi Xue, Steven Dillmann, Han-chung Lee

Abstract: Agent Skills are structured packages of procedural knowledge that augment LLM agents at inference time. Despite rapid adoption, there is no standard way to measure whether they actually help. We present SkillsBench, a benchmark of 86 tasks across 11 domains paired with curated Skills and deterministic verifiers. Each task is evaluated under three conditions: no Skills, curated Skills, and self-generated Skills. We test 7 agent-model configurations over 7,308 trajectories. Curated Skills raise average pass rate by 16.2 percentage points (pp), but effects vary widely by domain (+4.5pp for Software Engineering to +51.9pp for Healthcare) and ...