[2501.05675] Synergizing Large Language Models and Task-specific Models for Time Series Anomaly Detection
Computer Science > Artificial Intelligence
arXiv:2501.05675 (cs)
[Submitted on 10 Jan 2025 (v1), last revised 30 Mar 2026 (this version, v5)]

Title: Synergizing Large Language Models and Task-specific Models for Time Series Anomaly Detection
Authors: Feiyi Chen, Leilei Zhang, Guansong Pang, Roger Zimmermann, Shuiguang Deng

Abstract: In anomaly detection, methods based on large language models (LLMs) can incorporate expert knowledge by reading professional documents, while task-specific small models excel at extracting normal data patterns and detecting value fluctuations from the training data of target applications. Inspired by the human nervous system, where the brain stores expert knowledge and the peripheral nervous system and spinal cord handle specific tasks such as withdrawal and knee-jerk reflexes, we propose CoLLaTe, a framework designed to facilitate collaboration between LLMs and task-specific models, leveraging the strengths of both for anomaly detection. In particular, we first formulate the collaboration process and identify two key challenges in the collaboration: (1) the misalignment between the expression domains of the LLMs and task-specific small models, and (2) error accumulation arising from the predictions of both models. To address these challenges, we then introduce tw...
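The abstract identifies expression-domain misalignment as a key obstacle when fusing scores from an LLM and a task-specific detector. As a toy illustration of that idea (not the paper's CoLLaTe method), the sketch below normalizes each model's anomaly scores onto a shared [0, 1] scale before averaging them; the function names, weighting scheme, and example data are all hypothetical.

```python
import numpy as np

def align_scores(scores: np.ndarray) -> np.ndarray:
    """Min-max normalize scores into [0, 1] so both models share one
    expression domain (an illustrative alignment, not CoLLaTe's)."""
    lo, hi = scores.min(), scores.max()
    if hi == lo:
        return np.zeros_like(scores, dtype=float)
    return (scores - lo) / (hi - lo)

def combine(llm_scores, task_scores, w: float = 0.5) -> np.ndarray:
    """Weighted average of the two aligned anomaly-score streams."""
    a = align_scores(np.asarray(llm_scores, dtype=float))
    b = align_scores(np.asarray(task_scores, dtype=float))
    return w * a + (1 - w) * b

# Toy series: the task-specific model strongly flags index 3,
# the LLM-derived scores agree more weakly; the fused score
# still peaks at that index.
llm_raw = [0.1, 0.2, 0.1, 0.6, 0.2]
task_raw = [0.05, 0.1, 0.08, 0.9, 0.1]
fused = combine(llm_raw, task_raw)
print(int(np.argmax(fused)))  # → 3
```

A simple weighted average does not address the paper's second challenge (error accumulation across both models' predictions), which the full framework handles with additional components described in the paper.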