[2408.14073] Score-based change point detection via tracking the best of infinitely many experts

arXiv - Machine Learning

Summary

This paper presents a novel algorithm for nonparametric online change point detection that uses a score-based approach to track the best of infinitely many experts, with strong empirical performance on artificial and real-world datasets.

Why It Matters

Change point detection is crucial in fields such as finance and environmental monitoring, where it identifies abrupt shifts in the distribution of a data stream. This algorithm improves detection accuracy and efficiency, potentially strengthening decision-making in real-time applications.

Key Takeaways

  • Introduces a nonparametric algorithm for online change point detection.
  • Utilizes sequential score function estimation to improve accuracy.
  • Demonstrates effectiveness through rigorous testing on real-world datasets.
  • Offers high-probability bounds for the test statistic's behavior.
  • Addresses the challenge of tracking the best expert among infinitely many.

Computer Science > Machine Learning

arXiv:2408.14073 (cs) [Submitted on 26 Aug 2024 (v1), last revised 17 Feb 2026 (this version, v2)]

Title: Score-based change point detection via tracking the best of infinitely many experts
Authors: Anna Markovich, Nikita Puchkin

Abstract: We propose an algorithm for nonparametric online change point detection based on sequential score function estimation and the tracking-the-best-expert approach. The core of the procedure is a version of the fixed share forecaster tailored to the case of an infinite number of experts and quadratic loss functions. The algorithm shows promising results in numerical experiments on artificial and real-world data sets. Its performance is supported by rigorous high-probability bounds describing the behaviour of the test statistic in the pre-change and post-change regimes.

Subjects: Machine Learning (cs.LG); Methodology (stat.ME); Machine Learning (stat.ML)
Cite as: arXiv:2408.14073 [cs.LG] (or arXiv:2408.14073v2 [cs.LG] for this version), https://doi.org/10.48550/arXiv.2408.14073

Submission history:
From: Anna Markovich
[v1] Mon, 26 Aug 2024 07:56:17 UTC (1,252 KB)
[v2] Tue, 17 Feb 2026 12:41:59 UTC (603 KB)
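To make the "fixed share" idea concrete, here is a minimal illustrative sketch, not the paper's algorithm: the paper handles an infinite expert set with quadratic losses, while this toy approximates it with a finite grid of constant-prediction experts. The expert grid, learning rate `eta`, and share rate `alpha` are all assumptions chosen for illustration; the mixing step that redistributes a small fraction of weight each round is what lets the forecaster switch experts quickly after a change point.

```python
import numpy as np

def fixed_share(stream, expert_grid, eta=0.5, alpha=0.05):
    """Toy fixed-share forecaster over a finite grid of constant experts.

    stream      : 1-D array of observations
    expert_grid : candidate constant predictions (finite stand-in for
                  the paper's continuum of experts)
    eta         : learning rate of the exponential-weights update
    alpha       : share rate; larger alpha adapts faster after a change
    """
    k = len(expert_grid)
    w = np.full(k, 1.0 / k)                    # uniform initial weights
    preds = []
    for y in stream:
        preds.append(float(w @ expert_grid))   # weighted-average forecast
        loss = (expert_grid - y) ** 2          # quadratic loss per expert
        w = w * np.exp(-eta * loss)            # exponential weight update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / k        # fixed-share mixing step
    return np.array(preds)

# Toy stream with a mean shift at t = 50
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 0.3, 50),
                         rng.normal(2.0, 0.3, 50)])
grid = np.linspace(-1.0, 3.0, 41)
preds = fixed_share(stream, grid)
# After the change, the forecast should drift toward the new mean of 2
print(preds[49], preds[-1])
```

Without the mixing step (`alpha = 0`), the weights would concentrate irreversibly on the pre-change expert and react slowly after the shift; the paper's contribution is, in part, extending this style of guarantee to infinitely many experts with high-probability bounds on the resulting test statistic.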


