[2602.12471] Tight Bounds for Logistic Regression with Large Stepsize Gradient Descent in Low Dimension

arXiv - Machine Learning 4 min read Article

Summary

This paper presents a refined analysis of gradient descent for logistic regression on two-dimensional separable data, proving a tighter bound on the loss achieved with large step sizes and showing that the bound is optimal up to logarithmic factors.

Why It Matters

Understanding the dynamics of gradient descent on the logistic loss matters for choosing learning rates in practice. This research shows that aggressive step sizes, despite causing transient oscillations in the loss, yield faster convergence for binary classification, and it pins down how long the unstable phase lasts.

Key Takeaways

  • With a sufficiently large step size $\eta$, gradient descent finds a point with loss $\mathcal{O}(1/(\eta T))$, provided $T \geq \Omega(n/\gamma + 1/\gamma^2)$.
  • The analysis covers two-dimensional separable data, giving a fine-grained picture of the oscillatory dynamics in the subspace orthogonal to the max-margin classifier.
  • A lower bound on the time $\tau$ at which the loss transitions from unstable (non-monotonic) to stable (monotonic) is established, matching the upper bound up to logarithmic factors.

arXiv:2602.12471 (cs) · Submitted on 12 Feb 2026

Title: Tight Bounds for Logistic Regression with Large Stepsize Gradient Descent in Low Dimension
Authors: Michael Crawshaw, Mingrui Liu

Abstract: We consider the optimization problem of minimizing the logistic loss with gradient descent to train a linear model for binary classification with separable data. With a budget of $T$ iterations, it was recently shown that an accelerated $1/T^2$ rate is possible by choosing a large step size $\eta = \Theta(\gamma^2 T)$ (where $\gamma$ is the dataset's margin), despite the resulting non-monotonicity of the loss. In this paper, we provide a tighter analysis of gradient descent for this problem when the data is two-dimensional: we show that GD with a sufficiently large learning rate $\eta$ finds a point with loss smaller than $\mathcal{O}(1/(\eta T))$, as long as $T \geq \Omega(n/\gamma + 1/\gamma^2)$, where $n$ is the dataset size. Our improved rate comes from a tighter bound on the time $\tau$ that it takes for GD to transition from unstable (non-monotonic loss) to stable (monotonic loss), via a fine-grained analysis of the oscillatory dynamics of GD in the subspace orthogonal to the max-margin classifier. We also provide a lower bound on $\tau$ matching our upper bound up to logarithmic factors.
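The unstable-then-stable behavior described in the abstract can be illustrated numerically. The sketch below uses a hypothetical two-dimensional separable dataset and illustrative step-size values chosen for this demo, not the paper's constructions: with a large step size the loss is non-monotone early on (the unstable phase) before settling into a monotone decrease (the stable phase), while a conservative step size decreases the loss monotonically from the start.

```python
import numpy as np

# Toy 2D linearly separable data (hypothetical, for illustration only).
# All labels are +1; the conflicting second coordinates of x1 and x2
# drive the oscillation orthogonal to the separating direction.
X = np.array([[0.2, 1.0], [0.2, -1.0], [1.0, 0.5]])
y = np.array([1.0, 1.0, 1.0])

def logistic_loss(w):
    # Average logistic loss: (1/n) * sum_i log(1 + exp(-y_i <w, x_i>)),
    # computed stably via logaddexp.
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def gradient(w):
    margins = y * (X @ w)
    # d/dw log(1 + exp(-m_i)) = -sigmoid(-m_i) * y_i * x_i,
    # with sigmoid(-m) = exp(-logaddexp(0, m)) for numerical stability.
    coeffs = -y * np.exp(-np.logaddexp(0.0, margins))
    return (coeffs[:, None] * X).mean(axis=0)

def run_gd(eta, T):
    w = np.zeros(2)
    losses = []
    for _ in range(T):
        losses.append(logistic_loss(w))
        w = w - eta * gradient(w)
    return losses

T = 200
large = run_gd(eta=50.0, T=T)  # large step size: unstable early phase
small = run_gd(eta=1.0, T=T)   # conservative step size: monotone descent

# The large-eta trajectory increases the loss at some early steps
# before stabilizing; the small-eta trajectory never does.
non_monotone = any(large[t + 1] > large[t] for t in range(T - 1))
print("large-eta loss non-monotone?", non_monotone)
print("final losses (large eta, small eta):", large[-1], small[-1])
```

Despite the early spikes, the large-step-size run ends at a far smaller loss than the conservative run, which is the qualitative point of the paper's $\mathcal{O}(1/(\eta T))$ rate; the transition time from the oscillatory to the monotone phase is the quantity $\tau$ that the paper bounds tightly.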
