[2604.06169] In-Place Test-Time Training


arXiv - AI 4 min read

About this article


Computer Science > Machine Learning
arXiv:2604.06169 (cs) · Submitted on 7 Apr 2026

Title: In-Place Test-Time Training
Authors: Guhao Feng, Shengjie Luo, Kai Hua, Ge Zhang, Di He, Wenhao Huang, Tianle Cai

Abstract: The static "train then deploy" paradigm fundamentally limits Large Language Models (LLMs) from dynamically adapting their weights in response to the continuous streams of new information inherent in real-world tasks. Test-Time Training (TTT) offers a compelling alternative by updating a subset of model parameters (fast weights) at inference time, yet its potential in the current LLM ecosystem is hindered by critical barriers, including architectural incompatibility, computational inefficiency, and fast-weight objectives misaligned with language modeling. In this work, we introduce In-Place Test-Time Training (In-Place TTT), a framework that seamlessly endows LLMs with test-time training ability. In-Place TTT treats the final projection matrix of the ubiquitous MLP blocks as its adaptable fast weights, enabling a "drop-in" enhancement for LLMs without costly retraining from scratch. Furthermore, we replace TTT's generic reconstruction objective with a tailored, theoretically grounded objective explicitly aligned with the next-token-prediction task governing autoregressive language modeling. This principled objective, combined with an efficient chunk-...
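The core idea described in the abstract can be sketched in a few lines: freeze the model's "slow weights" and, at inference time, apply gradient updates only to the MLP's final projection matrix (the fast weights) on each incoming chunk. The sketch below is a toy illustration with made-up dimensions and a plain squared-error loss standing in for the paper's next-token-prediction-aligned objective; the function names (`mlp`, `ttt_step`) and the single-layer setup are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 16

# Slow weights: frozen at inference time.
W_in = rng.normal(size=(d_model, d_ff)) / np.sqrt(d_model)
# Fast weights: the MLP's final projection matrix, updated in place per chunk.
W_out = rng.normal(size=(d_ff, d_model)) / np.sqrt(d_ff)

def mlp(x, W_out):
    """Toy MLP block: up-projection, ReLU, down-projection (the fast weights)."""
    h = np.maximum(x @ W_in, 0.0)
    return h @ W_out

def ttt_step(x_chunk, target_chunk, W_out, lr=0.05):
    """One in-place fast-weight update on a chunk.

    Squared error is a stand-in here; the paper uses an objective aligned
    with next-token prediction."""
    h = np.maximum(x_chunk @ W_in, 0.0)
    pred = h @ W_out
    # Gradient of mean squared error w.r.t. W_out only; W_in stays frozen.
    grad = h.T @ (pred - target_chunk) / len(x_chunk)
    return W_out - lr * grad

# A chunk of inputs and targets arriving at inference time.
x = rng.normal(size=(4, d_model))
target = rng.normal(size=(4, d_model))

loss_before = np.mean((mlp(x, W_out) - target) ** 2)
for _ in range(20):
    W_out = ttt_step(x, target, W_out)
loss_after = np.mean((mlp(x, W_out) - target) ** 2)
print(loss_after < loss_before)
```

Because only `W_out` changes, the update is "in place": the architecture is untouched and no retraining of the frozen parameters is needed, which is what makes the approach a drop-in enhancement.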

Originally published on April 08, 2026. Curated by AI News.

Related Articles

Llms

Text-to-image is easy. Chaining LLMs to generate, critique, and iterate on images autonomously is a routing nightmare. AgentSwarms now supports Image generation playground and creative media workflows!

Hey everyone, If you’ve been building with AI agents, you know that orchestrating text is one thing, but stepping into multimodal workflo...

Reddit - Artificial Intelligence · 1 min ·
Llms

Zoom + Claude Connector

Zoom have just launched their Claude Connector bringing a whole host of data & information into your Claude workspace. As a Claude Co...

Reddit - Artificial Intelligence · 1 min ·
Llms

Must your chatbot rat you out?

New court cases may take chatbot conversations another step away from privacy You may recall that court cases have recently held users’ c...

Reddit - Artificial Intelligence · 1 min ·
Llms

[2512.07703] PVeRA: Probabilistic Vector-Based Random Matrix Adaptation

Abstract page for arXiv paper 2512.07703: PVeRA: Probabilistic Vector-Based Random Matrix Adaptation

arXiv - Machine Learning · 4 min ·

