[2604.03532] LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering

arXiv - AI 4 min read

About this article

Abstract page for arXiv paper 2604.03532: LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering

Computer Science > Computation and Language
arXiv:2604.03532 (cs) [Submitted on 4 Apr 2026]

Title: LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering
Authors: Sing Hieng Wong, Hassan Sajjad, A.B. Siddique

Abstract: Large language models (LLMs) show strong multilingual capabilities, yet reliably controlling the language of their outputs remains difficult. Representation-level steering addresses this by adding language-specific vectors to model activations at inference time, but identifying language-specific directions in the residual stream often relies on multilingual or parallel data that can be expensive to obtain. Sparse autoencoders (SAEs) decompose residual activations into interpretable, sparse feature directions and offer a natural basis for this search, yet existing SAE-based approaches face the same data constraint. We introduce LangFIR (Language Feature Identification via Random-token Filtering), a method that discovers language-specific SAE features using only a small amount of monolingual data and random-token sequences. Many SAE features consistently activated by target-language inputs do not encode language identity. Random-token sequences surface these language-agnostic features, allowing LangFIR to filter them out and...

Originally published on April 07, 2026. Curated by AI News.
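The abstract sketches the core filtering idea: SAE features that fire consistently on target-language text but also fire on random-token sequences are treated as language-agnostic and discarded, leaving candidate language-specific features. The snippet below is a minimal illustrative sketch of that filtering step, not the paper's implementation: it assumes the SAE exposes a per-token activation matrix of shape (tokens, features), and the function names and thresholds are hypothetical.

```python
import numpy as np

def feature_firing_rates(sae_acts: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Fraction of tokens on which each SAE feature activates above `threshold`.

    sae_acts: array of shape (num_tokens, num_features) holding the sparse
    feature activations produced by the SAE for a batch of tokens.
    """
    return (sae_acts > threshold).mean(axis=0)

def select_language_features(target_lang_acts: np.ndarray,
                             random_token_acts: np.ndarray,
                             fire_rate_min: float = 0.5,
                             random_rate_max: float = 0.05) -> np.ndarray:
    """Keep features that fire often on monolingual target-language tokens
    but rarely on random-token sequences; features active on random tokens
    are assumed language-agnostic and filtered out (hypothetical thresholds)."""
    target_rates = feature_firing_rates(target_lang_acts)
    random_rates = feature_firing_rates(random_token_acts)
    mask = (target_rates >= fire_rate_min) & (random_rates <= random_rate_max)
    return np.nonzero(mask)[0]  # indices of candidate language-specific features

# Usage sketch: activations gathered from a small monolingual corpus and from
# random-token sequences fed through the same model + SAE.
# lang_feats = select_language_features(target_lang_acts, random_token_acts)
```

The resulting feature indices would then correspond to SAE directions suitable for representation-level steering toward the target language, per the abstract's description.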


