[2602.23164] MetaOthello: A Controlled Study of Multiple World Models in Transformers


arXiv - Machine Learning 4 min read Article

Summary

The paper presents MetaOthello, a study exploring how transformers manage multiple world models through a controlled suite of Othello variants, revealing insights into shared representation and model organization.

Why It Matters

Understanding how transformers organize multiple world models is crucial for advancing machine learning interpretability and improving model design. This research provides a framework for analyzing complex model behaviors in generative tasks, which can enhance applications in AI safety and robustness.

Key Takeaways

  • MetaOthello introduces a controlled environment for studying transformers' handling of multiple game rules.
  • Transformers trained on mixed data do not partition their capacity into isolated per-variant sub-models; instead, they develop a shared board-state representation across variants.
  • The findings suggest that early layers maintain general representations while later layers specialize based on game identity.

Computer Science > Machine Learning · arXiv:2602.23164 (cs) · [Submitted on 26 Feb 2026]

Title: MetaOthello: A Controlled Study of Multiple World Models in Transformers
Authors: Aviral Chawla, Galen Hall, Juniper Lovato

Abstract: Foundation models must handle multiple generative processes, yet mechanistic interpretability largely studies capabilities in isolation; it remains unclear how a single transformer organizes multiple, potentially conflicting "world models". Previous experiments on Othello-playing neural networks test world-model learning but focus on a single game with a single set of rules. We introduce MetaOthello, a controlled suite of Othello variants with shared syntax but different rules or tokenizations, and train small GPTs on mixed-variant data to study how multiple world models are organized in a shared representation space. We find that transformers trained on mixed-game data do not partition their capacity into isolated sub-models; instead, they converge on a mostly shared board-state representation that transfers causally across variants. Linear probes trained on one variant can intervene on another's internal state with effectiveness approaching that of matched probes. For isomorphic games with token remapping, representations are equivalent up to a single orthogonal rotation that generali...
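The abstract's last claim — that representations of isomorphic, token-remapped games match up to a single orthogonal rotation — corresponds to the classical orthogonal Procrustes problem. Below is a minimal sketch with synthetic data (the dimensions and noise level are illustrative assumptions, not the paper's experiment): given representations from two variants, the best orthogonal rotation between them is recovered from the SVD of their cross-covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 32

# Hypothetical setup: variant B's representations are an orthogonal
# rotation of variant A's (as claimed for isomorphic games with
# remapped tokens), plus small noise.
reps_a = rng.normal(size=(n, d))
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthogonal matrix
reps_b = reps_a @ Q_true + 0.01 * rng.normal(size=(n, d))

# Orthogonal Procrustes: R = argmin ||A R - B||_F over orthogonal R,
# solved in closed form via the SVD of A^T B.
U, _, Vt = np.linalg.svd(reps_a.T @ reps_b)
R = U @ Vt

residual = np.linalg.norm(reps_a @ R - reps_b) / np.linalg.norm(reps_b)
print(f"relative alignment residual: {residual:.3f}")
```

A near-zero residual is exactly what "equivalent up to a single orthogonal rotation" would predict; a large residual would indicate the two variants use genuinely different representational geometries.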

Related Articles

I Asked ChatGPT 500 Questions. Here Are the Ads I Saw Most Often | WIRED
Ads are rolling out across the US on ChatGPT’s free tier. I asked OpenAI's bot 500 questions to see what these ads were like and how they...
Wired - AI · 9 min

Abacus.Ai Claw LLM consumes an incredible amount of credit without any usage :(
Three days ago, I clicked the "Deploy OpenClaw In Seconds" button to get an overview of the new service, but I didn't build any automatio...
Reddit - Artificial Intelligence · 1 min

Google’s Gemini AI app debuts in Hong Kong
Tech giant’s chatbot service tops Apple’s app store chart in the city.
AI Tools & Products · 2 min

Google Launches Gemini Import Tools to Poach Users From Rival AI Apps
Anyone looking to switch their AI assistant will find it surprisingly easy, as it only takes a few steps to move from A to B. This is not...
AI Tools & Products · 4 min
