[2603.22479] Cognitive Training for Language Models: Towards General Capabilities via Cross-Entropy Games
Mathematics > Optimization and Control

arXiv:2603.22479 (math) [Submitted on 23 Mar 2026]

Title: Cognitive Training for Language Models: Towards General Capabilities via Cross-Entropy Games

Authors: Clément Hongler, Franck Gabriel, Valentin Hartmann, Arthur Renard, Andrew Emil

Abstract: Defining a constructive process that builds general capabilities for language models automatically is considered an open problem in artificial intelligence. Towards this, we consider the problem of building a curriculum of tasks that grows a model via relevant skill discovery. We provide a concrete framework for this, using a family of tasks called cross-entropy games, which we postulate is universal in a suitable sense. We show that if the curriculum for relevant skill discovery can be grown by iterating a greedy optimization algorithm, then, under natural assumptions, there is essentially only one possible meta-objective (up to a few hyperparameters). We call the resulting process cognitive training. We postulate that, given sufficiently capable language models as players and meta-samplers and sufficient training time, cognitive training provides a principled path to relevant skill discovery; and hence, to the extent that general capabilities are achievable via greedy curriculum learning...
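The abstract does not spell out the form of cross-entropy games or the meta-objective; purely as an illustration of the kind of loop it describes, here is a minimal Python sketch of a single greedy curriculum step scored by a cross-entropy signal. Every name in it (Task, Player, cross_entropy, greedy_curriculum_step, and the target-cross-entropy scoring rule) is a hypothetical stand-in, not the paper's construction.

import math
from typing import Callable, List, Tuple

# Hypothetical interfaces (not from the paper): a task samples
# (context, target) pairs; a player reports P(target | context).
Task = Callable[[], Tuple[str, str]]
Player = Callable[[str, str], float]

def cross_entropy(player: Player, task: Task, n_samples: int = 64) -> float:
    """Monte Carlo estimate of the player's cross-entropy on a task."""
    total = 0.0
    for _ in range(n_samples):
        context, target = task()
        p = max(player(context, target), 1e-12)  # clamp to avoid log(0)
        total -= math.log(p)
    return total / n_samples

def greedy_curriculum_step(player: Player, candidates: List[Task],
                           target_ce: float = 1.0) -> Task:
    """Greedily pick the candidate task whose cross-entropy is closest to a
    target difficulty: neither trivial (CE near 0) nor hopeless (CE large).
    This scoring rule is an assumed stand-in for the paper's meta-objective."""
    return min(candidates,
               key=lambda t: abs(cross_entropy(player, t) - target_ce))

if __name__ == "__main__":
    easy = lambda: ("say a", "a")   # toy task the player already masters
    hard = lambda: ("say z", "z")   # toy task the player is poor at
    toy_player = lambda ctx, tgt: 0.9 if tgt == "a" else 0.05
    chosen = greedy_curriculum_step(toy_player, [easy, hard])
    print("chosen task CE:", cross_entropy(toy_player, chosen))

On this toy pair the step selects the easy task (cross-entropy of about 0.11 versus 3.0, closer to the target of 1.0); in the actual framework, both the candidate tasks and the scoring would come from the cross-entropy games and the meta-objective the paper derives.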