[2602.15951] MadEvolve: Evolutionary Optimization of Cosmological Algorithms with Large Language Models

arXiv - Machine Learning · 3 min read

Summary

The paper presents MadEvolve, a framework that uses large language models to evolve cosmological algorithms, demonstrating substantial performance gains on three computational cosmology tasks.

Why It Matters

MadEvolve represents a novel approach to algorithm optimization in cosmology, leveraging advances in machine learning. The work is significant because it improves the efficiency and accuracy of cosmological analysis pipelines, potentially leading to a better understanding of the universe's structure and evolution.

Key Takeaways

  • MadEvolve optimizes cosmological algorithms through iterative code changes.
  • The framework supports both gradient-based and gradient-free optimization methods.
  • Significant performance improvements were observed in three cosmological tasks.
  • MadEvolve generates comparative reports on algorithm performance.
  • The code and tasks are publicly available for further research.
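The core loop the takeaways describe — start from a baseline implementation, propose code changes, keep what improves the metric — can be sketched as follows. This is a hedged illustration only: `evaluate`, `propose_mutation`, and `evolve` are hypothetical names, the objective is a toy stand-in for a task metric, and in MadEvolve the mutation step would be an LLM rewriting code rather than a random perturbation of parameters.

```python
import random

def evaluate(params):
    """Toy performance metric (lower is better); stands in for a task score."""
    return (params["step"] - 0.3) ** 2 + (params["scale"] - 2.0) ** 2

def propose_mutation(params, rng):
    """Stand-in for an LLM-proposed change: perturb one free parameter.

    In the MadEvolve setting this step would instead ask an LLM to
    rewrite the candidate algorithm's source code.
    """
    key = rng.choice(list(params))
    new = dict(params)
    new[key] += rng.gauss(0.0, 0.1)
    return new

def evolve(baseline, generations=200, seed=0):
    """Greedy evolutionary loop: keep a candidate only if it improves."""
    rng = random.Random(seed)
    best, best_score = baseline, evaluate(baseline)
    for _ in range(generations):
        candidate = propose_mutation(best, rng)
        score = evaluate(candidate)
        if score < best_score:  # accept only strict improvements
            best, best_score = candidate, score
    return best, best_score

baseline = {"step": 1.0, "scale": 1.0}
best_params, best_score = evolve(baseline)
```

The greedy accept-if-better rule is the simplest possible selection strategy; real evolutionary frameworks typically maintain a population of candidates rather than a single incumbent.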

Astrophysics > Cosmology and Nongalactic Astrophysics
arXiv:2602.15951 (astro-ph) · Submitted on 17 Feb 2026

Title: MadEvolve: Evolutionary Optimization of Cosmological Algorithms with Large Language Models
Authors: Tianyi Li, Shihui Zang, Moritz Münchmeyer

Abstract: We develop a general framework to discover scientific algorithms and apply it to three problems in computational cosmology. Our code, MadEvolve, is similar to Google's AlphaEvolve, but places a stronger emphasis on free parameters and their optimization. Our code starts with a baseline human algorithm implementation, and then optimizes its performance metrics by making iterative changes to its code. As a further convenient feature, MadEvolve automatically generates a report that compares the input algorithm with the evolved algorithm, describes the algorithmic innovations and lists the free parameters and their function. Our code supports both auto-differentiable, gradient-based parameter optimization and gradient-free optimization methods. We apply MadEvolve to the reconstruction of cosmological initial conditions, 21cm foreground contamination reconstruction and effective baryonic physics in N-body simulations. In all cases, we find substantial improvements over the base algorithm. We make MadEvolve and our three tasks publicly available.
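The abstract distinguishes two ways of tuning an algorithm's free parameters: auto-differentiable, gradient-based optimization and gradient-free search. A minimal sketch of the two modes on a toy objective (nothing here is the paper's actual code; finite differences stand in for automatic differentiation, and random search stands in for more sophisticated gradient-free methods):

```python
import random

def metric(x):
    """Toy objective with minimum 0.5 at x = 1.5; stands in for a task metric."""
    return (x - 1.5) ** 2 + 0.5

def gradient_based(x0, lr=0.1, steps=100, eps=1e-6):
    """Gradient descent; a finite-difference gradient stands in for autodiff."""
    x = x0
    for _ in range(steps):
        grad = (metric(x + eps) - metric(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

def gradient_free(x0, steps=100, seed=0):
    """Random search: sample nearby points, keep the best one found so far."""
    rng = random.Random(seed)
    best, best_val = x0, metric(x0)
    for _ in range(steps):
        cand = best + rng.gauss(0.0, 0.2)
        val = metric(cand)
        if val < best_val:
            best, best_val = cand, val
    return best
```

Gradient-based optimization converges quickly when the metric is differentiable with respect to the parameters; gradient-free search trades efficiency for the ability to handle non-differentiable or noisy objectives, which is why a framework supporting both covers a wider range of tasks.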
