[2603.02787] Rethinking Code Similarity for Automated Algorithm Design with LLMs
Computer Science > Artificial Intelligence
arXiv:2603.02787 (cs) [Submitted on 3 Mar 2026]

Title: Rethinking Code Similarity for Automated Algorithm Design with LLMs
Authors: Rui Zhang, Zhichao Lu

Abstract: The rise of Large Language Model-based Automated Algorithm Design (LLM-AAD) has transformed algorithm development by autonomously generating code implementations of expert-level algorithms. Unlike traditional expert-driven algorithm development, in the LLM-AAD paradigm the main design principle behind an algorithm is often implicitly embedded in the generated code. Assessing algorithmic similarity directly from code, and distinguishing genuine algorithmic innovation from mere syntactic variation, therefore becomes essential. While various code similarity metrics exist, they fail to capture algorithmic similarity because they focus on surface-level syntax or output equivalence rather than the underlying algorithmic logic. We propose BehaveSim, a novel method that measures algorithmic similarity through the lens of problem-solving behavior, represented as the sequence of intermediate solutions produced during execution, dubbed problem-solving trajectories (PSTrajs). By quantifying the alignment between PSTrajs using dynamic time warping (DTW), BehaveSim distinguishes algorithms with divergent logic despite syntactic or output-level similarities. We demonst...
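The core idea in the abstract can be sketched concretely. Below is a minimal, illustrative implementation, not the paper's code: each PSTraj is modeled as the sequence of objective values of the intermediate solutions an algorithm emits during its run, and two trajectories are compared with the classic dynamic programming formulation of DTW. The trajectory data here is hypothetical, invented purely for illustration.

```python
def dtw_distance(traj_a, traj_b):
    """Classic O(n*m) dynamic time warping distance between two 1-D
    trajectories (sequences of intermediate objective values)."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    # dp[i][j] = minimal cost of aligning traj_a[:i] with traj_b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(traj_a[i - 1] - traj_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch traj_a
                                  dp[i][j - 1],      # stretch traj_b
                                  dp[i - 1][j - 1])  # step both
    return dp[n][m]

# Hypothetical PSTrajs: two greedy-style runs that take different numbers
# of steps but follow the same descent logic, versus an erratic search.
greedy_run_1 = [10.0, 7.0, 5.0, 4.5, 4.4]
greedy_run_2 = [10.0, 7.2, 5.1, 4.4]
erratic_run  = [10.0, 9.8, 6.0, 8.5, 5.0]

# The same underlying behavior yields a much smaller DTW cost even though
# the trajectories have different lengths (DTW warps the time axis).
print(dtw_distance(greedy_run_1, greedy_run_2))  # small
print(dtw_distance(greedy_run_1, erratic_run))   # larger
```

Note the design point this illustrates: because DTW aligns trajectories nonlinearly in time, two implementations of the same algorithmic idea that differ in step count or pacing still score as behaviorally close, while a syntactically similar program with divergent search dynamics does not.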