[2510.07940] TTOM: Test-Time Optimization and Memorization for Compositional Video Generation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2510.07940 (cs)
[Submitted on 9 Oct 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: TTOM: Test-Time Optimization and Memorization for Compositional Video Generation
Authors: Leigang Qu, Ziyang Wang, Na Zheng, Wenjie Wang, Liqiang Nie, Tat-Seng Chua

Abstract: Video Foundation Models (VFMs) exhibit remarkable visual generation performance but struggle in compositional scenarios (e.g., motion, numeracy, and spatial relation). In this work, we introduce Test-Time Optimization and Memorization (TTOM), a training-free framework that aligns VFM outputs with spatiotemporal layouts during inference for better text-image alignment. Rather than directly intervening on latents or attention per sample, as in existing work, we integrate and optimize new parameters guided by a general layout-attention objective. Furthermore, we formulate video generation within a streaming setting and maintain historical optimization contexts with a parametric memory mechanism that supports flexible operations, such as insert, read, update, and delete. Notably, we find that TTOM disentangles compositional world knowledge, showing powerful transferability and generalization. Experimental results on the T2V-CompBench and Vbench benchmarks establish TTOM as an ...
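The abstract describes a parametric memory over historical optimization contexts that supports insert, read, update, and delete. The following is a minimal illustrative sketch of such a memory, not the authors' implementation: the class name, keying scheme (one entry per prompt/context key holding optimized parameters), and FIFO eviction policy are all assumptions for exposition.

```python
from typing import Dict, List, Optional


class ParametricMemory:
    """Illustrative memory of per-context optimized parameters.

    Keys (e.g., a prompt or layout identifier) map to the parameter
    vector optimized at test time for that context. The eviction
    policy below (FIFO on overflow) is an assumption; the paper does
    not specify one in the abstract.
    """

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store: Dict[str, List[float]] = {}

    def insert(self, key: str, params: List[float]) -> None:
        # Evict the oldest entry when full (hypothetical FIFO policy).
        if key not in self._store and len(self._store) >= self.capacity:
            oldest = next(iter(self._store))
            del self._store[oldest]
        self._store[key] = list(params)

    def read(self, key: str) -> Optional[List[float]]:
        # Return stored parameters, or None if this context is unseen.
        return self._store.get(key)

    def update(self, key: str, params: List[float]) -> None:
        # Overwrite parameters for an existing context.
        if key not in self._store:
            raise KeyError(key)
        self._store[key] = list(params)

    def delete(self, key: str) -> None:
        # Remove a context; silently ignore missing keys.
        self._store.pop(key, None)


if __name__ == "__main__":
    mem = ParametricMemory(capacity=2)
    mem.insert("prompt_a", [0.1, 0.2])
    mem.insert("prompt_b", [0.3])
    mem.update("prompt_a", [0.5, 0.6])
    print(mem.read("prompt_a"))
    mem.delete("prompt_b")
    print(mem.read("prompt_b"))
```

At inference time, one would read the memory before optimizing a new sample, reuse any matching parameters as initialization, and write the refined parameters back with insert or update.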