[2504.17180] We'll Fix it in Post: Improving Text-to-Video Generation with Neuro-Symbolic Feedback
Computer Science > Computer Vision and Pattern Recognition

arXiv:2504.17180 (cs)

[Submitted on 24 Apr 2025 (v1), last revised 31 Mar 2026 (this version, v3)]

Title: We'll Fix it in Post: Improving Text-to-Video Generation with Neuro-Symbolic Feedback

Authors: Minkyu Choi, S P Sharan, Harsh Goel, Sahil Shah, Sandeep Chinchali

Abstract: Current text-to-video (T2V) generation models are increasingly popular due to their ability to produce coherent videos from textual prompts. However, these models often struggle to generate semantically and temporally consistent videos when dealing with longer, more complex prompts involving multiple objects or sequential events. Additionally, the high computational costs associated with training or fine-tuning make direct improvements impractical. To overcome these limitations, we introduce NeuS-E, a novel zero-training video refinement pipeline that leverages neuro-symbolic feedback to automatically enhance video generation, achieving superior alignment with the prompts. Our approach first derives the neuro-symbolic feedback by analyzing a formal video representation and pinpoints semantically inconsistent events, objects, and their corresponding frames. This feedback then guides targeted edits to the original video. Extensive empirical evaluations on both open-sou...
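The abstract describes a generate-verify-edit loop: produce a video with a frozen T2V model, extract a formal (symbolic) representation, check it against the prompt, and patch only the offending frames. Below is a minimal sketch of that loop under stated assumptions; the Violation record and the generate, to_symbolic, check, and edit callables are hypothetical placeholders standing in for the paper's components, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Violation:
    """One prompt event the video fails to satisfy, localized to a frame span.
    (Hypothetical structure for the abstract's 'inconsistent events, objects,
    and their corresponding frames'.)"""
    event: str
    start_frame: int
    end_frame: int

def refine(
    prompt: str,
    generate: Callable[[str], object],                # frozen base T2V model
    to_symbolic: Callable[[object], object],          # formal video representation
    check: Callable[[str, object], List[Violation]],  # neuro-symbolic consistency check
    edit: Callable[[object, Violation], object],      # localized edit of offending frames
    max_rounds: int = 3,
) -> object:
    """Zero-training refinement in the spirit of the abstract: generate once,
    then repeatedly verify the symbolic trace and edit only where it fails."""
    video = generate(prompt)
    for _ in range(max_rounds):
        violations = check(prompt, to_symbolic(video))
        if not violations:           # every event/object in the prompt is satisfied
            return video
        for v in violations:         # feedback guides targeted edits, not regeneration
            video = edit(video, v)
    return video
```

Note the zero-training property the abstract emphasizes: the loop never updates model weights; all improvement comes from the checker localizing failures and the edit step rewriting only those frames.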