[2602.20394] Selecting Optimal Variable Order in Autoregressive Ising Models
Summary
This article presents a method for selecting variable orderings in autoregressive Ising models: the Markov random field underlying the data is learned first, and its graph structure is then used to construct the ordering, improving sampling efficiency and sample fidelity while reducing model complexity.
Why It Matters
The study addresses a critical aspect of autoregressive models, where variable ordering significantly impacts the complexity and fidelity of generated samples. By optimizing this process, researchers can improve model accuracy and efficiency, which is vital in fields like machine learning and statistical modeling.
Key Takeaways
- Optimal variable ordering can enhance the performance of autoregressive models.
- Graph-informed strategies lead to reduced model complexity and improved sample fidelity.
- The proposed method is validated through numerical experiments on Ising models.
Abstract
Autoregressive models enable tractable sampling from learned probability distributions, but their performance critically depends on the variable ordering used in the factorization, via the complexity of the resulting conditional distributions. We propose to learn the Markov random field describing the underlying data, and use the inferred graphical model structure to construct optimized variable orderings. We illustrate our approach on two-dimensional image-like models, where a structure-aware ordering leads to restricted conditioning sets, thereby reducing model complexity. Numerical experiments on Ising models with discrete data demonstrate that graph-informed orderings yield higher-fidelity generated samples compared to naive variable orderings.
Paper Details
arXiv:2602.20394 [stat.ML] (v1)
Title: Selecting Optimal Variable Order in Autoregressive Ising Models
Authors: Shiba Biswal, Marc Vuffray, Andrey Y. Lokhov
Submitted: Mon, 23 Feb 2026 22:15:10 UTC (172 KB)
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
DOI: https://doi.org/10.48550/arXiv.2602.20394 (arXiv-issued via DataCite, registration pending)
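The abstract's core idea — ordering variables so that each conditional depends on few predecessors — can be made concrete with a toy experiment. The sketch below is not the paper's algorithm; it is a minimal illustration, assuming a known 4-neighbor Ising grid as the "learned" graph, that compares a naive (random) ordering against a graph-informed (breadth-first) ordering. As a proxy for conditioning-set complexity it measures the largest "frontier": the number of already-placed variables that still neighbor an unplaced one.

```python
from collections import deque
import random

def grid_edges(n):
    """Edges of an n x n lattice with 4-neighbor coupling (Ising grid)."""
    edges = []
    for r in range(n):
        for c in range(n):
            if r + 1 < n:
                edges.append(((r, c), (r + 1, c)))
            if c + 1 < n:
                edges.append(((r, c), (r, c + 1)))
    return edges

def adjacency(edges):
    """Neighbor sets for each vertex of the (learned) graphical model."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def max_frontier(order, adj):
    """Largest number of placed vertices that still touch an unplaced
    neighbor -- a simple proxy for how many predecessors a conditional
    distribution must track under this ordering."""
    placed, worst = set(), 0
    for v in order:
        placed.add(v)
        frontier = sum(1 for u in placed if adj[u] - placed)
        worst = max(worst, frontier)
    return worst

def bfs_order(adj, start):
    """Graph-informed ordering: breadth-first traversal of the graph,
    so each new variable sits next to recently placed ones."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in sorted(adj[v] - seen):
            seen.add(u)
            queue.append(u)
    return order

n = 8
adj = adjacency(grid_edges(n))
random.seed(0)
naive = sorted(adj)
random.shuffle(naive)
informed = bfs_order(adj, (0, 0))
print("naive (random) max frontier:  ", max_frontier(naive, adj))
print("graph-informed max frontier:  ", max_frontier(informed, adj))
```

On the 8x8 grid the BFS ordering keeps the frontier near the lattice width, while a random ordering leaves most of the lattice exposed at once — the same mechanism by which the paper's structure-aware orderings restrict conditioning sets.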