[2602.17975] Generating adversarial inputs for a graph neural network model of AC power flow
Summary
This paper presents a method for generating adversarial inputs for a graph neural network model of AC power flow, demonstrating that small input changes can induce large prediction errors.
Why It Matters
Understanding adversarial inputs in neural networks is crucial for ensuring the reliability and safety of AI systems, particularly in critical applications like power flow management. This research highlights the need for robust training methods and verification processes to mitigate risks associated with neural network predictions.
Key Takeaways
- The study formulates optimization problems to create adversarial inputs for a graph neural network.
- Generated adversarial points yield large prediction errors, up to 3.4 per-unit in reactive power, against AC power flow solutions.
- Adversarial constraints can be satisfied with perturbations as small as 0.04 per-unit in voltage magnitude on a single bus, indicating limited model robustness.
- The findings underscore the importance of developing rigorous verification methods for neural networks.
- This research contributes to the ongoing discourse on AI safety and reliability in critical systems.
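The first takeaway, maximizing the error between a surrogate's prediction and the true solution over bounded inputs, can be sketched as projected gradient ascent. Everything below is an illustrative stand-in: `surrogate` and `true_solution` are toy functions, not the paper's CANOS-PF model or the AC power flow equations, and the bounds and step size are arbitrary.

```python
import numpy as np

# Toy stand-ins (assumptions, not the paper's models):
# true_solution plays the role of the exact AC power flow map,
# surrogate plays the role of an imperfect neural-network approximation.
def true_solution(x):
    return np.tanh(x)

def surrogate(x):
    return x  # crude linearization; diverges from tanh for large |x|

def prediction_error(x):
    """Squared error between the surrogate's prediction and the true map."""
    return np.sum((surrogate(x) - true_solution(x)) ** 2)

def generate_adversarial_input(x0, lb, ub, steps=300, lr=0.1, eps=1e-5):
    """Projected gradient ascent: maximize the prediction error while
    keeping the input inside its box bounds (the feasible input region)."""
    x = x0.copy()
    basis = np.eye(len(x0))
    for _ in range(steps):
        # Central finite-difference gradient of the error w.r.t. the input
        grad = np.array([
            (prediction_error(x + eps * e) - prediction_error(x - eps * e))
            / (2 * eps)
            for e in basis
        ])
        x = np.clip(x + lr * grad, lb, ub)  # ascend, then project onto bounds
    return x

x0 = 0.5 * np.ones(3)
lb, ub = -2.0 * np.ones(3), 2.0 * np.ones(3)
x_adv = generate_adversarial_input(x0, lb, ub)
print(prediction_error(x0), prediction_error(x_adv))
```

In this toy, the ascent drives the input to the bound where the linear surrogate deviates most from the nonlinear map; the paper instead solves the analogous optimization against a trained graph neural network on a 14-bus grid.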
Computer Science > Machine Learning
arXiv:2602.17975 (cs) [Submitted on 20 Feb 2026]
Title: Generating adversarial inputs for a graph neural network model of AC power flow
Authors: Robert Parker
Abstract: This work formulates and solves optimization problems to generate input points that yield high errors between a neural network's predicted AC power flow solution and solutions to the AC power flow equations. We demonstrate this capability on an instance of the CANOS-PF graph neural network model, as implemented by the PF$\Delta$ benchmark library, operating on a 14-bus test grid. Generated adversarial points yield errors as large as 3.4 per-unit in reactive power and 0.08 per-unit in voltage magnitude. When minimizing the perturbation from a training point necessary to satisfy adversarial constraints, we find that the constraints can be met with as little as a 0.04 per-unit perturbation in voltage magnitude on a single bus. This work motivates the development of rigorous verification and robust training methods for neural network surrogate models of AC power flow.
Subjects: Machine Learning (cs.LG); Systems and Control (eess.SY)
Cite as: arXiv:2602.17975 [cs.LG]
DOI: https://doi.org/10.48550/arXiv.2602.17975
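The abstract's second experiment, minimizing the perturbation from a training point needed to satisfy adversarial constraints, can be illustrated with a one-dimensional search: grow a perturbation on a single input coordinate until the error first reaches a threshold. The functions, the coordinate choice, and the threshold `tau` below are hypothetical stand-ins, not the paper's models, formulation, or per-unit values.

```python
import numpy as np

# Toy stand-ins (assumptions): a crude surrogate of a nonlinear "true" map,
# mirroring the paper's surrogate-vs-AC-power-flow setting in miniature.
def true_solution(x):
    return np.tanh(x)

def surrogate(x):
    return x

def prediction_error(x):
    """Worst-case absolute error between surrogate and true map."""
    return np.max(np.abs(surrogate(x) - true_solution(x)))

def minimal_single_coordinate_perturbation(x0, coord, tau, hi=5.0, tol=1e-6):
    """Bisect on the perturbation size applied to one coordinate until the
    error first reaches the adversarial threshold tau (error is monotone
    in the perturbation size for this toy example)."""
    err = lambda d: prediction_error(x0 + d * np.eye(len(x0))[coord])
    lo_d, hi_d = 0.0, hi
    if err(hi_d) < tau:
        return None  # threshold unreachable within the search range
    while hi_d - lo_d > tol:
        mid = 0.5 * (lo_d + hi_d)
        if err(mid) >= tau:
            hi_d = mid  # invariant: err(hi_d) >= tau
        else:
            lo_d = mid
    return hi_d

x0 = np.zeros(3)
d_min = minimal_single_coordinate_perturbation(x0, coord=0, tau=0.5)
print(d_min)
```

The paper poses this as a constrained optimization over all inputs rather than a line search on one coordinate; the sketch only conveys the question being asked, namely how small a perturbation suffices to trigger adversarial behavior.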