[2602.17952] Hardware-Friendly Input Expansion for Accelerating Function Approximation
Summary
This paper presents a hardware-friendly method for accelerating function approximation by expanding the input space, improving both convergence speed and approximation accuracy in neural networks.
Why It Matters
The proposed technique addresses challenges in optimizing neural networks, particularly in high-frequency function approximation. By breaking parameter symmetries, it offers a cost-effective solution that can improve performance in various scientific and engineering applications, making it relevant for researchers and practitioners in machine learning.
Key Takeaways
- Input-space expansion can significantly accelerate training convergence.
- The method reduces the number of iterations needed for optimization by an average of 12%.
- Using constants like π in input expansion improves approximation accuracy, reducing MSE by 66.3%.
- The approach maintains the original parameter count while enhancing performance.
- Ablation studies highlight the importance of expansion dimensions and constant selection.
Computer Science > Machine Learning — arXiv:2602.17952 (cs)
Submitted on 20 Feb 2026
Authors: Hu Lou, Yin-Jun Gao, Dong-Xiao Zhang, Tai-Jiao Du, Jun-Jie Zhang, Jia-Rui Zhang
Abstract: One-dimensional function approximation is a fundamental problem in scientific computing and engineering applications. While neural networks possess powerful universal approximation capabilities, their optimization process is often hindered by flat loss landscapes induced by parameter-space symmetries, leading to slow convergence and poor generalization, particularly for high-frequency components. Inspired by the principle of \emph{symmetry breaking} in physics, this paper proposes a hardware-friendly approach for function approximation through \emph{input-space expansion}. The core idea involves augmenting the original one-dimensional input (e.g., $x$) with constant values (e.g., $\pi$) to form a higher-dimensional vector (e.g., $[\pi, \pi, x, \pi, \pi]$), effectively breaking parameter symmetries without increasing the network's parameter count. We evaluate the method on ten representative one-dimensional functions, including smooth, discontinuous, high-frequency, and non-differentiable functions. Experimental results demonstrate that input-space expans...
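The expansion step described in the abstract is simple to implement as a preprocessing stage. The sketch below shows one plausible way to do it, assuming a NumPy pipeline; the function name, the expansion width of 5, and the placement of $x$ in the middle slot are illustrative choices, not the authors' actual code.

```python
import numpy as np

def expand_input(x, dim=5, const=np.pi, x_pos=2):
    """Pad each scalar input with a constant to form a higher-dimensional
    vector, e.g. x -> [pi, pi, x, pi, pi]. A sketch of the paper's
    input-space expansion: the network's first layer then sees a
    dim-dimensional input, which breaks parameter symmetries without
    adding any trainable parameters to the expansion itself."""
    x = np.asarray(x, dtype=np.float64).reshape(-1)   # (N,) batch of scalars
    out = np.full((x.shape[0], dim), const)           # fill every slot with the constant
    out[:, x_pos] = x                                 # place x in the chosen slot
    return out

# A batch of two scalar inputs becomes two 5-dimensional vectors:
print(expand_input([0.5, 1.0]))
```

The expanded array would then be fed to the network in place of the raw scalar; since the constants are fixed, the transformation costs only a few writes per sample, which is what makes it hardware-friendly.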