[2602.17952] Hardware-Friendly Input Expansion for Accelerating Function Approximation

Summary

This paper presents a hardware-friendly method for accelerating function approximation through input-space expansion, improving training convergence and approximation accuracy in neural networks.

Why It Matters

The proposed technique addresses challenges in optimizing neural networks, particularly in high-frequency function approximation. By breaking parameter symmetries, it offers a cost-effective solution that can improve performance in various scientific and engineering applications, making it relevant for researchers and practitioners in machine learning.

Key Takeaways

  • Input-space expansion can significantly accelerate training convergence.
  • The method reduces the number of iterations needed for optimization by an average of 12%.
  • Using constants like π in input expansion improves approximation accuracy, reducing MSE by 66.3%.
  • The approach maintains the original parameter count while enhancing performance.
  • Ablation studies highlight the importance of expansion dimensions and constant selection.
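The expansion described above can be sketched in a few lines. This is a minimal illustration of the idea, not code from the paper: the function names and the batch helper are our own, and the five-slot layout with x in the middle follows the `[π, π, x, π, π]` example given in the abstract.

```python
import numpy as np

def expand_input(x, dim=5, center=2, constant=np.pi):
    """Embed a scalar input x into a dim-dimensional vector filled with a
    chosen constant, with x placed at index `center`.
    For dim=5, center=2 this yields [pi, pi, x, pi, pi]."""
    v = np.full(dim, constant, dtype=float)
    v[center] = x
    return v

def expand_batch(xs, dim=5, center=2, constant=np.pi):
    """Batch version: expand a length-N array into an (N, dim) matrix."""
    out = np.full((len(xs), dim), constant, dtype=float)
    out[:, center] = np.asarray(xs, dtype=float)
    return out
```

Note that the expansion is a fixed, parameter-free preprocessing step (a fill plus one write per sample), which is what makes it cheap to realize in hardware.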

Computer Science > Machine Learning — arXiv:2602.17952 (cs) [Submitted on 20 Feb 2026]

Title: Hardware-Friendly Input Expansion for Accelerating Function Approximation

Authors: Hu Lou, Yin-Jun Gao, Dong-Xiao Zhang, Tai-Jiao Du, Jun-Jie Zhang, Jia-Rui Zhang

Abstract: One-dimensional function approximation is a fundamental problem in scientific computing and engineering applications. While neural networks possess powerful universal approximation capabilities, their optimization process is often hindered by flat loss landscapes induced by parameter-space symmetries, leading to slow convergence and poor generalization, particularly for high-frequency components. Inspired by the principle of *symmetry breaking* in physics, this paper proposes a hardware-friendly approach for function approximation through *input-space expansion*. The core idea involves augmenting the original one-dimensional input (e.g., $x$) with constant values (e.g., $\pi$) to form a higher-dimensional vector (e.g., $[\pi, \pi, x, \pi, \pi]$), effectively breaking parameter symmetries without increasing the network's parameter count. We evaluate the method on ten representative one-dimensional functions, including smooth, discontinuous, high-frequency, and non-differentiable functions. Experimental results demonstrate that input-space expans...
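One way to see the symmetry-breaking effect, sketched under our own interpretation rather than taken from the paper: for a first layer with weight matrix $W$, the constant slots of the expanded input contribute a weight-dependent offset, so hidden neurons that share the same explicit bias no longer receive identical pre-activations. All variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
H, DIM, X_IDX = 4, 5, 2          # hidden width, expansion width, slot holding x

W = rng.normal(size=(H, DIM))    # first-layer weights of a toy network
x = 0.7
v = np.full(DIM, np.pi)
v[X_IDX] = x                     # expanded input [pi, pi, x, pi, pi]

pre = W @ v                      # hidden-layer pre-activations

# Equivalent decomposition: a scalar weight on x plus a per-neuron,
# weight-dependent offset contributed by the constant slots.
w_x = W[:, X_IDX]
offset = np.pi * (W.sum(axis=1) - w_x)
assert np.allclose(pre, w_x * x + offset)
```

Because `offset` differs across neurons whenever their weight rows differ, permuting otherwise-symmetric neurons changes the pre-activations, which is consistent with the abstract's claim that the expansion breaks parameter symmetries without adding parameters.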

