[2512.19057] Efficient Personalization of Generative Models via Optimal Experimental Design


arXiv - Machine Learning

Summary

This paper presents a novel method for efficiently personalizing generative models using optimal experimental design to select preference queries that maximize information gain from human feedback.

Why It Matters

As generative models become increasingly integral to various applications, optimizing their alignment with user preferences is crucial. This research addresses the challenge of obtaining user feedback efficiently, potentially leading to better user experiences and more effective AI systems.

Key Takeaways

  • Introduces a method for preference query selection in generative models.
  • Utilizes optimal experimental design to enhance data efficiency.
  • Demonstrates improved personalization with fewer queries compared to random selection.
  • Presents a statistically and computationally efficient algorithm, ED-PBRL.
  • Empirical results show effectiveness in personalizing text-to-image models.

Computer Science > Machine Learning · arXiv:2512.19057 (cs)

[Submitted on 22 Dec 2025 (v1), last revised 16 Feb 2026 (this version, v2)]

Title: Efficient Personalization of Generative Models via Optimal Experimental Design

Authors: Guy Schacht, Ziyad Sheebaelhamd, Riccardo De Santi, Mojmír Mutný, Andreas Krause

Abstract: Preference learning from human feedback can align generative models with the needs of end-users. Human feedback is costly and time-consuming to obtain, which creates demand for data-efficient query selection methods. This work presents a novel approach that leverages optimal experimental design to ask humans the most informative preference queries, from which the latent reward function modeling user preferences can be elicited efficiently. We formulate preference query selection as maximizing the information about the underlying latent preference model. We show that this problem has a convex optimization formulation, and introduce a statistically and computationally efficient algorithm, ED-PBRL, that is supported by theoretical guarantees and can efficiently construct structured queries such as images or text. We empirically demonstrate the proposed framework by personalizing a text-to-image generative model to user-specific styles, showing that it requ...
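The paper itself does not spell out ED-PBRL here, but the core idea it builds on, selecting preference queries that maximize information about a latent reward model, can be illustrated with a minimal sketch. The snippet below greedily picks query pairs under a Bradley-Terry preference model by maximizing the log-determinant of the accumulated Fisher information (a classic D-optimal design criterion). The function names, the greedy strategy, and the ridge regularizer are illustrative assumptions, not the paper's algorithm, which instead solves a convex relaxation with guarantees.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def greedy_d_optimal(X, theta_hat, k, ridge=1e-3):
    """Greedy D-optimal selection of k preference queries (illustrative sketch).

    X         : (n, d) array; row i is the feature difference phi(a_i) - phi(b_i)
                for candidate query "do you prefer a_i or b_i?".
    theta_hat : (d,) current estimate of the latent reward parameters.
    Under a Bradley-Terry model P(a > b) = sigmoid(theta^T x), the Fisher
    information contributed by query i is p_i (1 - p_i) x_i x_i^T.
    """
    n, d = X.shape
    p = sigmoid(X @ theta_hat)
    w = p * (1.0 - p)                      # per-query Fisher weights
    M = ridge * np.eye(d)                  # regularized information matrix
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            # log det of the information matrix if query i were added
            _, logdet = np.linalg.slogdet(M + w[i] * np.outer(X[i], X[i]))
            if logdet > best_gain:
                best, best_gain = i, logdet
        chosen.append(best)
        M = M + w[best] * np.outer(X[best], X[best])
    return chosen
```

In this sketch, queries whose outcome is nearly certain under the current estimate (p close to 0 or 1) carry little Fisher weight and are rarely selected, which is one intuition for why informed query selection can beat random selection with far fewer human labels.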

