[2604.04497] One Model for All: Multi-Objective Controllable Language Models




Computer Science > Machine Learning

arXiv:2604.04497 (cs)

[Submitted on 6 Apr 2026]

Title: One Model for All: Multi-Objective Controllable Language Models

Authors: Qiang He, Yucheng Yang, Tianyi Zhou, Meng Fang, Mykola Pechenizkiy, Setareh Maghsudi

Abstract: Aligning large language models (LLMs) with human preferences is critical for enhancing their safety, helpfulness, humor, faithfulness, and other qualities. Current reinforcement learning from human feedback (RLHF) mainly optimizes a fixed reward learned from average human ratings, which can weaken adaptability and controllability across varying preferences. Creating personalized LLMs, by contrast, requires aligning them with individual preferences, which is non-trivial due to the scarcity of data per user and the diversity of user preferences across multi-objective trade-offs, ranging from emphasizing empathy in some contexts to demanding efficiency and precision in others. Can we train one LLM to produce personalized outputs across different user preferences on the Pareto front? In this paper, we introduce Multi-Objective Control (MOC), which trains a single LLM to directly generate responses in the preference-defined regions of the Pareto front. Our approach introduces multi-objective optimization (MOO) principles i...
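
The abstract cuts off before the method details, but the stated goal (one model whose outputs can be steered to preference-defined regions of the Pareto front) is commonly realized by conditioning the policy on a preference vector and scalarizing the per-objective rewards. Below is a minimal sketch of that general pattern, assuming linear scalarization and prompt-based preference conditioning; the function names, objective list, and reward values are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def sample_preference(num_objectives: int, rng: np.random.Generator) -> np.ndarray:
    # Draw a preference vector w from the probability simplex, so the
    # weights over objectives are non-negative and sum to 1.
    return rng.dirichlet(np.ones(num_objectives))

def scalarize(rewards: np.ndarray, w: np.ndarray) -> float:
    # Linear scalarization: collapse per-objective reward-model scores
    # into one training signal r = <w, rewards> for a standard RLHF update.
    return float(np.dot(w, rewards))

def preference_prompt(w: np.ndarray, objectives: list[str], user_prompt: str) -> str:
    # Condition the single shared model on the user's preference by
    # prepending the weights to the prompt, so one LLM covers all trade-offs.
    spec = ", ".join(f"{name}={weight:.2f}" for name, weight in zip(objectives, w))
    return f"[preferences: {spec}]\n{user_prompt}"

# One simulated training step with a randomly drawn preference.
rng = np.random.default_rng(0)
objectives = ["helpfulness", "humor", "faithfulness"]
w = sample_preference(len(objectives), rng)
prompt = preference_prompt(w, objectives, "Explain RLHF in two sentences.")
rewards = np.array([0.8, 0.2, 0.9])  # placeholder scores from per-objective reward models
print(prompt)
print("scalarized reward:", scalarize(rewards, w))
```

Under this reading, nothing is retrained at inference time: changing the preference vector in the prompt moves the same model to a different region of the Pareto front, which is the controllability the title refers to.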



