[2511.00040] Semi-Supervised Preference Optimization with Limited Feedback

Summary

This paper discusses Semi-Supervised Preference Optimization (SSPO), which reduces the need for extensive labeled feedback in preference optimization for language models by utilizing unpaired data effectively.

Why It Matters

The research addresses a significant challenge in machine learning—reducing the resource burden of acquiring labeled data. By demonstrating that SSPO can achieve high performance with minimal labeled samples, this work has implications for making preference optimization more accessible and efficient in AI applications.

Key Takeaways

  • SSPO learns from a small number of pairwise preference labels and a large pool of unpaired samples.
  • The study proves the existence of an optimal reward threshold for effective pseudo-labeling.
  • SSPO shows improved data efficiency, outperforming traditional methods while using fewer labeled samples.
  • The method maintains human alignment while significantly reducing acquisition costs.
  • Extensive experiments validate SSPO's effectiveness across various datasets.

Computer Science > Machine Learning

arXiv:2511.00040 (cs)
[Submitted on 28 Oct 2025 (v1), last revised 19 Feb 2026 (this version, v3)]

Title: Semi-Supervised Preference Optimization with Limited Feedback
Authors: Seonggyun Lee, Sungjun Lim, Seojin Park, Soeun Cheon, Kyungwoo Song

Abstract: The field of preference optimization has made outstanding contributions to the alignment of language models with human preferences. Despite these advancements, recent methods still rely heavily on substantial paired (labeled) feedback data, leading to substantial resource expenditures. To address these challenges, we study the problem of Semi-Supervised Preference Optimization (SSPO), in which the idea is to learn from both a small number of pairwise preference labels and a large pool of unpaired samples simultaneously. Our key theoretical contribution proves the existence of an optimal reward threshold capable of separating winning and losing responses with high probability, which enables a principled pseudo-labeling of unpaired data. By leveraging these pseudo-labels, SSPO effectively distills latent preferences from large-scale unpaired data, thus maintaining human alignment while drastically reducing acquisition costs. Extensive experiments across datasets validate this remarkable data efficiency; for instance, SSPO trained with M...
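The pseudo-labeling idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the `reward` function, the threshold `tau`, and the all-pairs construction of preference pairs are assumptions made for the example.

```python
from itertools import product

def pseudo_label(responses, reward, tau):
    """Split unpaired responses into pseudo-winners and pseudo-losers
    around a reward threshold tau (illustrative, not the paper's exact rule)."""
    winners = [r for r in responses if reward(r) >= tau]
    losers = [r for r in responses if reward(r) < tau]
    return winners, losers

def make_pseudo_pairs(winners, losers):
    """Form preference-style (chosen, rejected) pairs from the pseudo-labels."""
    return list(product(winners, losers))

# Toy stand-in reward model: scores longer responses higher.
unpaired = ["a short reply", "ok", "a much more detailed reply", "no"]
winners, losers = pseudo_label(unpaired, reward=len, tau=10)
pairs = make_pseudo_pairs(winners, losers)
```

In a real pipeline, `reward` would come from a model fit on the small labeled set, and the resulting `pairs` would feed a standard preference-optimization objective alongside the genuinely labeled pairs.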
