[2602.03003] Open Problems in Differentiable Social Choice: Learning Mechanisms, Decisions, and Alignment

arXiv - Machine Learning

Summary

The paper surveys differentiable social choice, a paradigm that formulates voting rules, mechanisms, and aggregation procedures as learnable, differentiable models, and identifies 18 open research problems in this emerging field.

Why It Matters

As machine learning systems increasingly incorporate social choice mechanisms, understanding how to optimize these processes is crucial for ethical and effective decision-making. This paper highlights the intersection of AI and social choice, paving the way for future research and applications.

Key Takeaways

  • Differentiable social choice formulates voting rules as learnable models.
  • The paper synthesizes existing work on auctions, decision aggregation, and preference learning.
  • It identifies 18 open problems that could shape future research agendas.
  • Highlights the implicit use of social choice mechanisms in current machine learning systems.
  • Calls for explicit normative scrutiny in the design of these systems.
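
The first takeaway, that voting rules can be formulated as learnable models, can be illustrated with a minimal sketch (not taken from the paper): replacing the hard argmax of plurality voting with a temperature-controlled softmax makes winner selection differentiable, so the rule can sit inside a gradient-based training loop. The function names and ballot values below are hypothetical.

```python
import math

def softmax(scores, tau):
    """Temperature-controlled softmax: a smooth relaxation of argmax."""
    m = max(s / tau for s in scores)                 # subtract max for numerical stability
    exps = [math.exp(s / tau - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def soft_plurality(ballots, tau=0.1):
    """Ballots: one score vector per voter, one entry per candidate.
    Returns a probability distribution over winners instead of a hard pick."""
    n_candidates = len(ballots[0])
    totals = [sum(b[c] for b in ballots) for c in range(n_candidates)]
    return softmax(totals, tau)

# Three voters score three candidates; candidate 1 has the highest total score.
ballots = [[1.0, 2.0, 0.0],
           [0.5, 1.5, 1.0],
           [2.0, 1.0, 0.0]]
probs = soft_plurality(ballots, tau=0.1)
```

As the temperature `tau` approaches zero, the output distribution concentrates on the plurality winner, recovering the classical non-differentiable rule; larger temperatures trade that fidelity for smoother gradients.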

Computer Science > Artificial Intelligence
arXiv:2602.03003 (cs)
[Submitted on 3 Feb 2026 (v1), last revised 20 Feb 2026 (this version, v3)]

Title: Open Problems in Differentiable Social Choice: Learning Mechanisms, Decisions, and Alignment
Authors: Zhiyu An, Wan Du

Abstract: Social choice has become a foundational component of modern machine learning systems. From auctions and resource allocation to the alignment of large generative models, machine learning pipelines increasingly aggregate heterogeneous preferences and incentives into collective decisions. In effect, many contemporary machine learning systems already implement social choice mechanisms, often implicitly and without explicit normative scrutiny. This Review surveys differentiable social choice: an emerging paradigm that formulates voting rules, mechanisms, and aggregation procedures as learnable, differentiable models optimized from data. We synthesize work across auctions, decision aggregation, and preference learning, showing how classical axioms and impossibility results reappear as objectives, constraints, and optimization trade-offs. We conclude by identifying 18 open problems defining a new research agenda at the intersection of machine learning and social choice theory.

Subjects: Artificial Intelligence (cs.AI); Machine L...
