[2511.21331] The More, the Merrier: Contrastive Fusion for Higher-Order Multimodal Alignment
Abstract page for arXiv paper 2511.21331: The More, the Merrier: Contrastive Fusion for Higher-Order Multimodal Alignment
Alignment, bias, regulation, and responsible AI
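The abstract body is not reproduced on this page, so as a rough, hypothetical illustration of the kind of pairwise contrastive objective that higher-order multimodal alignment work generalizes, here is a minimal NumPy sketch. The function names, tensor shapes, temperature value, and the all-pairs averaging scheme are assumptions for illustration, not the method of paper 2511.21331:

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    # a, b: (N, d) embeddings of the same N items in two modalities.
    # Matched pairs share a row index, so they lie on the diagonal
    # of the similarity matrix.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                     # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # cross-entropy on matched pairs

def pairwise_multimodal_loss(embeddings, temperature=0.07):
    # embeddings: list of (N, d) arrays, one per modality.
    # One simple way to extend two-modality contrastive training to
    # three or more modalities: average the symmetric InfoNCE loss
    # over every unordered modality pair.
    total, count = 0.0, 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            total += info_nce(embeddings[i], embeddings[j], temperature)
            total += info_nce(embeddings[j], embeddings[i], temperature)
            count += 2
    return total / count

rng = np.random.default_rng(0)
image, text, audio = (rng.normal(size=(8, 32)) for _ in range(3))
loss = pairwise_multimodal_loss([image, text, audio])
print(round(loss, 4))
```

With unaligned random embeddings the loss is large; feeding identical embeddings for every modality drives it toward zero, which is the sanity check one would expect of any contrastive alignment objective.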
The paper introduces Group-Equivariant Posterior Consistency (GEPC), a method for detecting out-of-distribution data in diffusion models ...
This article presents a novel method called PURGE for reinforcement unlearning in large language models, addressing the challenge of safe...
This paper presents a novel framework for out-of-distribution (OOD) detection in molecular complexes using diffusion models tailored for ...
This paper presents novel frameworks for communication compression in distributed learning, addressing bandwidth constraints in federated...
This paper evaluates backdoor attacks against federated learning model adaptation, focusing on the impact of Low-Rank Adaptation (LoRA) o...
The paper introduces VerifyBench, a new benchmarking framework for evaluating reference-based reward systems in large language models, hi...
This article explores the effectiveness of test-time scaling for enhancing medical reasoning in large language models, presenting the m1 ...
This article presents the Uncertain Safety Critic (USC), a novel approach to enhance safety in reinforcement learning (RL) by balancing s...
The paper presents SNAP-UQ, a novel method for single-pass uncertainty estimation in TinyML, enhancing reliability in on-device monitorin...
This article presents a novel technique for extracting safety classifiers from aligned large language models (LLMs) to address vulnerabil...
The paper presents a systematization of knowledge on data minimization in machine learning, addressing its importance in regulatory compl...
PromptGuard introduces a novel method for moderating unsafe content in text-to-image models, enhancing safety without sacrificing image q...
This paper benchmarks stochastic approximation algorithms for fairness-constrained training of deep neural networks, addressing theoretic...
This article presents a novel end-to-end model extraction method for deep neural networks, addressing limitations in existing techniques ...
The paper introduces AgentNoiseBench, a framework for evaluating the robustness of tool-using LLM agents under noisy conditions, highligh...
The article presents VERA-MH, an open-source evaluation tool designed to assess the safety of AI in mental health contexts, focusing on s...
DIAGPaper introduces a multi-agent framework for identifying and prioritizing weaknesses in scientific papers, addressing limitations of ...
The paper presents FedEFC, a novel approach to federated learning that addresses the challenges posed by noisy labels through techniques ...
This article examines the biases in time series forecasting (TSF) due to arbitrary lookback windows and channel dependence, advocating fo...
This paper introduces a method for precise control of attribute intensities in Large Language Models (LLMs) through targeted representati...