[2511.21331] The More, the Merrier: Contrastive Fusion for Higher-Order Multimodal Alignment
Abstract page for arXiv paper 2511.21331: The More, the Merrier: Contrastive Fusion for Higher-Order Multimodal Alignment
Alignment, bias, regulation, and responsible AI
The paper discusses machine unlearning, focusing on the privacy risks associated with undeleted data when specific data points are remove...
This article reviews the current landscape of foundation models (FMs) in medical imaging, discussing their design principles, application...
The paper introduces FlipSet, a benchmark for assessing visual perspective taking in vision-language models, revealing significant egocen...
The paper presents a novel approach to Membership Inference Attacks (MIAs) by developing an optimal attack strategy, SeMI*, leveraging mo...
This article investigates the temporal variability in the performance of the GPT-4o model, revealing significant daily and weekly pattern...
The paper presents MetaDOAR, a scalable meta-controller for solving simulation-based network security games, enhancing multi-agent reinfo...
This paper presents a framework for analyzing the vulnerabilities of Safe Reinforcement Learning (Safe RL) policies against adversarial a...
This paper explores transfer learning in linear regression using multiple pretrained models, highlighting the benefits of overparameteriz...
This survey presents the NLP-PRISM framework for identifying privacy risks in social media NLP applications, analyzing 203 peer-reviewed ...
This article reviews the role of AI in decision support, analyzing whether AI systems act as tools or collaborative teammates. It highlig...
The paper presents a Lightweight Explainable Guardrail (LEG) method for classifying unsafe prompts in AI systems, utilizing a multi-task ...
The paper presents GICDM, a method to mitigate hubness in distance-based evaluations of generative models, enhancing reliability and alig...
This article discusses the development of clinical NLP models that mitigate risks associated with temporal leakage, emphasizing the impor...
The paper explores the bias spillover effect in large language models (LLMs), revealing how targeted fairness alignment can inadvertently...
This paper presents a novel method for correcting bias in binary classification tasks using locally private examples, leveraging the Inve...
This article explores the geometric limitations of steering personality traits in large language models (LLMs), revealing that traits are...
The paper introduces the Easy Data Unlearning Bench, a unified benchmarking suite aimed at simplifying the evaluation of machine unlearni...
This article evaluates two explainability methods, Integrated Gradients and SHAP, for fault detection in chemical processes using an LSTM...
This paper explores the reliability of AI agents, proposing twelve metrics to evaluate their performance across dimensions like consisten...
This paper investigates the implicit bias of momentum-based optimizers like Adam and Muon in smooth homogeneous neural networks, extendin...