[2602.20696] PromptCD: Test-Time Behavior Enhancement via Polarity-Prompt Contrastive Decoding


arXiv - AI · 4 min read

Summary

The paper presents PromptCD, a method for enhancing AI behavior at test time using polarity-prompt contrastive decoding, improving alignment with human values without additional training.

Why It Matters

As AI systems increasingly influence daily life, ensuring they align with human preferences is crucial. PromptCD offers a cost-effective and efficient way to enhance AI behavior post-training, addressing the limitations of existing alignment methods that require extensive resources.

Key Takeaways

  • PromptCD enhances AI behavior at test time without retraining.
  • It uses polarity prompts to contrast model responses for better alignment.
  • Demonstrated improvements in helpfulness, honesty, and harmlessness for LLMs.
  • Applicable to both LLMs and Vision-Language Models (VLMs).
  • Offers a general and cost-efficient strategy for reliable behavior control.

Computer Science > Artificial Intelligence
arXiv:2602.20696 (cs) [Submitted on 24 Feb 2026]

Title: PromptCD: Test-Time Behavior Enhancement via Polarity-Prompt Contrastive Decoding

Authors: Baolong Bi, Yuyao Ge, Shenghua Liu, Yuchen He, Siqian Tong, Lizhe Chen, Lingrui Mei, Zehao Li, Yiwei Wang, Yujun Cai, Ming-Hsuan Yang, Xueqi Cheng

Abstract: Reliable AI systems require large language models (LLMs) to exhibit behaviors aligned with human preferences and values. However, most existing alignment approaches operate at training time and rely on additional high-quality data, incurring significant computational and annotation costs. While recent work has shown that contrastive decoding can leverage a model's internal distributions to improve specific capabilities, its applicability remains limited to narrow behavioral scopes and scenarios. In this work, we introduce Polarity-Prompt Contrastive Decoding (PromptCD), a test-time behavior control method that generalizes contrastive decoding to broader enhancement settings. PromptCD constructs paired positive and negative guiding prompts for a target behavior and contrasts model responses, specifically token-level probability distributions in LLMs and visual attention patterns in VLMs, to reinforce desirable outcomes. This formulation extends contrastive decoding to a wide range of...
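The core idea in the abstract, contrasting the token distributions induced by a positive and a negative guiding prompt, can be sketched in a few lines. The combination rule and the `alpha` weight below are illustrative assumptions (a common contrastive-decoding form), not the paper's reported formulation, and the toy logits stand in for real model outputs:

```python
def prompt_contrastive_logits(pos, neg, alpha=1.0):
    """Combine per-token logits from a positively prompted pass (pos)
    and a negatively prompted pass (neg): amplify what the positive
    prompt favors, penalize what the negative prompt favors."""
    return [(1 + alpha) * p - alpha * n for p, n in zip(pos, neg)]

def greedy(logits):
    """Index of the highest-scoring token."""
    return max(range(len(logits)), key=logits.__getitem__)

# Toy 3-token vocabulary. Token 0 scores high under BOTH prompts
# (i.e. it is not behavior-specific); token 1 is favored only by
# the positive prompt.
pos = [2.0, 1.9, 0.1]  # logits given e.g. "respond honestly"
neg = [2.5, 0.2, 0.1]  # logits given e.g. "respond deceptively"

print(greedy(pos))                                        # 0
print(greedy(prompt_contrastive_logits(pos, neg, 1.0)))   # 1
```

Greedy decoding on the positive prompt alone would still pick token 0, but the contrast suppresses it because the negative prompt likes it just as much, steering decoding toward the token that distinguishes the desired behavior.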


