[2602.21223] Measuring Pragmatic Influence in Large Language Model Instructions

arXiv - AI · 3 min read

Summary

This article explores how pragmatic framing in large language model instructions influences their behavior, introducing a framework to measure this effect systematically.

Why It Matters

Understanding pragmatic framing is crucial for optimizing interactions with large language models (LLMs). This research highlights how subtle changes in prompts can significantly alter model responses, impacting applications in AI development and user interaction design.

Key Takeaways

  • Pragmatic framing can shift LLM behavior without changing task content.
  • The study introduces a framework to measure the influence of framing on directive prioritization.
  • A taxonomy organizes 400 framing instantiations into 13 strategies across 4 mechanism clusters.
  • Across five LLMs of different families and sizes, framing produces consistent, structured shifts in model responses.
  • Treating pragmatic framing as a measurable property of instruction following supports more systematic prompt design.

Computer Science > Computation and Language

arXiv:2602.21223 (cs) [Submitted on 2 Feb 2026]

Title: Measuring Pragmatic Influence in Large Language Model Instructions

Authors: Yilin Geng, Omri Abend, Eduard Hovy, Lea Frermann

Abstract: It is not only what we ask large language models (LLMs) to do that matters, but also how we prompt. Phrases like "This is urgent" or "As your supervisor" can shift model behavior without altering task content. We study this effect as pragmatic framing: contextual cues that shape directive interpretation rather than task specification. While prior work exploits such cues for prompt optimization or probes them as security vulnerabilities, pragmatic framing itself has not been treated as a measurable property of instruction following. Measuring this influence systematically remains challenging, requiring controlled isolation of framing cues. We introduce a framework with three novel components: directive-framing decomposition separating framing context from task specification; a taxonomy organizing 400 instantiations of framing into 13 strategies across 4 mechanism clusters; and priority-based measurement that quantifies influence through observable shifts in directive prioritization. Across five LLMs of different families and sizes, influence mechanisms cause consistent and structured shi...

