[2603.23530] Did You Forget What I Asked? Prospective Memory Failures in Large Language Models
Computer Science > Computation and Language
arXiv:2603.23530 (cs)
[Submitted on 7 Mar 2026]

Title: Did You Forget What I Asked? Prospective Memory Failures in Large Language Models
Authors: Avni Mittal

Abstract: Large language models often fail to satisfy formatting instructions when they must simultaneously perform demanding tasks. We study this behaviour through a prospective-memory-inspired lens from cognitive psychology, using a controlled paradigm that combines verifiable formatting constraints with benchmark tasks of increasing complexity. Across three model families and over 8,000 prompts, compliance drops by 2-21% under concurrent task load. Vulnerability is highly type-dependent: terminal constraints (requiring action at the response boundary) degrade most, with drops up to 50%, while avoidance constraints remain comparatively robust. A salience-enhanced format (explicit instruction framing plus a trailing reminder) recovers much of the lost compliance, restoring performance to 90-100% in many settings. Interference is bidirectional: formatting constraints can also reduce task accuracy, with one model's GSM8K accuracy dropping from 93% to 27%. In additional stacking experiments, joint compliance declines sharply as constraints accumulate. All results use deterministic programmatic checkers without an LLM-as-judge.
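The abstract distinguishes terminal constraints (an action required at the response boundary) from avoidance constraints (something that must never appear), both verified by deterministic programmatic checkers rather than an LLM judge. As a minimal sketch of what such checkers might look like — the specific token and forbidden word below are illustrative assumptions, not the paper's actual constraint set:

```python
import re

def check_terminal(response: str, token: str = "END_OF_RESPONSE") -> bool:
    """Terminal constraint: the response must end with a required token."""
    return response.rstrip().endswith(token)

def check_avoidance(response: str, forbidden: str = "however") -> bool:
    """Avoidance constraint: a forbidden word must not appear anywhere."""
    return re.search(rf"\b{re.escape(forbidden)}\b", response, re.IGNORECASE) is None

if __name__ == "__main__":
    reply = "The answer is 42. However, note the caveat. END_OF_RESPONSE"
    print(check_terminal(reply))   # True: ends with the required token
    print(check_avoidance(reply))  # False: forbidden word present
```

Checkers of this shape are fully deterministic, so a compliance rate is simply the fraction of responses for which the relevant checker returns True.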