[2603.27006] The Last Fingerprint: How Markdown Training Shapes LLM Prose
Computer Science > Computation and Language
arXiv:2603.27006 (cs)
[Submitted on 27 Mar 2026]

Title: The Last Fingerprint: How Markdown Training Shapes LLM Prose
Authors: E. M. Freeburg

Abstract: Large language models produce em dashes at varying rates, and the observation that some models "overuse" them has become one of the most widely discussed markers of AI-generated text. Yet no mechanistic account of this pattern exists, and the parallel observation that LLMs default to markdown-formatted output has never been connected to it. We propose that the em dash is markdown leaking into prose -- the smallest surviving unit of the structural orientation that LLMs acquire from markdown-saturated training corpora. We present a five-step genealogy connecting training data composition, structural internalization, the dual-register status of the em dash, and post-training amplification. We test this with a two-condition suppression experiment across twelve models from five providers (Anthropic, OpenAI, Meta, Google, DeepSeek): when models are instructed to avoid markdown formatting, overt features (headers, bullets, bold) are eliminated or nearly eliminated, but em dashes persist -- except in Meta's Llama models, which produce none at all. Em dash frequency and suppression resistance vary from 0.0 per 1,000 words (Llama) to 9.1 (GPT-4.1 under suppression) ...
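The abstract's measurements (em dashes per 1,000 words, plus detection of the overt markdown features said to be suppressible: headers, bullets, bold) are straightforward to operationalize. Below is a minimal Python sketch of such a measurement under stated assumptions; the function names and regular expressions are illustrative, not the paper's actual instrumentation, and the metric is assumed to be a raw count over whitespace-delimited words.

```python
import re

def em_dash_rate(text: str) -> float:
    """Em dashes per 1,000 words.

    Assumption: counts U+2014 characters over whitespace-delimited
    words, matching the unit the abstract reports (per 1,000 words).
    """
    words = len(text.split())
    dashes = text.count("\u2014")
    return 1000.0 * dashes / words if words else 0.0

# The three overt markdown features the abstract names as (nearly)
# eliminated under suppression instructions. Patterns are illustrative.
MARKDOWN_PATTERNS = {
    "header": re.compile(r"^#{1,6}\s", re.MULTILINE),   # e.g. "## Results"
    "bullet": re.compile(r"^\s*[-*+]\s", re.MULTILINE), # e.g. "- item"
    "bold":   re.compile(r"\*\*[^*\n]+\*\*"),           # e.g. "**term**"
}

def markdown_feature_counts(text: str) -> dict:
    """Count occurrences of each overt markdown feature in a response."""
    return {name: len(pat.findall(text)) for name, pat in MARKDOWN_PATTERNS.items()}

if __name__ == "__main__":
    sample = "## Results\n\n- Models differ\u2014sharply\u2014in dash use.\n"
    print(em_dash_rate(sample))           # dashes per 1,000 words
    print(markdown_feature_counts(sample))
```

Applied to both conditions of a suppression experiment, a per-model comparison of these two measurements would distinguish features that vanish under instruction (headers, bullets, bold) from those that persist (em dashes), which is the contrast the abstract reports.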