[2603.23219] Decoding AI Authorship: Can LLMs Truly Mimic Human Style Across Literature and Politics?
Computer Science > Computation and Language
arXiv:2603.23219 (cs)
[Submitted on 24 Mar 2026]

Title: Decoding AI Authorship: Can LLMs Truly Mimic Human Style Across Literature and Politics?
Authors: Nasser A Alsadhan

Abstract: Amidst the rising capabilities of generative AI to mimic specific human styles, this study investigates the ability of state-of-the-art large language models (LLMs), including GPT-4o, Gemini 1.5 Pro, and Claude Sonnet 3.5, to emulate the authorial signatures of prominent literary and political figures: Walt Whitman, William Wordsworth, Donald Trump, and Barack Obama. Utilizing a zero-shot prompting framework with strict thematic alignment, we generated synthetic corpora evaluated through a complementary framework combining transformer-based classification (BERT) and interpretable machine learning (XGBoost). Our methodology integrates Linguistic Inquiry and Word Count (LIWC) markers, perplexity, and readability indices to assess the divergence between AI-generated and human-authored text. Results demonstrate that AI-generated mimicry remains highly detectable, with XGBoost models trained on a restricted set of eight stylometric features achieving accuracy comparable to high-dimensional neural classifiers. Feature importance analyses identify perplexity as the primary discriminative metric, revealing a signi...
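Since the abstract identifies perplexity as the primary discriminative feature, a minimal sketch may help illustrate the metric. This toy version uses a Laplace-smoothed unigram model fit on a reference corpus; the paper presumably computes perplexity under a neural language model, so the model choice and function name here are illustrative assumptions, not the authors' method:

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens):
    """Perplexity of test_tokens under a Laplace-smoothed unigram model
    fit on train_tokens. A toy stand-in for LLM-based perplexity:
    PPL = exp(-(1/N) * sum_i log p(w_i))."""
    counts = Counter(train_tokens)
    vocab_size = len(counts) + 1          # +1 slot for unseen tokens
    total = len(train_tokens)
    log_prob = 0.0
    for tok in test_tokens:
        # Add-one smoothing so unseen tokens get nonzero probability
        p = (counts.get(tok, 0) + 1) / (total + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(test_tokens))

# Text resembling the reference corpus scores lower perplexity
# than out-of-vocabulary text, which is the intuition behind
# using perplexity as a stylometric feature.
train = "the sea the sky the sea the sky".split()
ppl_in = unigram_perplexity(train, "the sea".split())
ppl_out = unigram_perplexity(train, "quantum flux".split())
print(ppl_in < ppl_out)  # True
```

Lower perplexity on human-style reference text versus higher perplexity on divergent text is what lets a downstream classifier (e.g. XGBoost over such features) separate the two corpora.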