[2508.02515] PoeTone: A Framework for Constrained Generation of Structured Chinese Songci with LLMs

arXiv - Machine Learning · 4 min read

Summary

The paper presents PoeTone, a framework for generating structured Chinese Songci poetry with large language models (LLMs), together with a comprehensive evaluation framework for assessing how well the models satisfy the form's structural constraints.

Why It Matters

This research is significant as it explores the intersection of AI and cultural heritage, demonstrating how LLMs can be utilized to create culturally relevant literary forms. It also provides insights into the capabilities and limitations of LLMs in constrained generative tasks, which is crucial for advancing natural language processing applications.

Key Takeaways

  • The PoeTone framework enables constrained generation of Songci poetry with LLMs.
  • The evaluation framework combines formal conformity scores, automated LLM-based assessment, human evaluation, and classification-based probing tasks.
  • Fine-tuning lightweight LLMs with the critic's feedback improves the formal conformity of the generated poetry.

Computer Science > Computation and Language
arXiv:2508.02515 (cs)
[Submitted on 4 Aug 2025 (v1), last revised 18 Feb 2026 (this version, v2)]

Title: PoeTone: A Framework for Constrained Generation of Structured Chinese Songci with LLMs
Authors: Zhan Qu, Shuzhou Yuan, Michael Färber

Abstract: This paper presents a systematic investigation into the constrained generation capabilities of large language models (LLMs) in producing Songci, a classical Chinese poetry form characterized by strict structural, tonal, and rhyme constraints defined by Cipai templates. We first develop a comprehensive, multi-faceted evaluation framework that includes: (i) a formal conformity score, (ii) automated quality assessment using LLMs, (iii) human evaluation, and (iv) classification-based probing tasks. Using this framework, we evaluate the generative performance of 18 LLMs, including 3 proprietary models and 15 open-source models across 4 families, under five prompting strategies: zero-shot, one-shot, completion-based, instruction-based, and chain-of-thought. Finally, we propose a Generate-Critic architecture in which the evaluation framework functions as an automated critic. Leveraging the critic's feedback as a scoring function for best-of-N selection, we fine-tune 3 lightweight open-source LLMs via supervised fine-tuning...
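The Generate-Critic loop described in the abstract amounts to best-of-N selection: sample N candidate poems, score each with the critic, and keep the highest-scoring one. The sketch below is a minimal toy illustration, not the paper's implementation; the `generate` sampler, the length-only `conformity_score`, and the four-line template are all assumptions standing in for a real LLM and a real Cipai conformity check (which would also cover tone patterns and rhyme).

```python
import random

# Toy Cipai-style template: required number of characters per line.
# (Illustrative only; real Cipai templates also fix tones and rhyme.)
TEMPLATE = [7, 5, 7, 5]

def generate(template, rng):
    """Stand-in for an LLM sampler: emits lines of roughly the right length."""
    return ["字" * max(1, n + rng.choice([-1, 0, 0, 1])) for n in template]

def conformity_score(poem, template):
    """Critic: fraction of lines whose length matches the template exactly."""
    if len(poem) != len(template):
        return 0.0
    matches = sum(len(line) == n for line, n in zip(poem, template))
    return matches / len(template)

def best_of_n(template, n=16, seed=0):
    """Best-of-N selection: sample n candidates, keep the top-scoring one."""
    rng = random.Random(seed)
    candidates = [generate(template, rng) for _ in range(n)]
    return max(candidates, key=lambda p: conformity_score(p, template))
```

In the paper this selection signal is then reused as supervision: the best-of-N outputs become fine-tuning targets for lightweight open-source models.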
