[2602.19548] Beyond a Single Extractor: Re-thinking HTML-to-Text Extraction for LLM Pretraining

arXiv - Machine Learning · 3 min read · Article

Summary

This paper explores the limitations of using a single extractor for HTML-to-text conversion in LLM pretraining, proposing a union of multiple extractors to enhance token yield and downstream task performance.

Why It Matters

As large language models (LLMs) increasingly rely on web data for training, the HTML-to-text extraction step deserves scrutiny. This research shows that combining diverse extraction methods improves coverage of Internet data and, for structured content such as tables and code, can meaningfully improve downstream task performance.

Key Takeaways

  • Using a single extractor can lead to suboptimal data extraction from web content.
  • Taking a union over multiple extractors can increase the token yield of DCLM-Baseline by up to 71% while maintaining benchmark performance.
  • The choice of extractor significantly impacts performance on structured content tasks.
  • Different extractors yield similar performance on standard tasks but vary in data coverage.
  • Improving extraction methods can enhance the effectiveness of LLMs in real-world applications.

Computer Science > Computation and Language
arXiv:2602.19548 (cs) · [Submitted on 23 Feb 2026]

Title: Beyond a Single Extractor: Re-thinking HTML-to-Text Extraction for LLM Pretraining

Authors: Jeffrey Li, Josh Gardner, Doug Kang, Fangping Shi, Karanjeet Singh, Chun-Liang Li, Herumb Shandilya, David Hall, Oncel Tuzel, Percy Liang, Ludwig Schmidt, Hadi Pour Ansari, Fartash Faghri

Abstract: One of the first pre-processing steps for constructing web-scale LLM pretraining datasets involves extracting text from HTML. Despite the immense diversity of web content, existing open-source datasets predominantly apply a single fixed extractor to all webpages. In this work, we investigate whether this practice leads to suboptimal coverage and utilization of Internet data. We first show that while different extractors may lead to similar model performance on standard language understanding tasks, the pages surviving a fixed filtering pipeline can differ substantially. This suggests a simple intervention: by taking a Union over different extractors, we can increase the token yield of DCLM-Baseline by up to 71% while maintaining benchmark performance. We further show that for structured content such as tables and code blocks, extractor choice can significantly impact downstream task performance, with differences of up...
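The union intervention described in the abstract can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's actual pipeline: the two toy extractors, the word-count filter, and the `union_keep` helper are stand-ins for real production extractors and quality filters. The idea it demonstrates is the one from the paper: a page is kept if its text survives filtering under at least one extractor, rather than under a single fixed extractor.

```python
# Sketch of a "union over extractors" pipeline. The extractors and filter
# here are illustrative stand-ins, not the paper's actual components.
from html.parser import HTMLParser


class AllText(HTMLParser):
    """Naive extractor: keep all visible text on the page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)


class ParagraphText(HTMLParser):
    """Stricter extractor: keep only text inside <p> tags."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.chunks.append(data)


def extract(parser_cls, html):
    """Run one extractor over raw HTML and normalize whitespace."""
    parser = parser_cls()
    parser.feed(html)
    return " ".join(" ".join(parser.chunks).split())


def passes_filter(text, min_words=5):
    # Stand-in for a real quality filter (e.g. length/heuristic filters).
    return len(text.split()) >= min_words


def union_keep(html):
    """Keep the page if ANY extractor's output survives the filter;
    return the longest surviving extraction, else None."""
    candidates = [extract(cls, html) for cls in (AllText, ParagraphText)]
    survivors = [t for t in candidates if passes_filter(t)]
    return max(survivors, key=len) if survivors else None


page = "<nav>home about</nav><p>Large language models need lots of clean web text.</p>"
print(union_keep(page))
```

Note how a page whose main content sits outside `<p>` tags would be dropped by the strict extractor alone but is rescued by the union; this is the mechanism behind the reported yield increase, since different extractors fail on different page layouts.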
