[2602.12284] A Lightweight LLM Framework for Disaster Humanitarian Information Classification

arXiv - Machine Learning · 3 min read

Summary

This paper presents a lightweight framework for classifying humanitarian information from social media, enhancing disaster response efficiency using LLMs with minimal resources.

Why It Matters

Effective disaster response relies on timely information classification. This research addresses the challenges of deploying large language models in resource-constrained environments, offering a practical solution that can significantly improve crisis management efforts.

Key Takeaways

  • Develops a lightweight framework for classifying humanitarian tweets.
  • Achieves 79.62% humanitarian classification accuracy while training only ~2% of parameters via LoRA fine-tuning.
  • QLoRA enables deployment that retains 99.4% of LoRA's performance at 50% of the memory cost.
  • RAG strategies can degrade fine-tuned model performance due to label noise.
  • Establishes a reproducible pipeline for crisis intelligence systems.
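To make the "~2% of parameters" takeaway concrete, here is a back-of-the-envelope sketch of how LoRA's trainable fraction is computed: a rank-r adapter on a (d_out × d_in) weight matrix trains r·(d_out + d_in) parameters instead of d_out·d_in. The layer shapes and rank below are illustrative Llama-style values, not the paper's reported configuration; the overall ~2% figure also depends on which modules receive adapters.

```python
# Trainable-parameter fraction for LoRA adapters (sketch, not the
# paper's exact setup). Each adapted (d_out x d_in) weight gains two
# low-rank factors: B (d_out x r) and A (r x d_in).

def lora_trainable_fraction(layers, rank):
    """layers: list of (d_out, d_in) shapes of weights receiving adapters."""
    full = sum(d_out * d_in for d_out, d_in in layers)
    lora = sum(rank * (d_out + d_in) for d_out, d_in in layers)
    return lora / full

# Hypothetical example: the four attention projections of one
# 4096-wide transformer layer, adapted at rank 16.
layers = [(4096, 4096)] * 4  # q, k, v, o projections
frac = lora_trainable_fraction(layers, rank=16)
print(f"trainable fraction of adapted weights: {frac:.2%}")  # ~0.78%
```

Relative to the adapted weights alone the fraction is under 1% here; counting against the full 8B-parameter model, including unadapted modules, yields figures in the low single-digit percent range consistent with the paper's ~2% claim.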

Computer Science > Computation and Language · arXiv:2602.12284 (cs) · Submitted on 21 Jan 2026

Title: A Lightweight LLM Framework for Disaster Humanitarian Information Classification
Authors: Han Jinzhen, Kim Jisung, Yang Jong Soo, Yun Hong Sik

Abstract: Timely classification of humanitarian information from social media is critical for effective disaster response. However, deploying large language models (LLMs) for this task faces challenges in resource-constrained emergency settings. This paper develops a lightweight, cost-effective framework for disaster tweet classification using parameter-efficient fine-tuning. We construct a unified experimental corpus by integrating and normalizing the HumAID dataset (76,484 tweets across 19 disaster events) into a dual-task benchmark: humanitarian information categorization and event type identification. Through systematic evaluation of prompting strategies, LoRA fine-tuning, and retrieval-augmented generation (RAG) on Llama 3.1 8B, we demonstrate that: (1) LoRA achieves 79.62% humanitarian classification accuracy (+37.79% over zero-shot) while training only ~2% of parameters; (2) QLoRA enables efficient deployment with 99.4% of LoRA performance at 50% memory cost; (3) contrary to common assumptions, RAG strategies degrade fine-tuned model performance due to label noise…
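The abstract's baseline is a prompting strategy over the dual-task benchmark (humanitarian category plus event type). The sketch below shows what such a zero-shot prompt might look like; the label names are representative HumAID-style categories and the prompt wording is an assumption, not the paper's exact template.

```python
# Hypothetical zero-shot prompt builder for the dual-task setup
# (humanitarian category + event type). Labels are illustrative
# HumAID-style names, not necessarily the paper's exact taxonomy.

HUMANITARIAN_LABELS = [
    "requests_or_urgent_needs",
    "rescue_volunteering_or_donation_effort",
    "infrastructure_and_utility_damage",
    "injured_or_dead_people",
    "not_humanitarian",
]
EVENT_TYPES = ["earthquake", "flood", "hurricane", "wildfire"]

def build_prompt(tweet: str) -> str:
    """Assemble a single classification prompt for one tweet."""
    return (
        "Classify the disaster tweet below.\n"
        f"Humanitarian categories: {', '.join(HUMANITARIAN_LABELS)}\n"
        f"Event types: {', '.join(EVENT_TYPES)}\n"
        f"Tweet: {tweet}\n"
        "Answer as: category=<...>; event=<...>"
    )

print(build_prompt("Water rising fast near the bridge, families need evacuation"))
```

A fine-tuned (LoRA/QLoRA) model would instead be trained to emit the two labels directly for each tweet, which is where the reported +37.79% gain over this kind of zero-shot prompting comes from.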


