[2503.10666] Green Prompting: Characterizing Prompt-driven Energy Costs of LLM Inference


arXiv - Machine Learning 4 min read

About this article

Abstract page for arXiv paper 2503.10666: Green Prompting: Characterizing Prompt-driven Energy Costs of LLM Inference

Computer Science > Computation and Language
arXiv:2503.10666 (cs)
[Submitted on 9 Mar 2025 (v1), last revised 1 Apr 2026 (this version, v3)]

Title: Green Prompting: Characterizing Prompt-driven Energy Costs of LLM Inference
Authors: Marta Adamska, Daria Smirnova, Hamid Nasiri, Zhengxin Yu, Peter Garraghan

Abstract: Large Language Models (LLMs) have become widely used across domains spanning search engines, code generation, and text creation. However, a major concern associated with their adoption is the high cost of inference, which affects both their sustainability and financial feasibility. In this study, we empirically examine how different prompt and response characteristics directly impact LLM inference energy cost. We conduct experiments leveraging three open-source transformer-based LLMs across three task types: question answering, sentiment analysis, and text generation. For each inference, we analyzed prompt and response characteristics (length, semantic meaning, time taken, energy consumption). Our results demonstrate that even when presented with identical tasks, models generate responses with varying characteristics and subsequently exhibit distinct energy consumption patterns. We found that prompt length is less significant than the semantic meaning of the task itself. In addition, we identified specific keyw...
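The per-inference energy accounting the abstract describes can be sketched as power-over-time integration. A minimal sketch, assuming power is polled at a fixed interval during generation (e.g., via NVML on an NVIDIA GPU); the paper's actual measurement setup is not given in the abstract, and the helper names below are illustrative, not from the paper:

```python
from typing import Sequence

def energy_joules(power_watts: Sequence[float], interval_s: float) -> float:
    """Estimate energy (J) from evenly spaced power samples (W) by
    trapezoidal integration of E = integral of P dt."""
    if len(power_watts) < 2:
        return 0.0
    return sum(
        (a + b) / 2.0 * interval_s
        for a, b in zip(power_watts, power_watts[1:])
    )

def joules_per_token(power_watts: Sequence[float], interval_s: float,
                     tokens_generated: int) -> float:
    """Normalize energy by response length, so prompts yielding longer
    responses can be compared on a per-token basis."""
    return energy_joules(power_watts, interval_s) / max(tokens_generated, 1)

# Example: a steady 200 W draw sampled every 0.1 s across 2 s of decoding.
samples = [200.0] * 21                      # 21 samples = 20 intervals of 0.1 s
print(energy_joules(samples, 0.1))          # 400.0 J
print(joules_per_token(samples, 0.1, 100))  # 4.0 J per generated token
```

Normalizing by generated tokens matters here because, as the abstract notes, identical tasks can produce responses of varying length, so raw per-inference joules conflate response verbosity with the task's intrinsic cost.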

Originally published on April 03, 2026. Curated by AI News.

Related Articles


Intel LLM-Scaler vllm-0.14.0-b8.2 released with official Arc Pro B70 support


Reddit - Artificial Intelligence · 1 min ·

What was the biggest thing to happen in the field of AI?

I personally think it’s either AlphaGo or ChatGPT. AlphaGo showed the whole world that AIs can be better than their creators in an area ...

Reddit - Artificial Intelligence · 1 min ·

Training-time intervention yields 63.4% blind-pair human preference at matched val-loss (1.2B params, 320 judgments, p = 1.98 × 10⁻⁵) [R]

TL;DR. I ran a blind A/B preference evaluation between two 1.2B-parameter LMs trained on identical data (same order, same seed, 30K steps...

Reddit - Machine Learning · 1 min ·

I tried Gemini, ChatGPT, and Claude for a month on Android, and I have a clear winner for you

The ultimate Android AI showdown

AI Tools & Products · 5 min ·