[2503.10666] Green Prompting: Characterizing Prompt-driven Energy Costs of LLM Inference
Computer Science > Computation and Language
arXiv:2503.10666 (cs)
[Submitted on 9 Mar 2025 (v1), last revised 1 Apr 2026 (this version, v3)]

Title: Green Prompting: Characterizing Prompt-driven Energy Costs of LLM Inference
Authors: Marta Adamska, Daria Smirnova, Hamid Nasiri, Zhengxin Yu, Peter Garraghan

Abstract: Large Language Models (LLMs) have become widely used across domains spanning search engines, code generation, and text creation. However, a major concern associated with their adoption is the high cost of inference, which affects both their sustainability and their financial feasibility. In this study, we empirically examine how prompt and response characteristics directly impact LLM inference energy cost. We conduct experiments with three open-source transformer-based LLMs across three task types: question answering, sentiment analysis, and text generation. For each inference, we analyze prompt and response characteristics (length, semantic meaning, time taken, energy consumption). Our results demonstrate that even when presented with identical tasks, models generate responses with varying characteristics and subsequently exhibit distinct energy consumption patterns. We find that prompt length is less significant than the semantic meaning of the task itself. In addition, we identify specific keywords...
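The per-inference measurements described above pair a timed generation call with an energy reading. As a minimal sketch (not the paper's actual harness), energy can be estimated by integrating sampled power draw over the inference duration; in a real setup the power samples would come from a GPU telemetry interface such as NVML, while here the sample arrays are assumed inputs:

```python
# Hedged sketch: estimate inference energy (joules) by integrating
# sampled power draw (watts) over time (seconds) with the trapezoidal
# rule. The (timestamp, power) samples are assumed to be collected by
# the caller, e.g. from GPU telemetry polled during generation.

def energy_joules(timestamps, power_watts):
    """Trapezoidal integral of power over time -> energy in joules."""
    if len(timestamps) != len(power_watts) or len(timestamps) < 2:
        raise ValueError("need >= 2 aligned (time, power) samples")
    total = 0.0
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]          # seconds
        avg_p = 0.5 * (power_watts[i] + power_watts[i - 1])  # watts
        total += avg_p * dt
    return total

# A constant 250 W draw over a 4 s inference costs 1000 J.
print(energy_joules([0.0, 2.0, 4.0], [250.0, 250.0, 250.0]))  # 1000.0
```

Comparing this integral across prompts of equal length but different semantic content is one way to surface the task-dependent energy differences the abstract reports.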