[2407.16893] The Price of Prompting: Profiling Energy Use in Large Language Models Inference

arXiv - AI 4 min read

About this article


Computer Science > Computers and Society

arXiv:2407.16893 (cs) · Submitted on 4 Jul 2024 (v1), last revised 3 Mar 2026 (this version, v2)

Title: The Price of Prompting: Profiling Energy Use in Large Language Models Inference

Authors: Erik Johannes Husom, Arda Goknil, Lwin Khin Shar, Sagar Sen

Abstract: In the rapidly evolving realm of artificial intelligence, deploying large language models (LLMs) poses increasingly pressing computational and environmental challenges. This paper introduces MELODI - Monitoring Energy Levels and Optimization for Data-driven Inference - a multifaceted framework crafted to monitor and analyze the energy consumed during LLM inference processes. MELODI enables detailed observations of power consumption dynamics and facilitates the creation of a comprehensive dataset reflective of energy efficiency across varied deployment scenarios. The dataset, generated using MELODI, encompasses a broad spectrum of LLM deployment frameworks, multiple language models, and extensive prompt datasets, enabling a comparative analysis of energy use. Using the dataset, we investigate how prompt attributes, including length and complexity, correlate with energy expenditure. Our findings indicate substantial disparities in energy efficiency, suggesting ample scope for optimization and adopti...
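The abstract describes two steps that the paper's dataset supports: estimating energy per inference from sampled power draw, and correlating that energy with prompt attributes such as length. The sketch below is a hypothetical illustration of those two steps, not MELODI's actual implementation; the sample values and the fixed sampling interval are invented for the example.

```python
# Hypothetical sketch (not MELODI's implementation): integrate sampled power
# draw (watts) over an inference window to estimate energy per prompt, then
# correlate energy with prompt length. All numbers below are synthetic.
import statistics


def energy_joules(power_samples_w, interval_s):
    """Trapezoidal integration of power samples taken at a fixed interval."""
    if len(power_samples_w) < 2:
        return 0.0
    return sum((a + b) / 2 * interval_s
               for a, b in zip(power_samples_w, power_samples_w[1:]))


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Synthetic records: (prompt length in tokens, measured energy in joules).
# Longer prompts run longer, so more power samples accumulate per inference.
records = [
    (12, energy_joules([45, 50, 48], 0.5)),
    (64, energy_joules([45, 55, 60, 58, 52], 0.5)),
    (256, energy_joules([45, 60, 70, 72, 71, 68, 55], 0.5)),
]
lengths = [length for length, _ in records]
energies = [energy for _, energy in records]
print(f"correlation(length, energy) = {pearson(lengths, energies):.3f}")
```

In practice the power samples would come from a hardware counter (e.g. a GPU or RAPL power reading polled during inference) rather than hard-coded lists, and the correlation would be computed over thousands of prompts, but the shape of the analysis is the same.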

Originally published on March 04, 2026. Curated by AI News.

Related Articles

Llms

Have Companies Begun Adopting Claude Co-Work at an Enterprise Level?

Hi Guys, My company is considering purchasing the Claude Enterprise plan. The main two constraints are: - Being able to block usage of Cl...

Reddit - Artificial Intelligence · 1 min ·
Llms

What I learned about multi-agent coordination running 9 specialized Claude agents

I've been experimenting with multi-agent AI systems and ended up building something more ambitious than I originally planned: a fully ope...

Reddit - Artificial Intelligence · 1 min ·
Llms

[D] The problem with comparing AI memory system benchmarks — different evaluation methods make scores meaningless

I've been reviewing how various AI memory systems evaluate their performance and noticed a fundamental issue with cross-system comparison...

Reddit - Machine Learning · 1 min ·
Llms

Shifting to AI model customization is an architectural imperative | MIT Technology Review

In the early days of large language models (LLMs), we grew accustomed to massive 10x jumps in reasoning and coding capability with every ...

MIT Technology Review · 6 min ·

