Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

[2603.17839] How do LLMs Compute Verbal Confidence
arXiv - AI · 4 min

[2603.15970] 100x Cost & Latency Reduction: Performance Analysis of AI Query Approximation using Lightweight Proxy Models
arXiv - AI · 4 min

[2603.10062] Multi-Agent Memory from a Computer Architecture Perspective: Visions and Challenges Ahead
arXiv - AI · 3 min

All Content

[2603.20314] VGS-Decoding: Visual Grounding Score Guided Decoding for Hallucination Mitigation in Medical VLMs
arXiv - Machine Learning · 3 min

[2603.20219] Thinking into the Future: Latent Lookahead Training for Transformers
arXiv - Machine Learning · 4 min

[2603.20218] An experimental study of KV cache reuse strategies in chunk-level caching systems
arXiv - Machine Learning · 3 min

[2603.20215] Multi-Agent Debate with Memory Masking
arXiv - Machine Learning · 4 min

[2603.20212] Fast-Slow Thinking RM: Efficient Integration of Scalar and Generative Reward Models
arXiv - Machine Learning · 3 min

[2603.20217] Expected Reward Prediction, with Applications to Model Routing
arXiv - Machine Learning · 4 min

[2603.22206] Chimera: Latency- and Performance-Aware Multi-agent Serving for Heterogeneous LLMs
arXiv - Machine Learning · 4 min

[2603.22184] Revisiting Quantum Code Generation: Where Should Domain Knowledge Live?
arXiv - Machine Learning · 4 min

[2603.22161] Causal Evidence that Language Models use Confidence to Drive Behavior
arXiv - Machine Learning · 4 min

[2603.22154] dynActivation: A Trainable Activation Family for Adaptive Nonlinearity
arXiv - Machine Learning · 3 min

[2603.22017] AdditiveLLM2: A Multi-modal Large Language Model for Additive Manufacturing
arXiv - Machine Learning · 3 min

[2603.21972] Demystifying Reinforcement Learning for Long-Horizon Tool-Using Agents: A Comprehensive Recipe
arXiv - Machine Learning · 4 min

[2603.21862] Holistic Scaling Laws for Optimal Mixture-of-Experts Architecture Optimization
arXiv - Machine Learning · 4 min

[2603.21705] Data-Free Layer-Adaptive Merging via Fisher Information for Long-to-Short Reasoning LLMs
arXiv - Machine Learning · 4 min

[2603.21584] SSAM: Singular Subspace Alignment for Merging Multimodal Large Language Models
arXiv - Machine Learning · 4 min

[2603.21567] Kolmogorov Complexity Bounds for LLM Steganography and a Perplexity-Based Detection Proxy
arXiv - Machine Learning · 3 min

[2603.21534] Generalization Limits of In-Context Operator Networks for Higher-Order Partial Differential Equations
arXiv - Machine Learning · 3 min

[2603.21396] Mechanisms of Introspective Awareness
arXiv - Machine Learning · 3 min

[2603.21373] PLR: Plackett-Luce for Reordering In-Context Learning Examples
arXiv - Machine Learning · 3 min

[2603.21365] TIDE: Token-Informed Depth Execution for Per-Token Early Exit in LLM Inference
arXiv - Machine Learning · 4 min
