Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

AI Can Now Prove Itself: Steerco Launches MCP, Brings Traceable Client Comms Inside ChatGPT, Claude, & Gemini (Free)

Steerco, the Traceable AI category leader for enterprise client communications, announced the general availability of its Model Context P...

AI Tools & Products · 5 min
Not ChatGPT, Not Claude: This AI Platform Ranks Highest For Customer Satisfaction In 2026

Google Gemini ranked highest for customer satisfaction in ACSI’s 2026 AI study, beating Copilot, ChatGPT, Claude, Grok, and Perplexity.

AI Tools & Products · 3 min
Minty Is the First AI-powered Cashback and Rewards Companion to Bring Real-time Cashback Offers to ChatGPT

Minty becomes the first AI-powered cashback companion on ChatGPT, delivering real-time deals and rewards directly within conversations as...

AI Tools & Products · 4 min

All Content

[2603.04413] Simulating Meaning, Nevermore! Introducing ICR: A Semiotic-Hermeneutic Metric for Evaluating Meaning in LLM Text Summaries

Abstract page for arXiv paper 2603.04413: Simulating Meaning, Nevermore! Introducing ICR: A Semiotic-Hermeneutic Metric for Evaluating Me...

arXiv - AI · 4 min
[2603.04411] One Size Does Not Fit All: Token-Wise Adaptive Compression for KV Cache

Abstract page for arXiv paper 2603.04411: One Size Does Not Fit All: Token-Wise Adaptive Compression for KV Cache

arXiv - Machine Learning · 3 min
[2603.04410] SalamahBench: Toward Standardized Safety Evaluation for Arabic Language Models

Abstract page for arXiv paper 2603.04410: SalamahBench: Toward Standardized Safety Evaluation for Arabic Language Models

arXiv - AI · 4 min
[2603.04409] Unpacking Human Preference for LLMs: Demographically Aware Evaluation with the HUMAINE Framework

Abstract page for arXiv paper 2603.04409: Unpacking Human Preference for LLMs: Demographically Aware Evaluation with the HUMAINE Framework

arXiv - AI · 4 min
[2603.04406] CTRL-RAG: Contrastive Likelihood Reward Based Reinforcement Learning for Context-Faithful RAG Models

Abstract page for arXiv paper 2603.04406: CTRL-RAG: Contrastive Likelihood Reward Based Reinforcement Learning for Context-Faithful RAG M...

arXiv - AI · 4 min
[2603.04407] Semantic Containment as a Fundamental Property of Emergent Misalignment

Abstract page for arXiv paper 2603.04407: Semantic Containment as a Fundamental Property of Emergent Misalignment

arXiv - AI · 3 min
[2603.04405] Lost in Translation: How Language Re-Aligns Vision for Cross-Species Pathology

Abstract page for arXiv paper 2603.04405: Lost in Translation: How Language Re-Aligns Vision for Cross-Species Pathology

arXiv - Machine Learning · 4 min
[2603.05498] The Spike, the Sparse and the Sink: Anatomy of Massive Activations and Attention Sinks

Abstract page for arXiv paper 2603.05498: The Spike, the Sparse and the Sink: Anatomy of Massive Activations and Attention Sinks

arXiv - AI · 3 min
[2603.05485] Towards Provably Unbiased LLM Judges via Bias-Bounded Evaluation

Abstract page for arXiv paper 2603.05485: Towards Provably Unbiased LLM Judges via Bias-Bounded Evaluation

arXiv - AI · 3 min
[2603.05399] Judge Reliability Harness: Stress Testing the Reliability of LLM Judges

Abstract page for arXiv paper 2603.05399: Judge Reliability Harness: Stress Testing the Reliability of LLM Judges

arXiv - AI · 3 min
[2603.05392] Legal interpretation and AI: from expert systems to argumentation and LLMs

Abstract page for arXiv paper 2603.05392: Legal interpretation and AI: from expert systems to argumentation and LLMs

arXiv - AI · 3 min
[2603.05294] STRUCTUREDAGENT: Planning with AND/OR Trees for Long-Horizon Web Tasks

Abstract page for arXiv paper 2603.05294: STRUCTUREDAGENT: Planning with AND/OR Trees for Long-Horizon Web Tasks

arXiv - AI · 3 min
[2603.05290] X-RAY: Mapping LLM Reasoning Capability via Formalized and Calibrated Probes

Abstract page for arXiv paper 2603.05290: X-RAY: Mapping LLM Reasoning Capability via Formalized and Calibrated Probes

arXiv - AI · 4 min
[2603.05240] GCAgent: Enhancing Group Chat Communication through Dialogue Agents System

Abstract page for arXiv paper 2603.05240: GCAgent: Enhancing Group Chat Communication through Dialogue Agents System

arXiv - AI · 3 min
[2603.05129] MedCoRAG: Interpretable Hepatology Diagnosis via Hybrid Evidence Retrieval and Multispecialty Consensus

Abstract page for arXiv paper 2603.05129: MedCoRAG: Interpretable Hepatology Diagnosis via Hybrid Evidence Retrieval and Multispecialty C...

arXiv - AI · 4 min
[2603.05120] Bidirectional Curriculum Generation: A Multi-Agent Framework for Data-Efficient Mathematical Reasoning

Abstract page for arXiv paper 2603.05120: Bidirectional Curriculum Generation: A Multi-Agent Framework for Data-Efficient Mathematical Re...

arXiv - AI · 3 min
[2603.05044] WebFactory: Automated Compression of Foundational Language Intelligence into Grounded Web Agents

Abstract page for arXiv paper 2603.05044: WebFactory: Automated Compression of Foundational Language Intelligence into Grounded Web Agents

arXiv - AI · 4 min
[2603.05040] Enhancing Zero-shot Commonsense Reasoning by Integrating Visual Knowledge via Machine Imagination

Abstract page for arXiv paper 2603.05040: Enhancing Zero-shot Commonsense Reasoning by Integrating Visual Knowledge via Machine Imagination

arXiv - AI · 3 min
[2603.05028] Survive at All Costs: Exploring LLM's Risky Behaviors under Survival Pressure

Abstract page for arXiv paper 2603.05028: Survive at All Costs: Exploring LLM's Risky Behaviors under Survival Pressure

arXiv - AI · 4 min
[2603.05016] BioLLMAgent: A Hybrid Framework with Enhanced Structural Interpretability for Simulating Human Decision-Making in Computational Psychiatry

Abstract page for arXiv paper 2603.05016: BioLLMAgent: A Hybrid Framework with Enhanced Structural Interpretability for Simulating Human ...

arXiv - AI · 3 min