Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

ChatGPT Images 2.0 is a hit in India, but not a big winner elsewhere, yet | TechCrunch

Users in India are embracing ChatGPT Images 2.0 for creative, personal visuals — from avatars to cinematic portraits.

TechCrunch - AI · 5 min
Cal Poly's ChatGPT: A Resource or a Dependency?

In February 2025, the Cal State system signed a $17 million contract with OpenAI that granted Cal State students, faculty and staff free ...

AI Tools & Products · 6 min
AI Update: Listen All of Y'all It's a Sabotage - What Is Claude 4.6, and Should We Be Concerned?

Anthropic's enterprise-grade generative AI, Claude 4.6, is the newest iteration of the Claude model family that p...

AI Tools & Products · 4 min

All Content

[2603.03000] Why Does RLAIF Work At All?

arXiv - AI · 3 min
[2603.02266] When Scaling Fails: Mitigating Audio Perception Decay of LALMs via Multi-Step Perception-Aware Reasoning

arXiv - AI · 4 min
[2603.02262] Silent Sabotage During Fine-Tuning: Few-Shot Rationale Poisoning of Compact Medical LLMs

arXiv - Machine Learning · 3 min
[2603.02951] CGL: Advancing Continual GUI Learning via Reinforcement Fine-Tuning

arXiv - Machine Learning · 4 min
[2603.02938] Beyond One-Size-Fits-All: Adaptive Subgraph Denoising for Zero-Shot Graph Learning with Large Language Models

arXiv - AI · 4 min
[2603.02913] Eliciting Numerical Predictive Distributions of LLMs Without Autoregression

arXiv - AI · 3 min
[2603.02840] Adapting Time Series Foundation Models through Data Mixtures

arXiv - Machine Learning · 4 min
[2603.02792] From Heuristic Selection to Automated Algorithm Design: LLMs Benefit from Strong Priors

arXiv - Machine Learning · 3 min
[2603.02675] From Shallow to Deep: Pinning Semantic Intent via Causal GRPO

arXiv - Machine Learning · 3 min
[2504.21023] Param$Δ$ for Direct Weight Mixing: Post-Train Large Language Model at Zero Cost

arXiv - AI · 4 min
[2603.03258] Inherited Goal Drift: Contextual Pressure Can Undermine Agentic Goals

arXiv - AI · 4 min
[2603.02635] SaFeR-ToolKit: Structured Reasoning via Virtual Tool Calling for Multimodal Safety

arXiv - Machine Learning · 4 min
[2603.03242] Density-Guided Response Optimization: Community-Grounded Alignment via Implicit Acceptance Signals

arXiv - AI · 4 min
[2603.02630] MASPOB: Bandit-Based Prompt Optimization for Multi-Agent Systems with Graph Neural Networks

arXiv - AI · 4 min
[2603.03233] AI-for-Science Low-code Platform with Bayesian Adversarial Multi-Agent Framework

arXiv - AI · 4 min
[2603.03203] No Memorization, No Detection: Output Distribution-Based Contamination Detection in Small Language Models

arXiv - AI · 3 min
[2603.02604] Heterogeneous Agent Collaborative Reinforcement Learning

arXiv - Machine Learning · 3 min
[2603.03175] Saarthi for AGI: Towards Domain-Specific General Intelligence for Formal Verification

arXiv - AI · 4 min
[2603.03147] Agentic AI-based Coverage Closure for Formal Verification

arXiv - AI · 3 min
[2603.03080] Beyond Factual Correctness: Mitigating Preference-Inconsistent Explanations in Explainable Recommendation

arXiv - AI · 3 min