Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

Beyond Quantum Microtubules: Consciousness as Substrate-Independent Architecture

I uploaded my consciousness paper to Gemini: “Beyond Quantum Microtubules: Consciousness as Substrate-Independent Architecture.” Then I s...

Reddit - Artificial Intelligence · 1 min
The Scaling Bandaid is Wearing Thin (And Nobody Wants to Admit It)

Let me be direct: we’ve hit a wall with scaling, and the entire field is kind of bullshitting about what comes next. I’ve spent enough ti...

Reddit - Artificial Intelligence · 1 min
Moving Past "LLM Vibes" toward Structural Enforcement in AI Agents

We need to address the structural failure currently happening in the AI agent space: too many people are building a beautiful "pedestal" ...

Reddit - Artificial Intelligence · 1 min

All Content

[2603.01761] Modular Memory is the Key to Continual Learning Agents

arXiv - Machine Learning · 4 min
[2603.01759] Meta-Learning Hyperparameters for Parameter Efficient Fine-Tuning

arXiv - Machine Learning · 4 min
[2603.01343] PanCanBench: A Comprehensive Benchmark for Evaluating Large Language Models in Pancreatic Oncology

arXiv - AI · 4 min
[2603.01752] Causal Circuit Tracing Reveals Distinct Computational Architectures in Single-Cell Foundation Models: Inhibitory Dominance, Biological Coherence, and Cross-Model Convergence

arXiv - Machine Learning · 3 min
[2603.01331] MetaState: Persistent Working Memory for Discrete Diffusion Language Models

arXiv - Machine Learning · 4 min
[2603.01692] Reasoning as Gradient: Scaling MLE Agents Beyond Tree Search

arXiv - AI · 4 min
[2603.01254] LLM Self-Explanations Fail Semantic Invariance

arXiv - AI · 3 min
[2603.01252] Linking Knowledge to Care: Knowledge Graph-Augmented Medical Follow-Up Question Generation

arXiv - AI · 3 min
[2603.01589] SafeSci: Safety Evaluation of Large Language Models in Science Domains and Beyond

arXiv - AI · 4 min
[2603.01246] Defensive Refusal Bias: How Safety Alignment Fails Cyber Defenders

arXiv - AI · 4 min
[2603.01239] Self-Anchoring Calibration Drift in Large Language Models: How Multi-Turn Conversations Reshape Model Confidence

arXiv - AI · 4 min
[2603.01563] LFPO: Likelihood-Free Policy Optimization for Masked Diffusion Models

arXiv - Machine Learning · 4 min
[2603.01224] Monocular 3D Object Position Estimation with VLMs for Human-Robot Interaction

arXiv - Machine Learning · 3 min
[2603.01501] GAC: Stabilizing Asynchronous RL Training for LLMs via Gradient Alignment Control

arXiv - Machine Learning · 3 min
[2603.01185] Token-level Data Selection for Safe LLM Fine-tuning

arXiv - AI · 3 min
[2603.01170] ATLAS: AI-Assisted Threat-to-Assertion Learning for System-on-Chip Security Verification

arXiv - AI · 3 min
[2603.01376] 3BASiL: An Algorithmic Framework for Sparse plus Low-Rank Compression of LLMs

arXiv - Machine Learning · 4 min
[2603.01143] TC-SSA: Token Compression via Semantic Slot Aggregation for Gigapixel Pathology Reasoning

arXiv - AI · 4 min
[2603.01131] MedCollab: Causal-Driven Multi-Agent Collaboration for Full-Cycle Clinical Diagnosis via IBIS-Structured Argumentation

arXiv - AI · 4 min
[2603.01124] ClinCoT: Clinical-Aware Visual Chain-of-Thought for Medical Vision Language Models

arXiv - AI · 4 min