Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

LLMs

Beyond Quantum Microtubules: Consciousness as Substrate-Independent Architecture

I uploaded my consciousness paper to Gemini: “Beyond Quantum Microtubules: Consciousness as Substrate-Independent Architecture.” Then I s...

Reddit - Artificial Intelligence · 1 min

The Scaling Bandaid is Wearing Thin (And Nobody Wants to Admit It)

Let me be direct: we’ve hit a wall with scaling, and the entire field is kind of bullshitting about what comes next. I’ve spent enough ti...

Reddit - Artificial Intelligence · 1 min

Moving Past "LLM Vibes" toward Structural Enforcement in AI Agents

We need to address the structural failure currently happening in the AI agent space: too many people are building a beautiful "pedestal" ...

Reddit - Artificial Intelligence · 1 min

All Content

[2603.00541] Spectral Condition for $μ$P under Width-Depth Scaling

arXiv - Machine Learning · 4 min
[2603.00510] What Do Visual Tokens Really Encode? Uncovering Sparsity and Redundancy in Multimodal Large Language Models

arXiv - AI · 4 min
[2603.00501] WirelessAgent++: Automated Agentic Workflow Design and Benchmarking for Wireless Networks

arXiv - AI · 4 min
[2603.00498] Antibody: Strengthening Defense Against Harmful Fine-Tuning for Large Language Models via Attenuating Harmful Gradient Influence

arXiv - Machine Learning · 4 min
[2603.00476] Atomicity for Agents: Exposing, Exploiting, and Mitigating TOCTOU Vulnerabilities in Browser-Use Agents

arXiv - AI · 3 min
[2603.00462] OPGAgent: An Agent for Auditable Dental Panoramic X-ray Interpretation

arXiv - AI · 4 min
[2603.00452] Texterial: A Text-as-Material Interaction Paradigm for LLM-Mediated Writing

arXiv - AI · 3 min
[2603.00454] Rooted Absorbed Prefix Trajectory Balance with Submodular Replay for GFlowNet Training

arXiv - Machine Learning · 3 min
[2603.00433] TAP-SLF: Parameter-Efficient Adaptation of Vision Foundation Models for Multi-Task Ultrasound Image Analysis

arXiv - AI · 4 min
[2603.00429] Personalities at Play: Probing Alignment in AI Teammates

arXiv - AI · 4 min
[2603.00381] Verifier-Bound Communication for LLM Agents: Certified Bounds on Covert Signaling

arXiv - AI · 4 min
[2603.00355] StethoLM: Audio Language Model for Cardiopulmonary Analysis Across Clinical Tasks

arXiv - Machine Learning · 4 min
[2603.00314] When Metrics Disagree: Automatic Similarity vs. LLM-as-a-Judge for Clinical Dialogue Evaluation

arXiv - AI · 4 min
[2603.00270] Transformers Remember First, Forget Last: Dual-Process Interference in LLMs

arXiv - AI · 3 min
[2603.00253] CoPeP: Benchmarking Continual Pretraining for Protein Language Models

arXiv - Machine Learning · 4 min
[2603.00221] A medical coding language model trained on clinical narratives from a population-wide cohort of 1.8 million patients

arXiv - Machine Learning · 4 min
[2603.00214] Agentic Scientific Simulation: Execution-Grounded Model Construction and Reconstruction

arXiv - AI · 4 min
[2603.00206] TACIT Benchmark: A Programmatic Visual Reasoning Benchmark for Generative and Discriminative Models

arXiv - AI · 3 min
[2603.00105] LIDS: LLM Summary Inference Under the Layered Lens

arXiv - Machine Learning · 4 min
[2603.00198] Stateful Token Reduction for Long-Video Hybrid VLMs

arXiv - AI · 3 min
