Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything | WIRED

The AI lab's Project Glasswing will bring together Apple, Google, and more than 45 other organizations. They'll use the new Claude Mythos...

Wired - AI · 7 min ·

The public needs to control AI-run infrastructure, labor, education, and governance— NOT private actors

A lot of discussion around AI is becoming siloed, and I think that is dangerous. People in AI-focused spaces often talk as if the only qu...

Reddit - Artificial Intelligence · 1 min ·

Agents that write their own code at runtime and vote on capabilities, no human in the loop

hollowOS just hit v4.4 and I added something that I haven’t seen anyone else do. Previous versions gave you an OS for agents: structured ...

Reddit - Artificial Intelligence · 1 min ·

All Content

[2504.04372] Assessing the Impact of Code Changes on the Fault Localizability of Large Language Models

Abstract page for arXiv paper 2504.04372: Assessing the Impact of Code Changes on the Fault Localizability of Large Language Models

arXiv - Machine Learning · 4 min ·
[2602.00485] Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models

Abstract page for arXiv paper 2602.00485: Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models

arXiv - AI · 4 min ·
[2601.03604] Interleaved Tool-Call Reasoning for Protein Function Understanding

Abstract page for arXiv paper 2601.03604: Interleaved Tool-Call Reasoning for Protein Function Understanding

arXiv - AI · 3 min ·
[2512.10534] Achieving Olympia-Level Geometry Large Language Model Agent via Complexity Boosting Reinforcement Learning

Abstract page for arXiv paper 2512.10534: Achieving Olympia-Level Geometry Large Language Model Agent via Complexity Boosting Reinforceme...

arXiv - AI · 4 min ·
[2601.22571] PerfGuard: A Performance-Aware Agent for Visual Content Generation

Abstract page for arXiv paper 2601.22571: PerfGuard: A Performance-Aware Agent for Visual Content Generation

arXiv - AI · 4 min ·
[2512.14106] HydroGEM: A Self Supervised Zero Shot Hybrid TCN Transformer Foundation Model for Continental Scale Streamflow Quality Control

Abstract page for arXiv paper 2512.14106: HydroGEM: A Self Supervised Zero Shot Hybrid TCN Transformer Foundation Model for Continental S...

arXiv - AI · 4 min ·
[2512.07081] ClinNoteAgents: An LLM Multi-Agent System for Predicting and Interpreting Heart Failure 30-Day Readmission from Clinical Notes

Abstract page for arXiv paper 2512.07081: ClinNoteAgents: An LLM Multi-Agent System for Predicting and Interpreting Heart Failure 30-Day ...

arXiv - AI · 4 min ·
[2505.13770] Ice Cream Doesn't Cause Drowning: Benchmarking LLMs Against Statistical Pitfalls in Causal Inference

Abstract page for arXiv paper 2505.13770: Ice Cream Doesn't Cause Drowning: Benchmarking LLMs Against Statistical Pitfalls in Causal Infe...

arXiv - Machine Learning · 4 min ·
[2511.21033] Towards Trustworthy Legal AI through LLM Agents and Formal Reasoning

Abstract page for arXiv paper 2511.21033: Towards Trustworthy Legal AI through LLM Agents and Formal Reasoning

arXiv - AI · 4 min ·
[2511.04439] CoRPO: Adding a Correctness Bias to GRPO Improves Generalization

Abstract page for arXiv paper 2511.04439: CoRPO: Adding a Correctness Bias to GRPO Improves Generalization

arXiv - Machine Learning · 4 min ·
[2510.08966] Beyond Prefixes: Graph-as-Memory Cross-Attention for Knowledge Graph Completion with Large Language Models

Abstract page for arXiv paper 2510.08966: Beyond Prefixes: Graph-as-Memory Cross-Attention for Knowledge Graph Completion with Large Lang...

arXiv - AI · 4 min ·
[2505.04997] Foam-Agent: Towards Automated Intelligent CFD Workflows

Abstract page for arXiv paper 2505.04997: Foam-Agent: Towards Automated Intelligent CFD Workflows

arXiv - AI · 3 min ·
[2503.07928] The StudyChat Dataset: Analyzing Student Dialogues With ChatGPT in an Artificial Intelligence Course

Abstract page for arXiv paper 2503.07928: The StudyChat Dataset: Analyzing Student Dialogues With ChatGPT in an Artificial Intelligence C...

arXiv - AI · 4 min ·
[2603.05500] POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

Abstract page for arXiv paper 2603.05500: POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

arXiv - Machine Learning · 3 min ·
[2603.05494] Censored LLMs as a Natural Testbed for Secret Knowledge Elicitation

Abstract page for arXiv paper 2603.05494: Censored LLMs as a Natural Testbed for Secret Knowledge Elicitation

arXiv - Machine Learning · 4 min ·
[2603.05488] Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought

Abstract page for arXiv paper 2603.05488: Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought

arXiv - Machine Learning · 3 min ·
[2603.05471] Leveraging LLM Parametric Knowledge for Fact Checking without Retrieval

Abstract page for arXiv paper 2603.05471: Leveraging LLM Parametric Knowledge for Fact Checking without Retrieval

arXiv - AI · 4 min ·
[2603.05432] Ensembling Language Models with Sequential Monte Carlo

Abstract page for arXiv paper 2603.05432: Ensembling Language Models with Sequential Monte Carlo

arXiv - Machine Learning · 4 min ·
[2603.05421] MobileFetalCLIP: Selective Repulsive Knowledge Distillation for Mobile Fetal Ultrasound Analysis

Abstract page for arXiv paper 2603.05421: MobileFetalCLIP: Selective Repulsive Knowledge Distillation for Mobile Fetal Ultrasound Analysis

arXiv - Machine Learning · 3 min ·
[2603.05308] Med-V1: Small Language Models for Zero-shot and Scalable Biomedical Evidence Attribution

Abstract page for arXiv paper 2603.05308: Med-V1: Small Language Models for Zero-shot and Scalable Biomedical Evidence Attribution

arXiv - AI · 4 min ·
