Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

The more young people use AI, the more they hate it | The Verge

Caught between fears of job loss and social stigma, Gen Z’s opinions of AI are hitting new lows.

The Verge - AI · 13 min
OpenAI’s new security model is for ‘critical cyber defenders’ only | The Verge

Like Anthropic’s Mythos, GPT-5.5-Cyber will first be released to ‘trusted’ entities.

The Verge - AI · 4 min
Kimi bad at tool calling? [D]

So I've tried using kimi 2.5 in a personal project through AWS Bedrock. For simple tasks it does quite well. But when it comes to tool ca...

Reddit - Machine Learning · 1 min

All Content

[2603.03637] Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions

arXiv - AI · 3 min
[2603.03633] Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

arXiv - AI · 4 min
[2603.04045] Inference-Time Toxicity Mitigation in Protein Language Models

arXiv - AI · 3 min
[2603.03590] Social Norm Reasoning in Multimodal Language Models: An Evaluation

arXiv - AI · 4 min
[2603.03585] Belief-Sim: Towards Belief-Driven Simulation of Demographic Misinformation Susceptibility

arXiv - AI · 3 min
[2603.04028] A Multi-Dimensional Quality Scoring Framework for Decentralized LLM Inference with Proof of Quality

arXiv - AI · 4 min
[2603.03555] Molt Dynamics: Emergent Social Phenomena in Autonomous AI Agent Populations

arXiv - AI · 4 min
[2603.03543] Tucano 2 Cool: Better Open Source LLMs for Portuguese

arXiv - AI · 4 min
[2603.03541] RAG-X: Systematic Diagnosis of Retrieval-Augmented Generation for Medical Question Answering

arXiv - AI · 3 min
[2603.03536] SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems

arXiv - AI · 3 min
[2603.03946] Lang2Str: Two-Stage Crystal Structure Generation with LLMs and Continuous Flow Models

arXiv - Machine Learning · 4 min
[2603.03512] Baseline Performance of AI Tools in Classifying Cognitive Demand of Mathematical Tasks

arXiv - AI · 4 min
[2603.03508] Raising Bars, Not Parameters: LilMoo Compact Language Model for Hindi

arXiv - AI · 3 min
[2603.03805] Relational In-Context Learning via Synthetic Pre-training with Structural Prior

arXiv - AI · 3 min
[2603.03417] Parallel Test-Time Scaling with Multi-Sequence Verifiers

arXiv - AI · 4 min
[2603.03415] Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs

arXiv - AI · 4 min
[2603.03756] MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier

arXiv - Machine Learning · 3 min
[2603.03410] On Google's SynthID-Text LLM Watermarking System: Theoretical Analysis and Empirical Validation

arXiv - AI · 4 min
[2603.03379] MemSifter: Offloading LLM Memory Retrieval via Outcome-Driven Proxy Reasoning

arXiv - AI · 4 min
[2603.03612] Why Are Linear RNNs More Parallelizable?

arXiv - Machine Learning · 4 min
