Large Language Models

GPT, Claude, Gemini, and other LLMs

Top This Week

LLMs

[R] Reference-model-free behavioral discovery of AuditBench model organisms via Probe-Mediated Adaptive Auditing

Anthropic's AuditBench - 56 Llama 3.3 70B models with planted hidden behaviors - their best agent detects the behaviors 10-13% of the tim...

Reddit - Machine Learning · 1 min ·

[P] Dante-2B: I'm training a 2.1B bilingual fully open Italian/English LLM from scratch on 2×H200. Phase 1 done — here's what I've built.

The problem: If you work with Italian text and local models, you know the pain. Every open-source LLM out there treats Italian as an after...

Reddit - Machine Learning · 1 min ·

I have been coding for 11 years and I caught myself completely unable to debug a problem without AI assistance last month. That scared me more than anything I have seen in this industry.

I want to be honest about something that happened to me because I think it is more common than people admit. Last month I hit a bug in a ...

Reddit - Artificial Intelligence · 1 min ·

All Content

[2512.21039] Agentic Multi-Persona Framework for Evidence-Aware Fake News Detection

arXiv - Machine Learning · 3 min ·
[2510.02282] VidGuard-R1: AI-Generated Video Detection and Explanation via Reasoning MLLMs and RL

arXiv - Machine Learning · 4 min ·
[2508.18088] How Quantization Shapes Bias in Large Language Models

arXiv - Machine Learning · 3 min ·
[2508.11847] Dropping Just a Handful of Preferences Can Change Top Large Language Model Rankings

arXiv - Machine Learning · 4 min ·
[2506.08762] EDINET-Bench: Evaluating LLMs on Complex Financial Tasks using Japanese Financial Statements

arXiv - Machine Learning · 4 min ·
[2601.18734] Self-Distilled Reasoner: On-Policy Self-Distillation for Large Language Models

arXiv - Machine Learning · 4 min ·
[2512.07419] Revolutionizing Mixed Precision Quantization: Towards Training-free Automatic Proxy Discovery via Large Language Models

arXiv - Machine Learning · 4 min ·
[2510.17276] Breaking and Fixing Defenses Against Control-Flow Hijacking in Multi-Agent Systems

arXiv - Machine Learning · 4 min ·
[2509.25762] OPPO: Accelerating PPO-based RLHF via Pipeline Overlap

arXiv - Machine Learning · 3 min ·
[2508.02833] TIC-GRPO: Provable and Efficient Optimization for Reinforcement Learning from Human Feedback

arXiv - Machine Learning · 4 min ·
[2506.09016] SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning

arXiv - Machine Learning · 3 min ·
[2505.23648] Continuous Chain of Thought Enables Parallel Exploration and Reasoning

arXiv - Machine Learning · 4 min ·
[2603.05280] Layer by layer, module by module: Choose both for optimal OOD probing of ViT

arXiv - Machine Learning · 3 min ·
[2603.05143] Feature Resemblance: On the Theoretical Understanding of Analogical Reasoning in Transformers

arXiv - Machine Learning · 3 min ·
[2603.05035] Good-Enough LLM Obfuscation (GELO)

arXiv - Machine Learning · 4 min ·
[2603.05026] RepoLaunch: Automating Build&Test Pipeline of Code Repositories on ANY Language and ANY Platform

arXiv - Machine Learning · 3 min ·
[2603.04964] Replaying pre-training data improves fine-tuning

arXiv - Machine Learning · 3 min ·
[2603.04716] SLO-Aware Compute Resource Allocation for Prefill-Decode Disaggregated LLM Inference

arXiv - Machine Learning · 4 min ·
[2603.04480] AbAffinity: A Large Language Model for Predicting Antibody Binding Affinity against SARS-CoV-2

arXiv - Machine Learning · 3 min ·
[2603.04466] Act-Observe-Rewrite: Multimodal Coding Agents as In-Context Policy Learners for Robot Manipulation

arXiv - Machine Learning · 3 min ·