Machine Learning

ML algorithms, training, and inference

Top This Week

LLMs

Associative memory system for LLMs that learns during inference [P]

I've been working on MDA (Modular Dynamic Architecture), an online associative memory system for LLMs. Here's what I learned building it...

Reddit - Machine Learning · 1 min
Machine Learning

A comedian’s strategy for poisoning AI training data

Apparently the best defense against AI copying your voice is strawberry mango forklift supersize fries. submitted by /u/bekircagricelik

Reddit - Artificial Intelligence · 1 min
Machine Learning

Bias in training data on display in weird way

So I was working on this tabletop roleplaying game project, and for my own amusement I told two different video-generating AI models to ge...

Reddit - Artificial Intelligence · 1 min

All Content

[2603.06977] NePPO: Near-Potential Policy Optimization for General-Sum Multi-Agent Reinforcement Learning
Machine Learning

arXiv - AI · 4 min
[2602.04448] RASA: Routing-Aware Safety Alignment for Mixture-of-Experts Models
LLMs

arXiv - AI · 3 min
[2602.01554] InfoTok: Information-Theoretic Regularization for Capacity-Constrained Shared Visual Tokenization in Unified MLLMs
LLMs

arXiv - AI · 4 min
[2601.11609] Auxiliary-predicted Compress Memory Model (ApCM Model): A Neural Memory Storage Model Based on Invertible Compression and Learnable Prediction
LLMs

arXiv - Machine Learning · 3 min
[2601.10940] HOSL: Hybrid-Order Split Learning for Memory-Constrained Edge Training
LLMs

arXiv - Machine Learning · 4 min
[2601.06597] Understanding and inverse design of implicit bias in stochastic learning: a geometric perspective
Machine Learning

arXiv - Machine Learning · 4 min
[2601.03484] From Bits to Chips: An LLM-based Hardware-Aware Quantization Agent for Streamlined Deployment of LLMs
LLMs

arXiv - Machine Learning · 4 min
[2601.01162] Bridging the Semantic Gap for Categorical Data Clustering via Large Language Models
LLMs

arXiv - AI · 4 min
[2512.17051] SFBD-OMNI: Bridge models for lossy measurement restoration with limited clean samples
Machine Learning

arXiv - Machine Learning · 3 min
[2512.14471] Kinetic-Mamba: Mamba-Assisted Predictions of Stiff Chemical Kinetics
Machine Learning

arXiv - Machine Learning · 4 min
[2512.14190] Random-Bridges as Stochastic Transports for Generative Models
Machine Learning

arXiv - Machine Learning · 3 min
[2511.20944] Semantic Superiority vs. Forensic Efficiency: A Comparative Analysis of Deep Learning and Psycholinguistics for Business Email Compromise Detection
Machine Learning

arXiv - Machine Learning · 4 min
[2512.09378] Personalized Federated Distillation Assisted Vehicle Edge Caching Strategy
Machine Learning

arXiv - Machine Learning · 3 min
[2511.17378] A Unified Stability Analysis of SAM vs SGD: Role of Data Coherence and Emergence of Simplicity Bias
Machine Learning

arXiv - Machine Learning · 3 min
[2511.09216] Controllable protein design with particle-based Feynman-Kac steering
Machine Learning

arXiv - Machine Learning · 3 min
[2511.08887] FAST-CAD: A Fairness-Aware Framework for Non-Contact Stroke Diagnosis
Machine Learning

arXiv - AI · 4 min
[2510.26433] Co-Evolving Latent Action World Models
Machine Learning

arXiv - Machine Learning · 4 min
[2510.23448] An Information-Theoretic Analysis of OOD Generalization in Meta-Reinforcement Learning
Machine Learning

arXiv - Machine Learning · 3 min
[2510.22068] Deep Gaussian Processes for Functional Maps
Machine Learning

arXiv - Machine Learning · 3 min
[2510.18814] A Model Can Help Itself: Reward-Free Self-Training for LLM Reasoning
LLMs

arXiv - AI · 4 min