Machine Learning

ML algorithms, training, and inference

Top This Week

LLMs

[P] Remote sensing foundation models made easy to use.

This project enables the idea of tasking remote sensing models to acquire embeddings like we task satellites to acquire data! https://git...

Reddit - Machine Learning · 1 min ·

Machine Learning

Can AI truly be creative?

AI has no imagination. “Creativity is the ability to generate novel and valuable ideas or works through the exercise of imagination” http...

Reddit - Artificial Intelligence · 1 min ·

Machine Learning

AI video generation seems fundamentally more expensive than text, not just less optimized

There’s been a lot of discussion recently about how expensive AI video generation is compared to text, and it feels like this is more tha...

Reddit - Artificial Intelligence · 1 min ·

All Content

[2402.05122] History of generative Artificial Intelligence (AI) chatbots: past, present, and future development
Machine Learning

arXiv - AI · 4 min ·
[2603.25737] Training the Knowledge Base through Evidence Distillation and Write-Back Enrichment
Machine Learning

arXiv - AI · 3 min ·
[2603.24916] Once-for-All Channel Mixers (HYPERTINYPW): Generative Compression for TinyML
Machine Learning

arXiv - Machine Learning · 4 min ·
[2603.24883] Learning to Staff: Offline Reinforcement Learning and Fine-Tuned LLMs for Warehouse Staffing Optimization
LLMs

arXiv - Machine Learning · 4 min ·
[2603.25720] R-C2: Cycle-Consistent Reinforcement Learning Improves Multimodal Reasoning
Machine Learning

arXiv - AI · 3 min ·
[2603.24844] Reaching Beyond the Mode: RL for Distributional Reasoning in Language Models
LLMs

arXiv - AI · 4 min ·
[2603.25719] Agent Factories for High Level Synthesis: How Far Can General-Purpose Coding Agents Go in Hardware Optimization?
Machine Learning

arXiv - Machine Learning · 4 min ·
[2603.25551] Voxtral TTS
Machine Learning

arXiv - AI · 5 min ·
[2603.25633] Is Mathematical Problem-Solving Expertise in Large Language Models Associated with Assessment Performance?
LLMs

arXiv - AI · 4 min ·
[2603.24828] A Practical Guide Towards Interpreting Time-Series Deep Clinical Predictive Models: A Reproducibility Study
Machine Learning

arXiv - AI · 4 min ·
[2603.25415] Modernising Reinforcement Learning-Based Navigation for Embodied Semantic Scene Graph Generation
Machine Learning

arXiv - AI · 4 min ·
[2603.24790] Local learning for stable backpropagation-free neural network training towards physical learning
Machine Learning

arXiv - Machine Learning · 3 min ·
[2603.25498] EcoThink: A Green Adaptive Inference Framework for Sustainable and Accessible Agents
LLMs

arXiv - AI · 3 min ·
[2603.24780] Transformers in the Dark: Navigating Unknown Search Spaces via Bandit Feedback
LLMs

arXiv - Machine Learning · 4 min ·
[2603.25480] Retraining as Approximate Bayesian Inference
Machine Learning

arXiv - AI · 3 min ·
[2603.24753] Light Cones For Vision: Simple Causal Priors For Visual Hierarchy
Machine Learning

arXiv - Machine Learning · 3 min ·
[2603.25450] Cross-Model Disagreement as a Label-Free Correctness Signal
LLMs

arXiv - AI · 4 min ·
[2603.24744] Contrastive Learning Boosts Deterministic and Generative Models for Weather Data
Machine Learning

arXiv - Machine Learning · 4 min ·
[2603.25412] Beyond Content Safety: Real-Time Monitoring for Reasoning Vulnerabilities in Large Language Models
LLMs

arXiv - AI · 4 min ·
[2603.25379] Does Structured Intent Representation Generalize? A Cross-Language, Cross-Model Empirical Study of 5W3H Prompting
Machine Learning

arXiv - AI · 4 min ·