AI Infrastructure

GPUs, training clusters, MLOps, and deployment

Top This Week

OpenAI, not yet public, raises $3B from retail investors in monster $122B fund raise | TechCrunch
AI Infrastructure

OpenAI's latest funding round, led by Amazon, Nvidia, and SoftBank, values the AI lab at $852 billion as it nears an IPO.

TechCrunch - AI · 4 min
Machine Learning

[R] Fine-tuning services report

If you have some data and want to train or run a small custom model but don't have powerful enough hardware for training, fine-tuning ser...

Reddit - Machine Learning · 1 min
Machine Learning

The AI Chip War is Just Getting Started

Everyone talks about AI models, but the real bottleneck might be hardware. According to a recent study by Roots Analysis: AI chip market ...

Reddit - Artificial Intelligence · 1 min

All Content

[2509.22299] HEAPr: Hessian-based Efficient Atomic Expert Pruning in Output Space
LLMs

Abstract page for arXiv paper 2509.22299: HEAPr: Hessian-based Efficient Atomic Expert Pruning in Output Space

arXiv - Machine Learning · 4 min
[2509.02391] Gaming and Cooperation in Federated Learning: What Can Happen and How to Monitor It
AI Infrastructure

Abstract page for arXiv paper 2509.02391: Gaming and Cooperation in Federated Learning: What Can Happen and How to Monitor It

arXiv - Machine Learning · 4 min
[2509.22134] Bridging Draft Policy Misalignment: Group Tree Optimization for Speculative Decoding
LLMs

Abstract page for arXiv paper 2509.22134: Bridging Draft Policy Misalignment: Group Tree Optimization for Speculative Decoding

arXiv - AI · 4 min
[2509.15888] Distribution-Aligned Decoding for Efficient LLM Task Adaptation
LLMs

Abstract page for arXiv paper 2509.15888: Distribution-Aligned Decoding for Efficient LLM Task Adaptation

arXiv - AI · 4 min
[2508.02948] Sample-Efficient Distributionally Robust Multi-Agent Reinforcement Learning via Online Interaction
Machine Learning

Abstract page for arXiv paper 2508.02948: Sample-Efficient Distributionally Robust Multi-Agent Reinforcement Learning via Online Interaction

arXiv - Machine Learning · 3 min
[2509.13574] Dense-Jump Flow Matching with Non-Uniform Time Scheduling for Robotic Policies: Mitigating Multi-Step Inference Degradation
Machine Learning

Abstract page for arXiv paper 2509.13574: Dense-Jump Flow Matching with Non-Uniform Time Scheduling for Robotic Policies: Mitigating Mult...

arXiv - AI · 4 min
[2507.06567] SlimCaching: Edge Caching of Mixture-of-Experts for Distributed Inference
LLMs

Abstract page for arXiv paper 2507.06567: SlimCaching: Edge Caching of Mixture-of-Experts for Distributed Inference

arXiv - Machine Learning · 4 min
[2509.05608] BinaryShield: Cross-Service Threat Intelligence in LLM Services using Privacy-Preserving Fingerprints
LLMs

Abstract page for arXiv paper 2509.05608: BinaryShield: Cross-Service Threat Intelligence in LLM Services using Privacy-Preserving Finger...

arXiv - Machine Learning · 4 min
[2509.04784] Post-training Large Language Models for Diverse High-Quality Responses
LLMs

Abstract page for arXiv paper 2509.04784: Post-training Large Language Models for Diverse High-Quality Responses

arXiv - AI · 3 min
[2508.06526] PiKV: KV Cache Management System for Mixture of Experts
LLMs

Abstract page for arXiv paper 2508.06526: PiKV: KV Cache Management System for Mixture of Experts

arXiv - AI · 4 min
[2506.15307] SecP-Tuning: Efficient Privacy-Preserving Prompt Tuning for Large Language Models via MPC
LLMs

Abstract page for arXiv paper 2506.15307: SecP-Tuning: Efficient Privacy-Preserving Prompt Tuning for Large Language Models via MPC

arXiv - Machine Learning · 4 min
[2508.04663] HierarchicalPrune: Position-Aware Compression for Large-Scale Diffusion Models
Machine Learning

Abstract page for arXiv paper 2508.04663: HierarchicalPrune: Position-Aware Compression for Large-Scale Diffusion Models

arXiv - AI · 4 min
[2506.14003] Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs
LLMs

Abstract page for arXiv paper 2506.14003: Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs

arXiv - Machine Learning · 4 min
[2507.06996] Generating Multi-Table Time Series EHR from Latent Space with Minimal Preprocessing
AI Infrastructure

Abstract page for arXiv paper 2507.06996: Generating Multi-Table Time Series EHR from Latent Space with Minimal Preprocessing

arXiv - Machine Learning · 3 min
[2506.18841] LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning
LLMs

Abstract page for arXiv paper 2506.18841: LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning

arXiv - Machine Learning · 4 min
[2506.18110] RL for Reasoning by Adaptively Revealing Rationales
Machine Learning

Abstract page for arXiv paper 2506.18110: RL for Reasoning by Adaptively Revealing Rationales

arXiv - Machine Learning · 4 min
[2506.10085] VITA: Zero-Shot Value Functions via Test-Time Adaptation of Vision-Language Models
LLMs

Abstract page for arXiv paper 2506.10085: VITA: Zero-Shot Value Functions via Test-Time Adaptation of Vision-Language Models

arXiv - AI · 4 min
[2505.16122] Plan and Budget: Effective and Efficient Test-Time Scaling on Reasoning Large Language Models
LLMs

Abstract page for arXiv paper 2505.16122: Plan and Budget: Effective and Efficient Test-Time Scaling on Reasoning Large Language Models

arXiv - Machine Learning · 4 min
[2506.09427] A High-Quality Dataset and Reliable Evaluation for Interleaved Image-Text Generation
Machine Learning

Abstract page for arXiv paper 2506.09427: A High-Quality Dataset and Reliable Evaluation for Interleaved Image-Text Generation

arXiv - AI · 4 min
[2504.14960] MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Training with Megatron Core
Machine Learning

Abstract page for arXiv paper 2504.14960: MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Tr...

arXiv - Machine Learning · 4 min