Codex is Open Sourcing AI models

Hugging Face Blog · 13 min read

Published December 11, 2025 · Ben Burtenshaw (burtenshaw), Shaun Smith (evalstate)

Building on our work to get Claude Code to train open source models, we are now getting Codex to go further. We gave Codex access to the Hugging Face Skills repository, which contains skills for machine learning and AI tasks such as training or evaluating models. With HF Skills, a coding agent can:

- Fine-tune and apply RL alignment to language models
- Review, explain, and act on live training metrics from Trackio
- Evaluate checkpoints and act on evaluation results
- Create reports from experiments
- Export models to GGUF and quantize them for local deployment
- Publish models to the Hub

This tutorial dives deeper, showing you how it works and how to use it yourself. So let's get started.

Codex uses AGENTS.md files to accomplish specialized tasks, whilst Claude Code uses "Skills". Fortunately, HF Skills is compatible with both approaches and works with major coding agents like Claude Code, Codex, or Gemini CLI. With HF Skills, you can tell Codex something like:

> Fine-tune Qwen3-0.6B on the dataset open-r1/codeforces-cots

And Codex will:

- Validate your dataset format
- Select appropriate hardware (t4-small for a 0.6B model)
- Use and update a training script with Trackio monitoring
- Submit the job to Hugging Face Jobs
- Report the job ID and estimated cost
- Check on progress when you ask
- Help you debug if something goes wrong
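The first thing the agent does is validate the dataset format. As a rough sketch of what such a check might look like (the helper name and the chat-message schema shown here are assumptions for illustration, not the actual HF Skills implementation):

```python
# Illustrative sketch only: a minimal check that a dataset sample follows the
# chat "messages" format commonly expected by SFT fine-tuning scripts.
# The function name and schema are assumptions, not the HF Skills code.

def looks_like_chat_sample(sample: dict) -> bool:
    """Return True if `sample` has a 'messages' list of {'role', 'content'} dicts."""
    messages = sample.get("messages")
    if not isinstance(messages, list) or not messages:
        return False
    allowed_roles = {"system", "user", "assistant"}
    return all(
        isinstance(m, dict)
        and m.get("role") in allowed_roles
        and isinstance(m.get("content"), str)
        for m in messages
    )

# A well-formed chat sample passes; a flat prompt/completion pair does not.
ok = looks_like_chat_sample(
    {"messages": [{"role": "user", "content": "Hi"},
                  {"role": "assistant", "content": "Hello!"}]}
)
bad = looks_like_chat_sample({"prompt": "Hi", "completion": "Hello!"})
```

A real skill would also sample a few rows from the actual dataset and report *which* field failed, so the agent can suggest a conversion step instead of just rejecting the data.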

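The hardware-selection step above (t4-small for a 0.6B model) suggests a simple size-to-flavor heuristic. A hypothetical sketch, where only the 0.6B-to-t4-small mapping comes from the article and the other thresholds and flavor names are invented for illustration:

```python
# Hypothetical hardware-picking heuristic. Only "t4-small" for a 0.6B model
# comes from the article; the remaining thresholds and flavor names are
# invented and do not reflect the real HF Skills logic.

def pick_flavor(model_params_b: float) -> str:
    """Map a model size in billions of parameters to a job hardware flavor."""
    if model_params_b <= 1.0:
        return "t4-small"    # e.g. Qwen3-0.6B fits comfortably here
    if model_params_b <= 8.0:
        return "a10g-large"  # assumed mid-tier flavor
    return "a100-large"      # assumed flavor for larger models

print(pick_flavor(0.6))  # t4-small
```

In practice the choice also depends on sequence length, batch size, and whether you train with LoRA or full fine-tuning, which is exactly the kind of judgment the agent applies before submitting the job.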