Architectural Choices in China's Open-Source AI Ecosystem: Building Beyond DeepSeek
By Adina Yakefu (AdinaY) and Irene Solaiman (irenesolaiman), Hugging Face. Published January 27, 2026.

This is the second blog in a three-part series on the Chinese open source community's advancements since January 2025's "DeepSeek Moment." The first blog is available here, and the third blog is available here. In this second piece we turn our focus from models to the architectural and hardware choices Chinese companies have made as openness becomes the norm. For AI researchers and developers contributing to and relying on the open source ecosystem, and for policymakers tracking this rapidly changing environment, architectural preferences, modality diversification, license permissiveness, the popularity of small models, and the growing adoption of Chinese hardware all point to leadership strategies across a multitude of paths. DeepSeek R1's own characteristics inspired overlap and competition, and contributed to a heavier focus on domestic hardware in China.

Mixture of Experts (MoE) as the Default Choice

In the past year, leading models from the Chinese community have almost unanimously moved toward Mixture-of-Experts (MoE) architectures, including Kimi K2, MiniMax M2, and Qwen3. R1 itself was an MoE model, and it proved a crucial point: strong reasoning could be open, reproducible, and engineered in practice. Under China's real-world constraints...
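To ground the term, here is a minimal sketch of top-k MoE routing in PyTorch: a small router scores every expert for each token, and only the k highest-scoring experts actually run. This is why an MoE model can carry a very large total parameter count while activating only a fraction of it per token. The sketch is illustrative only; the `MoELayer` class, its dimensions, and its gating scheme are our own toy assumptions, not the routing used by R1, Kimi K2, MiniMax M2, or Qwen3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    """Toy top-k Mixture-of-Experts feed-forward layer (illustrative only)."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        # Router: one linear score per expert, per token.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Experts: independent MLPs; only k of them run for any given token.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])            # (n_tokens, d_model)
        logits = self.router(tokens)                   # (n_tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)     # top-k experts per token
        weights = F.softmax(weights, dim=-1)           # renormalize the k scores
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            rows, slots = (idx == e).nonzero(as_tuple=True)
            if rows.numel() == 0:                      # no token routed to expert e
                continue
            # Weighted contribution of expert e to the tokens that chose it.
            out[rows] += weights[rows, slots].unsqueeze(-1) * expert(tokens[rows])
        return out.reshape(x.shape)


# 64 experts with k=2 means roughly 2/64 of the expert parameters run per token.
layer = MoELayer(d_model=512, d_hidden=2048, n_experts=64, k=2)
print(layer(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```

The design point this toy makes concrete is the one the ecosystem has converged on: total capacity scales with the number of experts, while per-token compute scales only with k.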