Built GPT-2, Llama 3, and DeepSeek from scratch in PyTorch - open source code + book [p]
I spent the past year implementing five LLM architectures from scratch in PyTorch and wrote a book documenting the process.

What's covered:

- Vanilla encoder-decoder transformer (English to Hindi translation)
- GPT-2 (124M), loading real OpenAI pretrained weights
- Llama 3.2-3B, showing the exact 4 component swaps from GPT-2 (RMSNorm, RoPE, SwiGLU, GQA) and loading Meta's pretrained weights (see the RMSNorm and RoPE sketches after this list)
- KV cache mechanics, MQA, GQA (see the decode-step sketch below)
- DeepSeek: Multi-Head Latent Attention with absorption trick and decoupled RoPE, Dee...
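To make the Llama component swaps concrete, here's a minimal sketch of RMSNorm, assuming the standard Llama-style convention (scale-only normalization, no mean subtraction, no bias). The class name and eps default are illustrative, not taken from the repo:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer norm: no mean subtraction, no bias,
    just a learned per-feature scale (as in Llama-style models)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the RMS over the feature dimension, then scale.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)
```

Compared with GPT-2's LayerNorm, this drops the mean-centering and the bias term, which is exactly why it counts as a one-for-one component swap.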
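And a sketch of RoPE, assuming the rotate-halves convention used by Llama-family models (the function name and `base` default are illustrative):

```python
import torch

def apply_rope(x: torch.Tensor, positions: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotary position embeddings: rotate paired feature halves by a
    position-dependent angle. x: (..., seq, head_dim), positions: (seq,)."""
    half = x.shape[-1] // 2
    # One frequency per feature pair, decaying geometrically with index.
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = positions[:, None].float() * freqs[None, :]   # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # 2D rotation applied to each (x1, x2) pair.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```

Because the rotation is applied to queries and keys (not values), attention scores end up depending only on relative position, which is the whole point of the swap away from GPT-2's learned absolute embeddings.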
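For the KV cache + GQA item, here's a minimal sketch of one decode step, assuming a plain tensor cache and `repeat_interleave` to share each KV head across a group of query heads (MQA is just the `n_kv_heads == 1` case). Shapes and names are illustrative:

```python
import torch

def gqa_decode_step(q, k_new, v_new, k_cache, v_cache, n_kv_heads):
    """One decode step of grouped-query attention with a KV cache.
    q:               (batch, n_heads, 1, head_dim)  - query for the new token
    k_new, v_new:    (batch, n_kv_heads, 1, head_dim)
    k_cache, v_cache:(batch, n_kv_heads, t, head_dim) from earlier steps
    """
    # Append the new token's K/V; only n_kv_heads worth of K/V is stored,
    # which is the memory saving over full multi-head attention.
    k_cache = torch.cat([k_cache, k_new], dim=2)
    v_cache = torch.cat([v_cache, v_new], dim=2)

    # Each KV head serves n_heads // n_kv_heads query heads.
    group = q.shape[1] // n_kv_heads
    k = k_cache.repeat_interleave(group, dim=1)
    v = v_cache.repeat_interleave(group, dim=1)

    # Single-token query attends over the whole cache; no causal mask
    # is needed because the cache only contains past positions.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    out = torch.softmax(scores, dim=-1) @ v
    return out, k_cache, v_cache
```

DeepSeek's MLA pushes this further by caching a low-rank latent instead of per-head K/V, but that sketch is more involved, so see the book/repo for the absorption-trick details.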