Trials and tribulations fine-tuning & deploying Gemma-4 [P]
Hey all, our ML team spent some time this week getting training and deployment working for Gemma-4, and we wanted to document everything we ran into along the way.

**PEFT doesn't recognize Gemma 4's custom layers.** Google wrapped the vision/audio projections in a new `ClippableLinear` class that doesn't inherit from `nn.Linear`, so PEFT refuses to attach LoRA, even for text-only fine-tuning. Fix: unwrap the wrappers after loading the weights but before calling PEFT.

**SFTTrainer killed training silently.** TR...
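The unwrap step could look roughly like the sketch below. It walks the module tree and swaps each wrapper for its inner `nn.Linear` so PEFT's `isinstance` checks pass. Note the `ClippableLinear` stand-in here is a mock with an assumed attribute layout (an `inner` linear plus a clamp), since we only know the class name from the post; adapt the attribute access to the real wrapper.

```python
import torch.nn as nn


# Mock stand-in for the wrapper described in the post; the real class's
# attribute names and clamp behavior are assumptions for illustration.
class ClippableLinear(nn.Module):
    def __init__(self, inner: nn.Linear):
        super().__init__()
        self.inner = inner

    def forward(self, x):
        return self.inner(x).clamp(-10.0, 10.0)


def unwrap_clippable(module: nn.Module) -> None:
    """Recursively replace ClippableLinear wrappers with their inner
    nn.Linear so downstream tooling (e.g. PEFT) sees plain Linear layers."""
    for name, child in module.named_children():
        if isinstance(child, ClippableLinear):
            # setattr on an nn.Module re-registers the submodule in place
            setattr(module, name, child.inner)
        else:
            unwrap_clippable(child)


model = nn.Sequential(ClippableLinear(nn.Linear(4, 4)), nn.ReLU())
unwrap_clippable(model)
print(type(model[0]).__name__)  # → Linear
```

Doing the replacement in place (rather than rebuilding the model) keeps the loaded weights intact, since the wrapper's inner `nn.Linear` is reused as-is; run this after `from_pretrained` and before `get_peft_model`.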