[P] Trained a small BERT on 276K Kubernetes YAMLs using tree positional encoding instead of sequential
I trained a BERT-style transformer on 276K Kubernetes YAML files, replacing the standard sequential positional encoding with learned tree coordinates (depth, sibling index, node type). The model uses hybrid bigram/trigram prediction targets to learn both universal structure and kind-specific patterns, and it passes 93/93 capability tests. Interesting findings: the learned depth embeddings are nearly orthogonal (categorical rather than smooth like sine/cosine), and 28/48 attention heads specialize in same-depth attention (u...
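To make the tree positional encoding concrete, here is a minimal sketch of the idea, assuming PyTorch; the class and parameter names (`TreePositionalEncoding`, `max_depth`, `max_siblings`, `num_node_types`) are illustrative and not taken from the actual project. Each token gets a (depth, sibling index, node type) triple from the YAML parse, and the three learned embeddings are summed in place of a sequential position embedding:

```python
# Hedged sketch: learned tree coordinates instead of sequential positions.
# All names here are assumptions for illustration, not the author's code.
import torch
import torch.nn as nn

class TreePositionalEncoding(nn.Module):
    def __init__(self, d_model, max_depth=32, max_siblings=128, num_node_types=16):
        super().__init__()
        # One learned embedding table per tree coordinate.
        self.depth_emb = nn.Embedding(max_depth, d_model)
        self.sibling_emb = nn.Embedding(max_siblings, d_model)
        self.type_emb = nn.Embedding(num_node_types, d_model)

    def forward(self, depth, sibling, node_type):
        # Each input: (batch, seq_len) integer tensor derived from the YAML tree.
        # The sum replaces the usual sinusoidal/learned sequential encoding.
        return self.depth_emb(depth) + self.sibling_emb(sibling) + self.type_emb(node_type)

# Usage: add tree coordinates to token embeddings before the encoder stack.
# tok = token_embedding(input_ids)                  # (B, T, d_model)
# pos = tree_pos(depth_ids, sibling_ids, type_ids)  # (B, T, d_model)
# hidden = tok + pos
```

One consequence of this setup is consistent with the finding above: since depth is a lookup into its own embedding table rather than a smooth function of position, nothing forces adjacent depths to be similar, so near-orthogonal (categorical) depth vectors are a plausible outcome.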