I built an LLM proxy that uses differential geometry to detect prompt injection — here’s what actually works (and what doesn’t)
I’ve spent the last few months building Arc Gate, a monitoring proxy for deployed LLMs. The pitch: one URL change, and you get real-time behavioral monitoring, injection blocking, and a dashboard. I want to share what I learned, because most “AI security” tools are vague about their actual performance.

The background

I’m an independent researcher. I published a five-paper series on a second-order Fisher information manifold (H² × H², R = −4) that predicts a phase-transition threshold τ* = √(3/...