From Demo to Production: Rethinking Optimized LLM Inference at Scale with llm-d on OCI