[2602.20196] OpenPort Protocol: A Security Governance Specification for AI Agent Tool Access

arXiv - AI 4 min read Article

Summary

The OpenPort Protocol (OPP) is a governance-first specification for AI agent access to application tools, addressing least-privilege authorization, controlled write execution, and auditability.

Why It Matters

As AI agents become more prevalent, ensuring secure and controlled access to application tools is critical. OpenPort Protocol aims to standardize governance practices, enhancing security and operational reliability in AI deployments. This is particularly relevant in contexts where data sensitivity and compliance are paramount.

Key Takeaways

  • OpenPort Protocol provides a structured governance framework for AI agent tool access.
  • It emphasizes least-privilege authorization and controlled execution of actions.
  • The protocol includes features for auditability and incident analysis, enhancing security.
  • Risk-gated lifecycle management for write operations ensures human oversight.
  • Operational requirements like admission control and deterministic recovery paths are defined.
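The risk-gated write lifecycle in the takeaways above (draft creation, human review, idempotent execution) can be illustrated with a minimal Python sketch. The protocol itself is not reproduced here; every name below (`WriteOperation`, `approve`, `execute`) is hypothetical, chosen only to show how draft-by-default and idempotency keys interact.

```python
from dataclasses import dataclass, field
from enum import Enum
import uuid


class Risk(Enum):
    LOW = "low"
    HIGH = "high"


class State(Enum):
    DRAFT = "draft"
    APPROVED = "approved"
    EXECUTED = "executed"


@dataclass
class WriteOperation:
    """A write request that starts in DRAFT state, pending human review."""
    action: str
    risk: Risk
    state: State = State.DRAFT
    # Idempotency key: retries of the same operation never execute twice.
    idempotency_key: str = field(default_factory=lambda: str(uuid.uuid4()))


# Keys of writes that have already been applied.
executed_keys: set = set()


def approve(op: WriteOperation) -> None:
    """A human reviewer signs off on a drafted write."""
    op.state = State.APPROVED


def execute(op: WriteOperation) -> str:
    """Execute only approved writes; duplicates are detected by key."""
    if op.idempotency_key in executed_keys:
        return "duplicate: already executed"
    if op.state is not State.APPROVED:
        raise PermissionError("write must be approved before execution")
    executed_keys.add(op.idempotency_key)
    op.state = State.EXECUTED
    return "executed"
```

The key property is that execution is gated twice: a state check enforces human review, and the idempotency key makes retries safe, matching the draft-by-default and high-risk-safeguard behavior described above.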

Computer Science > Cryptography and Security
arXiv:2602.20196 (cs) [Submitted on 22 Feb 2026]

Title: OpenPort Protocol: A Security Governance Specification for AI Agent Tool Access
Authors: Genliang Zhu, Chu Wang, Ziyuan Wang, Zhida Li, Qiang Li

Abstract: AI agents increasingly require direct, structured access to application data and actions, but production deployments still struggle to express and verify the governance properties that matter in practice: least-privilege authorization, controlled write execution, predictable failure handling, abuse resistance, and auditability. This paper introduces OpenPort Protocol (OPP), a governance-first specification for exposing application tools through a secure server-side gateway that is model- and runtime-neutral and can bind to existing tool ecosystems. OpenPort defines authorization-dependent discovery, stable response envelopes with machine-actionable agent.* reason codes, and an authorization model combining integration credentials, scoped permissions, and ABAC-style policy constraints. For write operations, OpenPort specifies a risk-gated lifecycle that defaults to draft creation and human review, supports time-bounded auto-execution under explicit policy, and enforces high-risk safeguards including preflight impact binding and idempotency. To...
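The "stable response envelope" with machine-actionable agent.* reason codes described in the abstract can be sketched in a few lines of Python. This is an assumption-laden illustration, not the protocol's actual wire format: the `envelope` helper and the specific reason-code strings are hypothetical.

```python
from typing import Any, Optional


def envelope(ok: bool, data: Any = None,
             reason_code: Optional[str] = None,
             detail: Optional[str] = None) -> dict:
    """Wrap every tool response in one stable shape. On failure, a
    machine-actionable reason code in the agent.* namespace tells the
    calling agent how to react (retry, re-authorize, escalate, ...)."""
    if ok:
        return {"ok": True, "data": data}
    return {"ok": False,
            "error": {"reason_code": reason_code, "detail": detail}}


# Hypothetical examples of the two envelope shapes:
success = envelope(True, data={"record_id": 42})
denied = envelope(False,
                  reason_code="agent.permission_denied",
                  detail="scope 'records:write' not granted")
```

Because the envelope shape never varies, an agent can branch on `error["reason_code"]` deterministically instead of parsing free-text error messages.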

Related Articles

[2512.21106] Semantic Refinement with LLMs for Graph Representations
arXiv - Machine Learning · 4 min

[2511.22294] Structure is Supervision: Multiview Masked Autoencoders for Radiology
arXiv - Machine Learning · 4 min

[2511.18123] Bias Is a Subspace, Not a Coordinate: A Geometric Rethinking of Post-hoc Debiasing in Vision-Language Models
arXiv - Machine Learning · 4 min

[2507.14221] Fair Representation in Parliamentary Summaries: Measuring and Mitigating Inclusion Bias
arXiv - Machine Learning · 4 min

