In association with the Deloitte Microsoft Technology Practice
As AI agents increasingly work alongside humans across organizations, companies could be inadvertently opening a new attack surface. Insecure agents can be manipulated to access sensitive systems and proprietary data, increasing enterprise risk.
In some modern enterprises, non-human identities (NHIs) are already multiplying faster than human identities, and that trend will accelerate sharply with agentic AI. Solid governance and a fortified security foundation are therefore critical.
According to the Deloitte AI Institute's 2026 State of AI report, nearly three-quarters (74%) of companies plan to deploy agentic AI within two years. Yet only one in five (21%) reports having a mature governance model for autonomous agents. Executives' top concern is data privacy and security (73%), followed by legal, intellectual property, and regulatory compliance (50%) and, close behind, governance capabilities and oversight (46%).
Enterprises may not even realize they are treating agents within their environment as first-class citizens with the keys to the kingdom, creating looming blind spots and potential points of exposure. What is needed is a robust control plane that governs, observes, and secures how AI agents, as well as their tools and models, operate across the enterprise.
“A control plane is the shared, centralized layer governing who can run which agents, with which permissions, under which policies, and using which models and tools,” according to Andrew Rafla, principal, Deloitte Cyber Practice.
“Without a true control plane, you don’t really have the ability to scale agents autonomously—you just have unmanaged execution, and that comes with a lot of risk,” he says. “If you can’t answer what an agent did, on whose behalf, using what data, under what policy—and whether you can reproduce or stop it—you don’t have a functional control plane.”
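To make the idea concrete, the sketch below shows, in Python, the kind of gate a control plane might put in front of every agent run: the agent, the identity it acts on behalf of, and the tools and model it requests are checked against a policy, and the decision is written to an audit log so the questions Rafla raises—what ran, for whom, with what, under which policy—can be answered later. This is a minimal illustration only; the class names, fields, and policy shape are assumptions for the example, not a description of any Deloitte or Microsoft product.

```python
# Hypothetical control-plane sketch; all names and fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentRequest:
    agent_id: str          # which agent wants to run
    on_behalf_of: str      # the human or service identity it acts for
    requested_tools: set   # tools the agent asks to call
    requested_model: str   # model it asks to use

@dataclass
class Policy:
    allowed_principals: set
    allowed_tools: set
    allowed_models: set

audit_log: list = []

def authorize(request: AgentRequest, policy: Policy) -> bool:
    """Decide whether the agent may run, and record the decision for audit."""
    allowed = (
        request.on_behalf_of in policy.allowed_principals
        and request.requested_tools <= policy.allowed_tools
        and request.requested_model in policy.allowed_models
    )
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": request.agent_id,
        "on_behalf_of": request.on_behalf_of,
        "tools": sorted(request.requested_tools),
        "model": request.requested_model,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

# Example: an agent requesting a tool outside its policy is denied, and the denial is logged.
policy = Policy(
    allowed_principals={"jane.doe"},
    allowed_tools={"search_kb"},
    allowed_models={"approved-model-v1"},
)
request = AgentRequest(
    agent_id="expense-report-agent",
    on_behalf_of="jane.doe",
    requested_tools={"search_kb", "send_payment"},
    requested_model="approved-model-v1",
)
print(authorize(request, policy))        # False: send_payment is not an allowed tool
print(audit_log[-1]["decision"])         # "deny"
```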
Governance must make those answers obvious, not aspirational, he says. It is what turns AI pilots into production use cases: the bridge that lets companies move from impressive experiments to safe, repeatable, enterprise-wide automation.
Without governance, agent deployments don’t fail safely. They fail unpredictably and at scale.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.