U.S. enterprises across industries are rapidly rolling out autonomous artificial intelligence agents that can move data, call APIs and trigger workflows inside core business systems, but most organizations lack the security visibility and governance needed to manage the risks those systems introduce, according to a new industry benchmark released this month.
The State of Agentic AI Security 2025, published by API security firm Akto, finds that 69% of enterprises are already piloting or running AI agents in early production environments. The data signals that agentic AI has moved beyond experimentation and into day-to-day operations. Yet only 21% of organizations maintain a complete and up-to-date inventory of the agents, tools and connections operating inside their environments. That leaves security teams unable to fully see or control autonomous activity.
Akto based the report on a survey of more than 100 verified security and AI leaders across:
- Technology
- Financial services
- Health care
- Manufacturing
- Telecommunications
The report describes a widening gap between adoption and readiness as enterprises push toward greater automation.
Akto finds security risks in deploying autonomous AI
Companies are increasingly embedding AI agents across engineering, customer support, operations and internal automation. These agents can invoke tools, interact with other agents and execute actions without human intervention. Each of those capabilities, the report notes, expands the enterprise attack surface in ways that traditional security controls were not designed to handle.
The lack of visibility has emerged as the central challenge. According to the findings, 79% of organizations do not have full insight into which agents are active, what permissions they hold or what systems they can access. Without that baseline understanding, security teams cannot reliably assess risks, enforce policies or investigate incidents involving autonomous AI.
“My biggest concerns are visibility and the growing gap between rapid AI development and the security tooling meant to protect it,” said Henri du Plessis, managing security engineer at Toyota Connected North America, in comments included in the report.
Governance shortfalls compound the problem. The survey found that four out of five organizations lack a formal governance policy for AI agents or Model Context Protocol connections. That leaves autonomous systems to operate inside loosely defined or undocumented trust boundaries. The absence of governance frameworks means there are often no consistent standards for agent identity, permissions, approval workflows or monitoring requirements.
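What a consistent baseline could look like in practice is straightforward to sketch. The record below is a hypothetical illustration, not a schema from the report: a minimal per-agent governance entry covering identity, ownership, permissions, approval gates and monitoring, with the inventory itself simply a queryable collection of such records. All names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Minimal governance record for one AI agent (hypothetical schema)."""
    agent_id: str                    # stable identity, not a shared service account
    owner: str                       # accountable human or team
    allowed_tools: set[str] = field(default_factory=set)      # explicit grants
    requires_approval: set[str] = field(default_factory=set)  # human-gated actions
    monitored: bool = True           # must emit logs to central monitoring

# An inventory is then just the set of records security teams can query:
inventory: dict[str, AgentRecord] = {
    "support-triage-bot": AgentRecord(
        agent_id="support-triage-bot",
        owner="customer-support-eng",
        allowed_tools={"read_ticket", "update_ticket"},
        requires_approval={"refund_customer"},
    ),
}
```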
The report also points to a broader shift in how AI-related risk manifests inside enterprises. Earlier security discussions focused on prompt manipulation and model output. Agentic AI changes that dynamic by enabling autonomous execution. Agents can now:
- Modify or delete data
- Trigger multi-step workflows across multiple systems
- Invoke external services
- Escalate privileges through chained actions
As a result, 42% of surveyed practitioners said they lack confidence in their ability to secure agent-to-system interactions, according to the report. The most significant risks no longer stem from what an AI system generates, but from what it is allowed to do, a distinction the sketch below makes concrete.
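In code terms, "what an agent is allowed to do" reduces to an explicit allow-list enforced at the point of tool invocation. The following minimal sketch, using hypothetical agent and tool names rather than anything from the report, shows a dispatcher that refuses any call outside an agent's granted set:

```python
class ToolPermissionError(Exception):
    pass

def dispatch_tool(agent_id: str, tool: str, args: dict,
                  grants: dict[str, set[str]], tools: dict):
    """Execute a tool call only if the agent holds an explicit grant for it."""
    if tool not in grants.get(agent_id, set()):
        raise ToolPermissionError(f"{agent_id} is not granted '{tool}'")
    return tools[tool](**args)

# Example: the reporting agent may read sales data but can never delete records.
grants = {"report-agent": {"query_sales"}}
tools = {
    "query_sales": lambda region: f"sales for {region}",
    "delete_records": lambda table: f"deleted {table}",  # unreachable for this agent
}
print(dispatch_tool("report-agent", "query_sales", {"region": "EMEA"}, grants, tools))
# dispatch_tool("report-agent", "delete_records", ...) would raise ToolPermissionError
```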
What worries executives about autonomous AI
Survey respondents identified their most pressing concerns as:
- Supply-chain vulnerabilities tied to third-party integrations
- Data leakage through autonomous actions
- Uncontrolled agent loops
- Regulatory exposure from opaque decisions
Several of these risks, the report notes, have already surfaced in early enterprise deployments, in some cases before organizations were aware that agents were operating beyond expected boundaries.
Despite the scale of deployment, most enterprises have yet to implement continuous controls. The report found that fewer than half of organizations monitor AI activity end-to-end, and fewer still have runtime guardrails capable of blocking unsafe actions before they occur. One in five organizations reported having no meaningful monitoring or enforcement mechanisms in place.
“Guardrails are essential for agentic AI security,” said Krantikishor Bora, director of information security risk at GoDaddy, in the report. “They must be thoroughly verified, rigorously tested and strictly enforced.”
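A runtime guardrail of the kind Bora describes sits between the agent's decision and its execution. The sketch below is an illustration under assumed policy rules, not a description of any vendor's product: destructive actions are blocked outright, and sensitive ones are routed to a human before they run.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    NEEDS_HUMAN = "needs_human"

# Hypothetical policy: deny destructive actions, gate money movement on approval.
BLOCKED_ACTIONS = {"drop_table", "delete_records"}
HUMAN_GATED_ACTIONS = {"issue_refund", "send_wire"}

def evaluate(action: str, amount: float = 0.0) -> Verdict:
    """Decide, before execution, whether an agent action may proceed."""
    if action in BLOCKED_ACTIONS:
        return Verdict.BLOCK
    if action in HUMAN_GATED_ACTIONS or amount > 1_000:
        return Verdict.NEEDS_HUMAN
    return Verdict.ALLOW

assert evaluate("drop_table") is Verdict.BLOCK
assert evaluate("issue_refund", amount=50) is Verdict.NEEDS_HUMAN
assert evaluate("update_ticket") is Verdict.ALLOW
```

The key design point is that the check runs before the action executes, unlike after-the-fact log review, which can only explain damage already done.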
How security leaders view agentic AI
Looking ahead, security leaders increasingly view agentic AI as an identity, access and action-control challenge rather than a model-level issue. CISOs surveyed said they are prioritizing capabilities such as auditable action logs, strict execution boundaries, sandboxing and continuous monitoring as they prepare for broader deployments in 2026.
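An auditable action log, the first of those capabilities, is simpler than it sounds: an append-only record of every action each agent attempts, written at execution time so investigators can reconstruct what an agent did. A minimal sketch with an invented field schema:

```python
import hashlib
import json
import time

def log_action(log_file, agent_id: str, tool: str, args: dict, outcome: str):
    """Append one tamper-evident entry per agent action (illustrative schema)."""
    entry = {
        "ts": time.time(),
        "agent_id": agent_id,
        "tool": tool,
        # Hash the arguments so the log is reviewable without leaking raw data.
        "args_sha256": hashlib.sha256(
            json.dumps(args, sort_keys=True).encode()
        ).hexdigest(),
        "outcome": outcome,
    }
    log_file.write(json.dumps(entry) + "\n")

with open("agent_audit.log", "a") as f:
    log_action(f, "support-triage-bot", "update_ticket",
               {"ticket": 4821, "status": "resolved"}, "success")
```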
“Confidence in adopting agentic AI securely starts with governance,” said Venkata Phani Patelkhana, a technical software architect at Dell, according to the report.
The findings suggest that enterprises moving quickly without inventories, governance frameworks or real-time controls risk allowing autonomous systems to operate beyond the line of sight of security teams. As agentic AI becomes embedded across business operations, the report concludes, organizations will need to treat agents not merely as users of systems, but as systems themselves—requiring the same level of oversight, control, and accountability.
