Many security and operations teams now spend less time wondering whether agentic AI has a place in production and more time working out how to run it securely at scale. A new research report from Dynatrace examines how large organizations are moving agentic AI from pilot projects to real-world environments, and where those efforts stall.
The report shows agentic AI is already embedded in core business functions, including IT operations, cybersecurity, data processing and customer support. Seventy percent of respondents say they use AI agents in IT operations and systems monitoring, and nearly half run agentic AI in both internal and external use cases.
Budgets reflect this momentum. Most respondents expect spending on agentic AI to increase over the next year, with many organizations already investing between $2 million and $5 million per year. Funding levels closely track use cases related to reliability and operational performance.
Pilots with limited production
Adoption of agentic AI remains uneven, although progress is visible. Half of the organizations surveyed say agentic AI projects are in production for limited use cases, and 44% say the projects are widely adopted within some departments. Most teams manage between two and ten active agentic AI projects.
IT operations, cybersecurity and data processing lead in production readiness. About half of the projects in these areas are either already running or in the process of being operationalized.
The criteria for moving projects forward center on trust and technical performance: data security and privacy come first, followed by the accuracy and reliability of AI outputs. Monitoring and control mechanisms also play a central role, with many teams treating observability as a prerequisite for wider deployment.
Observability gaps slow progress
Technical barriers remain common. Most respondents cite security, privacy or compliance concerns as blockers, and a similar share report difficulty managing and monitoring agents at scale. Limited visibility into agent behavior and trouble tracing the downstream effects of autonomous actions come up repeatedly across regions and sectors.
These problems become more pronounced as systems become more interconnected. Agentic AI systems often coordinate across multiple tools, models, and data sources, increasing the need for real-time insights into decisions and execution paths. Without this visibility, teams struggle to diagnose unexpected behavior or connect technical signals to business outcomes.
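The report does not prescribe a specific toolchain, but the kind of visibility it describes typically comes from distributed tracing. As a minimal sketch, the following Python snippet uses the OpenTelemetry SDK to record one agent step and its tool calls as nested spans; the agent logic, tool names, and attributes are illustrative stand-ins rather than details from the report.

```python
# Minimal sketch: reconstructing an agent's execution path with OpenTelemetry.
# Agent logic, tool names, and attributes are illustrative, not from the report.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to the console; a real deployment would export to a backend.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-demo")

def run_agent_step(task: str) -> str:
    # One parent span per agent decision, one child span per tool call,
    # so the full execution path can be inspected after the fact.
    with tracer.start_as_current_span("agent.step") as step:
        step.set_attribute("agent.task", task)
        with tracer.start_as_current_span("tool.search") as search:
            search.set_attribute("tool.input", task)
            evidence = f"search results for {task!r}"  # stand-in for a real tool
        with tracer.start_as_current_span("tool.summarize"):
            summary = evidence.upper()  # stand-in for a model call
        step.set_attribute("agent.outcome", "completed")
        return summary

print(run_agent_step("investigate disk alerts"))
```

Swapping the console exporter for an OTLP exporter would send the same spans to whichever observability backend a team already runs.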
The report highlights observability as a fundamental control layer. Nearly 70% of respondents already use observability tools when implementing agentic AI, and more than half rely on them during the development and operations phases. Common uses include monitoring training data quality, real-time anomaly detection, validating results, and ensuring compliance.
Humans are still part of the loop
Despite increasing levels of autonomy, human oversight remains common practice. More than two-thirds of agentic AI decisions are currently verified by a person. Data quality checks, human review of results, and drift monitoring are the most widely used validation methods.
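To make the drift-monitoring idea concrete, here is a minimal sketch that flags when a recent window of agent output scores shifts away from a baseline distribution; the scores, window sizes, and threshold are assumptions made for the example, not figures from the report.

```python
# Minimal drift-monitoring sketch: compare recent agent output scores
# against a baseline window. All data and thresholds are illustrative.
from statistics import mean, stdev

def drifted(baseline: list[float], recent: list[float], max_shift: float = 2.0) -> bool:
    # Flag drift when the recent mean sits more than `max_shift` baseline
    # standard deviations away from the baseline mean.
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > max_shift

baseline_scores = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]  # e.g., validator confidence
recent_scores = [0.71, 0.69, 0.74, 0.70]                # post-deployment window
if drifted(baseline_scores, recent_scores):
    print("drift detected: route affected outputs to human review")
```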
Only a small portion of organizations create fully autonomous agents without supervision. Most teams develop a mix of autonomous and human-supervised agents, depending on the task and risk profile. Business-oriented applications tend to include higher levels of human involvement than infrastructure-oriented use cases.
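One way to express that mix in code is a simple risk-based router: low-risk actions run autonomously, while higher-risk ones wait for human sign-off. The sketch below is hypothetical; the risk tiers and the console-prompt approval are placeholders for whatever review workflow a team actually operates.

```python
# Hypothetical sketch: route agent actions by risk profile.
# Risk tiers and the approval mechanism are placeholders, not from the report.
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    risk: str  # "low", "medium", or "high"

def execute(action: AgentAction) -> None:
    print(f"executing: {action.description}")

def approved_by_human(action: AgentAction) -> bool:
    # Stand-in for a real review queue (ticket, chat approval, change board).
    answer = input(f"approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def route(action: AgentAction) -> None:
    if action.risk == "low":
        execute(action)                 # fully autonomous path
    elif approved_by_human(action):     # human-supervised path
        execute(action)
    else:
        print(f"blocked: {action.description}")

route(AgentAction("restart unhealthy pod", risk="low"))
route(AgentAction("rotate production credentials", risk="high"))
```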
Measuring success by reliability
When organizations evaluate the results of agentic AI, reliability and resilience stand out. Sixty percent of respondents say technical performance is their primary indicator of success, with operational efficiency, developer productivity, and customer satisfaction close behind.
Monitoring methods remain mixed. About half rely on logs, metrics and traces, and almost half still manually examine communication flows between agents. Automated anomaly detection and dashboards appear frequently, although many teams continue to combine automated and manual approaches.
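For the automated side, a rolling z-score over per-call metrics is one common technique. The sketch below flags latency spikes in a stream of agent calls; the window size, threshold, and data are illustrative assumptions rather than anything specified in the report.

```python
# Sketch of automated anomaly detection on agent call latencies using a
# rolling z-score. Window, threshold, and data are illustrative.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(latencies_ms, window: int = 20, threshold: float = 3.0):
    recent = deque(maxlen=window)
    for index, value in enumerate(latencies_ms):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield index, value  # candidate anomaly
        recent.append(value)

stream = [120, 115, 130, 125, 118] * 4 + [950]  # one injected spike
for index, value in detect_anomalies(stream):
    print(f"anomalous call #{index}: {value} ms")
```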
Respondents describe success in terms of systems that maintain performance under stress and recover quickly from failures. Given the speed at which errors can propagate between interconnected agents, early detection and rapid response remain central goals.
Scaling with tighter controls
The report frames the next phase of agentic AI adoption around governance and control. Teams emphasize the need for shared, trustworthy signals, standardized metrics, and consistent guardrails to steer autonomous actions. Observability functions as the mechanism that connects these elements throughout the AI lifecycle.
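In practice, a "consistent guardrail" often takes the shape of a single policy layer that every autonomous action must clear before it runs. The sketch below shows one hypothetical form; the policies and action schema are invented for illustration.

```python
# Hypothetical guardrail layer: every agent action passes the same policy
# checks before execution. Policies and action fields are invented examples.
from typing import Callable, Optional

Policy = Callable[[dict], Optional[str]]  # returns a violation message or None

def no_prod_deletes(action: dict) -> Optional[str]:
    if action.get("env") == "prod" and action.get("verb") == "delete":
        return "deletes in prod require a human"
    return None

def limited_blast_radius(action: dict) -> Optional[str]:
    if action.get("targets", 0) > 10:
        return "touches too many resources"
    return None

POLICIES: list[Policy] = [no_prod_deletes, limited_blast_radius]

def guarded(action: dict) -> bool:
    violations = [msg for policy in POLICIES if (msg := policy(action)) is not None]
    for msg in violations:
        print(f"guardrail violation: {msg}")  # could also emit a metric or event
    return not violations

if guarded({"verb": "scale", "env": "prod", "targets": 3}):
    print("action allowed")
```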
“Organizations are slowing adoption not because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as expected in real-world conditions,” said Alois Reitbauer, chief technology strategist at Dynatrace.
Agentic AI deployments expand the operational attack surface and deepen reliance on monitoring, validation, and human oversight. As more projects reach production, trust becomes an operational requirement supported by tools, processes and human judgment working in concert.