Darktrace highlights growing unease as AI agents operate with access to critical data and processes


Darktrace announced the findings of its State of AI Cybersecurity Report 2026, which examines how the rapid adoption of AI is changing the nature of cyber risk for global businesses.

The findings highlight growing concern among cybersecurity professionals about the rise of agentic AI in their organizations, with more than three-quarters (76%) of security professionals surveyed worried about the security implications of integrating AI agents into their organization. This concern is particularly acute at senior levels, with nearly half of security leaders (47%) saying they are very concerned that AI agents are increasingly operating with direct access to sensitive data and critical business processes. At the same time, 97% of security leaders agree that AI integrated into their own security stack significantly strengthens their ability to defend against malicious actors.

Security experts cite agents’ access to sensitive and proprietary data, their ability to interact directly with critical systems, and the lack of mature governance around their use as the primary drivers of concern. Data exposure was identified as the top risk (61%), followed by potential violations of data security and privacy regulations (56%) and misuse or abuse of AI tools (51%). Despite growing awareness of AI risks, only 37% of organizations surveyed have a formal policy for deploying AI securely, a drop of 8 percentage points from last year’s report.

“Businesses are rapidly adopting AI, and while AI tools help security teams better defend against attacks, agentic AI introduces a new class of insider risk,” said Issy Richards, VP of Product at Darktrace. “These systems can act on an employee’s behalf – accessing sensitive data and triggering business processes – without human context or accountability. Our research shows that security leaders are already concerned, and this cannot be treated as an afterthought. If AI agents operate within your organization, their governance, access controls, and oversight are the board’s responsibility, not just a technical one.”

Beyond the risks posed by AI agents themselves, the 2026 State of AI Cybersecurity Report highlights growing concern that bad actors are using AI to accelerate and intensify cyberattacks. Nearly three-quarters (73%) of security professionals say AI-based threats are already having a significant impact on their organization. Additionally, 87% say AI significantly increases the volume of attacks they face, while 89% say AI makes attacks more sophisticated overall.

Additionally, 91% of professionals note that AI makes phishing and other social engineering attacks more sophisticated and effective. Hyper-personalized phishing emerges as the highest risk AI-based attack (50%), closely followed by automated vulnerability scanning (45%), adaptive malware (40%), and deepfake voice fraud (39%).

Despite widespread recognition of these threats, almost half (46%) admit they do not feel prepared to defend against AI-based attacks, almost unchanged from 45% twelve months ago. At the same time, 92% say these threats are prompting major improvements to their defenses.

As cyber threats accelerate and IT environments become more complex, security teams are increasingly turning to AI as a critical weapon in the fight against cybercrime. More than three-quarters (77%) of security professionals said generative AI is now integrated into their security stack and almost all (96%) said AI significantly increases the speed and efficiency of their work.

Security teams say AI provides its greatest value where human analysts struggle most: detecting new threats and quickly identifying anomalies, with 72% of professionals citing this as the area where AI is having the greatest impact.

Many organizations are already moving beyond AI that simply recommends and toward AI that can act within defined guardrails. In the security operations center, 14% say they allow AI to act independently, while another 70% allow AI to act with human approval; only 13% limit AI to recommendations.

Responding to the trends reflected in the survey results and building on its self-learning AI approach to protecting organizations, Darktrace today also unveiled its latest offering, Darktrace/SECURE AI. It is designed to give security teams visibility and control over how AI tools and agents are used, what data and systems they can access, and how they behave within the organization. The technology enables security teams not only to manage AI use but to securely enable it at scale across the entire enterprise.

Darktrace data shows that AI adoption is already creating new visibility gaps across the enterprise. In October, Darktrace observed a 39% month-over-month increase in anomalous data uploads to generative AI services. The average anomalous upload was 75 MB, equivalent to approximately 4,700 pages of documents, significantly increasing the risk of sensitive data slipping out of the organization’s control.

“As AI becomes integrated into core business operations, many organizations are developing a dangerous blind spot,” Richards added. “They no longer have clear visibility into what AI systems can access or how they behave within the enterprise. Darktrace/SECURE AI is not about slowing AI adoption, it’s about giving leaders the visibility and control they need to deploy AI safely, responsibly, and at scale.”
