What Cyber Defenders Really Think About AI Risk


There’s an old adage that technology is neither good nor bad; what matters is how it’s used. If ever there was a tool to prove the point, it’s AI. On the one hand, AI allows malicious actors to create new threats and launch attacks at an unprecedented scale. On the other hand, it gives security teams entirely new capabilities to strengthen their cyber defenses.

Our Trend Micro Defenders 2025 survey report examines both aspects of AI. Analyzing more than 3,000 responses from 88 countries, it highlights where AI is keeping cyber defenders up at night and where cybersecurity teams see opportunities to improve their security posture with AI tools.

Here are some of the highlights:

Counterfeits and fraud seen as top AI risks

More than a quarter (26%) of respondents told us that defending against AI-based fraud and identity theft was their highest AI risk priority. Other major areas of focus are preventing attacks on AI applications, preventing data and intellectual property leaks through AI tools, and better understanding employee use of AI solutions, whether sanctioned applications or “shadow AI.”

It is fair to say that all of these priorities reflect a common underlying concern: whether organizations truly understand how AI is being used and whether they are able to respond when it is misused.

The good news is that something can be done. Fifteen percent (15%) of respondents said they are already receiving training and education to raise awareness of AI risks. More than 10% of respondents made blocking unauthorized app use a priority. And 7% focus on preventing too much privileged access to information.

Many are considering other tactics and actions to manage AI responsibly and minimize potential risks.

Defenders fight back

Zero Trust architectures, data security posture management (DSPM), and encryption were all flagged as ways organizations defend against AI-related threats. Proactive testing is unfortunately less common, with only 6% of respondents saying they regularly conduct AI audits or engage red teams to ensure their cyber protections are as effective as possible.

That said, it’s a positive sign that a growing number of organizations appear to be engaging their cybersecurity teams earlier in the AI adoption process, with 23% engaging security at the discovery stage and 25% at the pilot stage.

This “shift left” is encouraging, but there is clearly still work to be done: 17% involve security only during implementation, when it may be too late; 10% are unsure when security comes into play; and 6% say security is not involved at all.

The other obvious area where security teams can win against AI threats is the adoption of AI tools themselves. While this is beginning to happen, a few hurdles still need to be addressed.

Trust in AI for Cybersecurity

Just under 20% of respondents said their organization had not yet started using AI-based cybersecurity tools. About the same proportion said they had nagging concerns about the accuracy and reliability of AI, and slightly fewer cited privacy risks as a reason they might hold back.

Certainly, any team hoping to deploy AI defenses and then head off for an extended coffee break should have reservations. AI tools need to be monitored and trained. But the training is where the payoff lies: the sooner these tools are put to use and start learning, the more effective they become.

Another key to success is ensuring that the use of AI cybersecurity tools and strategies aligns with business needs. Nineteen percent (19%) of respondents said their biggest challenge was identifying relevant and valuable use cases. Doing this requires at least two things: first, having technical and cybersecurity leaders engage in business-focused conversations with executives to uncover where security and business goals overlap; and second, developing a strategic, organization-wide cyber risk management practice that can determine where and how AI tools are needed.

AI risk is at the heart of cyber risk management

In case you missed it, our previous blog on the Trend Micro Defenders 2025 survey report places these AI risk issues in the broader context of overall cyber risk and how organizations are addressing it. We encourage you to read it and, of course, to download the full report.

This year’s results clearly show that AI is, and will continue to be, a key part of cyber risk management. The question is not “Is AI our friend or our enemy?” The question is “Where are the biggest AI risks we face, how can we address them, and how can we use AI to turn the tables on bad actors?”

The answer, as noted in the brief summary of the survey results here, is through greater awareness and training, more mature corporate policies, involving security teams as early as possible in AI adoption, and leveraging the new advanced cybersecurity capabilities that AI has to offer.

In our next blog, we’ll shift our perspective and look at how organizations are perfecting their approach to cloud risk management. Stay tuned.

Next steps

Learn more about ways to manage cloud risks from these additional resources:
