You can’t audit how AI thinks, but you can audit what it does


In this Help Net Security interview, Wade Bicknell, Head of Security and IT Operations at CFA Institute, explains how CISOs can use AI while maintaining security and governance. He discusses why AI presents both defensive opportunities and emerging risks, and how leaders must balance innovation, control, and accountability in cybersecurity.

How should a CISO consider using or guarding against AI/ML systems internally (for fraud detection, threat hunting)?

It’s a two-sided challenge. Many emerging companies are working to integrate AI into their defensive capabilities, from fraud detection to threat hunting, but CISOs must also recognize that adversaries are doing the same.

We’re still in the early days of AI being used offensively, but that stage is quickly approaching. AI is a force multiplier: it can accelerate defense, but it can also amplify malicious creativity when combined with human intent.

For CISOs, this means establishing internal boundaries for experimentation, protecting data fed into AI models, and creating early governance frameworks around its use. In short, we must learn to defend ourselves with AI while defending ourselves against it.

How can an organization ensure that AI tools are auditable, explainable, and robust against adversary attacks?

Traditional auditing and monitoring models assume linear, explainable logic, but AI doesn’t think in straight lines. It’s like entrusting a complex task to an autonomous intern who decides how to complete it. You see the result, but not always the process.

This is the main challenge: we often cannot understand how an AI makes a decision. The best we can do is validate that its results match our intent and remain within our ethical and operational boundaries.

Organizations must combine technical transparency (model documentation, data traceability) with operational oversight: human review boards, AI red teaming, and continuous testing to detect drift or adversarial manipulation.

We may never be able to audit the thought process of AI, but we can and should continually audit its outcomes and impact.
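To make that concrete, here is a rough sketch in Python of what auditing outcomes rather than internals could look like: comparing a model's recent decision rates against an approved baseline. The baseline, tolerance, and metric are illustrative assumptions, not figures from the interview.

```python
# Audit outcomes, not internals: compare recent decision rates to an approved baseline.
BASELINE_FLAG_RATE = 0.02      # historically accepted share of flagged transactions (assumed)
DRIFT_TOLERANCE = 0.01         # assumed tolerance before escalation to human review

def audit_outcomes(decisions: list[bool]) -> dict:
    """decisions: True = flagged by the model. Returns an audit summary for human review."""
    flag_rate = sum(decisions) / len(decisions)
    drifted = abs(flag_rate - BASELINE_FLAG_RATE) > DRIFT_TOLERANCE
    return {"flag_rate": round(flag_rate, 4), "drift_detected": drifted}

# 500 recent decisions, 25 flagged -> 5% flag rate, well above the 2% baseline.
print(audit_outcomes([True] * 25 + [False] * 475))
# {'flag_rate': 0.05, 'drift_detected': True}
```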

What standards or controls should be in place when AI is used in investment operations or member services?

Any organization that allows AI to make investment or client decisions without human oversight accepts significant risk. AI lacks moral and contextual awareness; it does not intuitively understand “do no harm”, “do not mislead” or “act fairly”.

In my own experience using AI for coding and analysis, I have seen how quickly it can “forget” earlier guardrails and revert to prior behavior. This is why, in financial or member service contexts, AI must operate under strict governance, which includes:

  • Human decision-making for all high-impact actions (a simple approval gate is sketched after this list)
  • Defined ethical boundaries for what AI can decide or recommend
  • Bias and performance testing at regular intervals
  • Accountability for AI outcomes, including replacement mechanisms
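As a rough illustration of the first control, the Python sketch below routes any high-impact AI recommendation to a human approver before it executes. The impact score, threshold, and field names are hypothetical; they are not drawn from any actual CFA Institute tooling.

```python
from dataclasses import dataclass

# Hypothetical threshold; the interview does not prescribe a specific value.
HIGH_IMPACT_THRESHOLD = 0.7

@dataclass
class AIRecommendation:
    action: str           # e.g. "rebalance portfolio", "flag account"
    impact_score: float   # 0.0 (trivial) to 1.0 (high impact), assumed scoring
    rationale: str        # model-provided explanation, logged for accountability

def execute(recommendation: AIRecommendation, human_approver=None) -> str:
    """Route high-impact AI recommendations through a human decision-maker."""
    if recommendation.impact_score >= HIGH_IMPACT_THRESHOLD:
        if human_approver is None or not human_approver(recommendation):
            return f"BLOCKED: '{recommendation.action}' requires human approval"
    # Low-impact actions proceed, but every outcome is still recorded for audit.
    return f"EXECUTED: '{recommendation.action}' ({recommendation.rationale})"
```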

We cannot assume that AI understands the intent behind our requests; we must encode that intent and continually check that it is being respected.

How should companies prepare for audits of AI systems when transparency might conflict with intellectual property constraints or model complexity?

This is one of the most overlooked areas of risk. Many organizations don’t understand what their employees are feeding into AI systems: sensitive data, code, or proprietary logic can easily leak during everyday use of generative tools.

Until a major event brings the problem to light, many will underestimate the exposure. The best preparation is proactive:

  • Data classification policies defining what can and cannot enter AI systems (a minimal gate is sketched after this list)
  • Internal model registries to track usage, inputs, ownership, and updates
  • Secure third-party attestations or auditing mechanisms that protect intellectual property while allowing sufficient transparency
  • Employee training on the risks of AI data misuse or leakage
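The Python sketch below illustrates the first control: a classification gate that screens text before it reaches an external generative tool. The patterns and labels are illustrative assumptions, not a complete policy.

```python
import re

# Illustrative patterns only; a real policy would be far more comprehensive.
BLOCKED_PATTERNS = {
    "api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|proprietary)\b"),
}

def classify_for_ai_use(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Block text that matches any restricted pattern."""
    hits = [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]
    return (len(hits) == 0, hits)

# Example: this prompt would be blocked before reaching the generative tool.
allowed, reasons = classify_for_ai_use("Review this confidential pricing model")
print(allowed, reasons)  # False ['internal_marker']
```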

By doing these things and documenting that you do them, you achieve the true purpose of an audit: to prove that controls are in place.

When governance, transparency, and accountability are built into operations, you don’t just prepare for an audit; you make your organization audit-proof by design.

When it comes to anti-money laundering or fraud detection, how can we ensure that explanations do not reveal sensitive data while remaining usable?

When it comes to anti-money laundering and fraud detection, timing is everything. Explanations are only useful if they arrive in time to stop a transaction before it happens. AI will accelerate both payments and fraud, so prevention must move to the forefront of the process.

At the same time, model explainability must be balanced against regulatory and privacy requirements. The most effective approach is tiered disclosure, which provides investigators with enough information (patterns, behavioral anomalies, clusters) to act, but without revealing unnecessary personal or transactional data.
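One way to picture tiered disclosure is the Python sketch below, which exposes only behavioral signals from a hypothetical alert record and withholds personal and transactional detail. Which fields count as investigator-safe would depend on the organization’s regulatory context.

```python
# Hypothetical alert produced by a fraud-detection model (all values illustrative).
alert = {
    "account_id": "ACC-4482913",               # sensitive identifier
    "customer_name": "Jane Doe",                # personal data
    "amount": 9450.00,                          # transactional detail
    "pattern": "structuring_below_threshold",   # behavioral signal
    "anomaly_score": 0.93,                      # model output
    "cluster": "mule-network-17",               # behavioral grouping
}

# Tier 1: what an investigator sees first -- enough to act, no personal data.
INVESTIGATOR_FIELDS = ("pattern", "anomaly_score", "cluster")

def tiered_view(full_alert: dict, fields=INVESTIGATOR_FIELDS) -> dict:
    """Expose only behavioral signals; fuller detail requires escalation."""
    return {k: full_alert[k] for k in fields if k in full_alert}

print(tiered_view(alert))
# {'pattern': 'structuring_below_threshold', 'anomaly_score': 0.93, 'cluster': 'mule-network-17'}
```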

The goal is actionable transparency, giving teams the information they need to act decisively and ethically, without compromising confidentiality or regulatory integrity.
