When AI Is Forced on Compliance: The ECCP as Your Guide | Opinion


Where to start?

I think the answer is simple. You can start with the U.S. Department of Justice’s (DOJ) 2024 Evaluation of Corporate Compliance Programs (ECCP).

The ECCP makes the following clear: prosecutors will evaluate how companies identify, manage and control risks arising from new and emerging technologies, including artificial intelligence, both in business operations and within compliance programs themselves. And although the guidance is written for prosecutors, it is also a practical starting point for any compliance officer handed an AI mandate by management.

Reframing AI as a DOJ Risk Assessment Issue

The first step is to stop treating AI as a technical deployment and start treating it as a risk assessment requirement. The ECCP makes clear that risk assessments must evolve as internal and external risks change, and it specifically identifies AI as a technology demanding proactive analysis. Prosecutors will ask whether the company has assessed the impact AI could have on its ability to comply with criminal laws, whether AI risks have been integrated into enterprise risk management, and whether controls exist to ensure AI is used only for its intended purposes.

For the CCO, this means formally integrating AI use cases into the compliance risk assessment. If AI touches investigations, monitoring, training, third-party due diligence or reporting, it falls squarely within the DOJ’s scrutiny.

Inventory before writing a policy

The ECCP does not reward aspirational policies unsupported by facts. Prosecutors want to understand why a company structured its compliance program the way it did. Before drafting AI governance frameworks, compliance should demand a comprehensive inventory of AI use:

  • What tools are deployed or piloted;
  • Which business functions use them;
  • What data they ingest; and
  • Whether their outputs are advisory or decision-making.

This inventory should explicitly include employee use of generative AI tools. The ECCP focuses on how companies manage internal misuse and unintended consequences of technology, and unmanaged “shadow AI” is now a compliance failure, not an IT inconvenience. A minimal sketch of what one inventory record might capture follows.
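To make the inventory concrete, it helps to keep a structured record per use case. The following is a minimal Python sketch, not a prescribed schema; every class and field name here is an assumption, chosen only to mirror the four questions above.

```python
from dataclasses import dataclass
from enum import Enum


class OutputRole(Enum):
    """Whether a tool's output merely informs a human or drives the decision."""
    ADVISORY = "advisory"
    DECISIONAL = "decisional"


@dataclass
class AIUseCase:
    """One row in the AI inventory; all field names are illustrative."""
    tool_name: str              # the tool deployed or piloted
    status: str                 # "deployed" or "piloted"
    business_function: str      # which business function uses it
    data_ingested: list[str]    # categories of data the tool consumes
    output_role: OutputRole     # advisory vs. decision-making
    sanctioned: bool = True     # False flags potential "shadow AI"


# Hypothetical entry: generative AI use surfaced through an employee survey
shadow_entry = AIUseCase(
    tool_name="public generative AI chatbot",
    status="deployed",
    business_function="marketing copy drafting",
    data_ingested=["internal product descriptions"],
    output_role=OutputRole.ADVISORY,
    sanctioned=False,  # unmanaged use: a compliance finding, not an IT note
)
```

Even a structure this simple turns “do we use AI?” into an answerable, auditable question.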

Focus on decision integrity, not model design

One of the most overlooked ideas of the ECCP is that the DOJ evaluates results and accountability, not technical elegance. When AI is used, prosecutors will ask:

  • What decisions AI influenced;
  • What basis for human judgment existed; and
  • How accountability was assigned and enforced.

Compliance officers should therefore focus governance on decisions, not algorithms. If no one can explain how an AI output was evaluated, overridden or escalated, the company cannot demonstrate that its compliance program works in practice. The ECCP explicitly asks what “baseline of human decision-making” is used to assess AI outputs and how accountability for AI use is monitored and enforced. This translates directly into one of the most ubiquitous phrases in AI: the human in the loop. More than a slogan, it can serve as an internal control in a best-practices compliance program, but the human control must be real, documented and authorized. A sketch of what that documentation could look like follows.
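As an illustration only, here is one way such a record could be kept. This is a sketch under assumed names, not a form the ECCP requires.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class HumanAction(Enum):
    ACCEPTED = "accepted"      # the human agreed with the AI output
    OVERRIDDEN = "overridden"  # the human reversed the AI output
    ESCALATED = "escalated"    # the human referred the matter upward


@dataclass(frozen=True)
class AIDecisionRecord:
    """Evidence that the human in the loop was real, documented and authorized."""
    use_case: str              # ties back to the inventory entry
    ai_output_summary: str     # plain-language summary of the model's output
    reviewer: str              # the accountable human, by role or name
    reviewer_authorized: bool  # did the reviewer have authority to overrule?
    action: HumanAction
    rationale: str             # the basis for the human judgment
    timestamp: datetime


# Hypothetical example: a documented override in third-party due diligence
record = AIDecisionRecord(
    use_case="third-party due diligence screening",
    ai_output_summary="flagged vendor as high risk based on adverse media",
    reviewer="Regional Compliance Lead",
    reviewer_authorized=True,
    action=HumanAction.OVERRIDDEN,
    rationale="adverse media concerned an unrelated entity with a similar name",
    timestamp=datetime.now(timezone.utc),
)
```

Each record captures the three things prosecutors will ask about: the decision, the human basis for judgment, and the assigned accountability.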

Demand explainability for boards and regulators

The DOJ does not expect boards to understand machine-learning architectures. It expects boards to exercise informed oversight. The ECCP repeatedly asks whether compliance can explain risks, controls and failures to senior management and the board of directors. If a compliance officer cannot explain, in plain language, how AI affects compliance decisions, the program is not defensible. Every material AI use case should have a narrative ready for the board:

  • Why AI is used;
  • What risks it creates;
  • Where human judgment intervenes; and
  • How errors are detected and corrected.

This is not optional. Prosecutors will evaluate what information the board and its committees reviewed and how they exercised their oversight. One way to keep these narratives consistent across use cases is sketched below.
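A shared template keeps the four answers uniform from one use case to the next. The sketch below is illustrative Python; the function and parameter names are assumptions, and the structure simply mirrors the four bullets above.

```python
def board_narrative(purpose: str, risks: str,
                    human_checkpoint: str, error_handling: str) -> str:
    """Render a plain-language, four-part story for one material AI use case."""
    return (
        f"Why AI is used: {purpose}\n"
        f"What risks it creates: {risks}\n"
        f"Where human judgment intervenes: {human_checkpoint}\n"
        f"How errors are detected and corrected: {error_handling}"
    )


# Hypothetical hotline-triage use case rendered for a board packet
print(board_narrative(
    purpose="to triage hotline reports by apparent severity",
    risks="mis-prioritization of serious allegations; exposure of sensitive data",
    human_checkpoint="an investigator reviews every triage ranking before assignment",
    error_handling="quarterly sampling of closed reports against triage rankings",
))
```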

Integrate AI governance into existing controls

The ECCP warns against “paper programs.” AI governance therefore cannot sit in a separate policy silo. AI-related controls should plug into existing compliance structures: investigation protocols, reporting mechanisms, training, internal audit and data governance. If AI identifies misconduct, how is it reported? If AI supports investigations, how are its outputs preserved and documented? If AI supports training, how is effectiveness measured? The DOJ will look for consistency in approach, documentation and monitoring, not novelty.

Emphasize resources and authority

The ECCP pays particular attention to whether compliance functions have adequate resources, authority and autonomy. Historically that inquiry has focused on CCOs and compliance staff generally, and it extends naturally to AI: if responsibility for AI governance is assigned to compliance, then compliance must have access to the data, to technical explanations and to escalation authority. Assigning responsibility without resources is, in the DOJ’s framing, evidence of a program that is not implemented in good faith. A forced mandate without funding or authority is itself a compliance risk.

Document the evolution

Finally, compliance officers must document not only their controls but also how those controls evolve. The ECCP repeatedly emphasizes continuous improvement, testing and review. AI governance will not be perfect, and the DOJ does not expect perfection. It looks for evidence that the company identified risks, tested controls, learned from failures and adjusted. Meeting minutes, pilot reviews, risk assessments and corrective actions all matter. What does this look like? Document, document, document. One lightweight way to keep that trail is sketched below.
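As a purely illustrative sketch, even an append-only log can carry this burden. The helper name and entry fields below are assumptions, not a mandated format.

```python
from datetime import date

# Append-only evidence log: each entry pairs a date with a governance artifact,
# so the program's evolution can be shown rather than merely asserted.
evidence_log: list[tuple[date, str, str]] = []


def document(when: date, artifact_type: str, summary: str) -> None:
    """Append one governance artifact; entries are never edited or removed."""
    evidence_log.append((when, artifact_type, summary))


# Hypothetical trail: risk identified, control tested, failure corrected
document(date(2024, 3, 4), "risk assessment", "added generative AI to the annual risk map")
document(date(2024, 6, 17), "pilot review", "due diligence pilot showed a high false-positive rate")
document(date(2024, 9, 2), "corrective action", "added second-level human review after pilot findings")
```

The sequence itself is the point: identified, tested, adjusted, with dates to prove it.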

The essentials

When AI is forced on compliance, resistance may be understandable, but it is ultimately ineffective. The DOJ has already moved beyond the question of whether AI should be governed. The only question that remains is whether the governance is credible. For today’s compliance officer, AI governance is no longer optional, technical or theoretical. It is a live test of whether the compliance program is well designed, adequately empowered and effective in practice.

Finally, if you, as a compliance professional, only hear about your organization’s use of AI when this assignment lands on your desk, you really need a seat at the table.
