The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), in collaboration with federal and international partners, published joint cybersecurity guidance on Wednesday for critical infrastructure owners and operators integrating AI (artificial intelligence) into their OT (operational technology) systems. The document outlines four key principles owners and operators can follow to realize the benefits of integrating AI into OT systems while reducing risk. It focuses on machine learning, large language model-based AI, and AI agents because of the complex security considerations and challenges they pose, though the guidance also applies to systems augmented with traditional statistical modeling and logic-based automation.
The document, ‘Principles for the Secure Integration of Artificial Intelligence in Operational Technology,’ emphasizes educating personnel on AI risks and secure development lifecycles, evaluating business cases for AI adoption, and addressing both immediate and long-term data security risks in OT environments. Additionally, organizations are urged to implement comprehensive governance frameworks to ensure regulatory compliance and continuous testing of AI models. Finally, the document stresses the need for ongoing oversight, transparency, and the integration of AI into incident response plans to safeguard safety and security.
The Purdue Model remains a widely accepted framework for understanding the hierarchical relationships between OT and IT devices and networks, and the guidance maps examples of established and potential AI applications in critical infrastructure onto it. ML techniques, such as predictive models, are typically used in the operational layers (Levels 0–3), while LLMs are typically used in the business context (Levels 4–5), potentially on data exported from the OT network.
Level 0 covers field devices such as sensors, actuators, and other components that interact directly with physical processes. These devices generate OT data that can be used to train AI models, particularly predictive machine learning models, or to flag significant deviations that may signal anomalies or emerging issues.
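To make that deviation-flagging idea concrete, the sketch below applies a rolling z-score check to a stream of readings from a single sensor. It is a minimal sketch, assuming one numeric feed; the window size, threshold, and sample values are illustrative assumptions, not drawn from the guidance.

```python
# Minimal sketch: flag Level 0 sensor readings that deviate sharply from
# recent history. The window size and z-score threshold are illustrative
# assumptions; a real deployment would tune them per process.
from collections import deque
from statistics import mean, stdev

def make_deviation_flagger(window: int = 50, z_threshold: float = 3.0):
    history = deque(maxlen=window)

    def check(reading: float) -> bool:
        """Return True when a reading looks anomalous versus recent history."""
        anomalous = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(reading - mu) / sigma > z_threshold
        history.append(reading)
        return anomalous

    return check

# Hypothetical temperature feed: the final reading should be flagged.
flag = make_deviation_flagger()
for value in (21.2, 21.4, 21.3, 21.5, 98.7):
    if flag(value):
        print(f"Deviation flagged: {value}")
```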
Level 1 includes local controllers, which are systems designed to provide automated regulation for a process, cell, or production line. This category includes devices such as programmable logic controllers and remote terminal units. Some modern PLCs and edge controllers can run lightweight, pre-trained predictive models that support tasks like local anomaly detection, load balancing, and maintaining a known safe state.
Level 2 covers local supervisory systems that provide observation and managerial oversight for a specific process, line, or cell. This includes SCADA systems, distributed control systems, and human-machine interfaces. AI models, largely predictive machine learning models, can analyze data from these supervisory systems to pick up early signs of equipment anomalies and notify operators when corrective action may be needed.
Level 3 involves sitewide supervisory systems that provide monitoring, oversight, and operational support across an entire facility or major sections of it. This includes manufacturing execution systems and historians. AI models, most often predictive machine learning models, can analyze aggregated historian data to anticipate maintenance needs and help plan repairs before failures occur. These models can also be integrated into local supervisory tools to offer system recommendations that support operator decision-making, including guidance on operational performance and measurements.
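As a toy illustration of that predictive-maintenance pattern, the sketch below fits an ordinary least-squares trend to a degradation indicator exported from a historian and projects when it will cross an alarm threshold; the vibration figures, units, and threshold are hypothetical assumptions.

```python
# Toy sketch: project when a degradation indicator pulled from historian
# records will cross an alarm threshold, using an ordinary least-squares
# trend. The vibration values, units, and threshold are hypothetical.

def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

days = [0, 1, 2, 3, 4, 5, 6]                     # hypothetical sample times
vibration = [2.1, 2.2, 2.4, 2.5, 2.7, 2.8, 3.0]  # hypothetical mm/s readings
ALARM = 4.5                                      # illustrative alarm level

slope, intercept = least_squares(days, vibration)
if slope > 0:
    days_to_alarm = (ALARM - intercept) / slope
    print(f"Projected to reach alarm level around day {days_to_alarm:.1f}; "
          "plan maintenance before then.")
```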
Levels 4 and 5 refer to enterprise and business networks, which include the IT systems responsible for managing corporate processes and decision-making. In critical infrastructure settings, this can involve OT data analysis and autonomous defense capabilities that span both OT and IT environments. AI systems, including agents and large language models, can be applied to improve business workflows, especially where engineering needs intersect with broader business objectives. AI can also analyze OT data alongside IT data to measure operations, detect anomalies and threats, identify hardening opportunities, and generate insight that helps organizations prioritize resiliency decisions.
Principle 1 focuses on understanding AI and its implications for operational technology. This section explains the unique risks that come with integrating AI into OT environments and the potential impact on operations. The document outlines key known risks that critical infrastructure owners and operators should factor into their planning. The list is not comprehensive, and organizations are encouraged to assess risks specific to their own environments. Later sections of the guidance address ways to mitigate these risks, with cross-references provided under Mitigations.
Principle 2 urges organizations to assess how AI fits within the OT domain. Before bringing any AI system into an OT environment, critical infrastructure owners and operators should determine whether AI is truly the right approach for their operational needs and whether it offers advantages over other available technologies. They should also consider whether an established capability meets their needs before pursuing more complex and novel AI-enabled solutions.
While AI comes with unique benefits, it is an evolving technology that requires continuous evaluation of risks. Depending on the specific application, this assessment should weigh factors including security, performance, complexity, cost, and effects on OT-environment safety, and measure the benefits and risks of the AI technologies against the functional requirements the application must meet.
Critical infrastructure owners and operators should understand the organization’s current capacity for maintaining an AI system in their OT environment, as well as the potential impact of expanding the environment’s risk surface, such as the need for additional hardware and software to process data through models or additional security infrastructure to protect the expanded attack surface. If the assessment indicates an AI system is the best solution, then critical infrastructure owners and operators should follow the secure AI system development lifecycle outlined in the guidance and consult AI risk management frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, to help ensure the system is used safely and securely.
The guidance also recognizes that OT vendors play a central role in shaping how AI enters OT environments. Some OT devices now ship with built-in AI capabilities, and in certain cases those features require internet connectivity to operate. Vendors are moving in two major directions. One is operator-facing AI, where AI capabilities are integrated directly into devices, such as models that predict grid frequency dynamics. The other is the rise of intelligent devices that can use AI to support engineering functions and modify aspects of control.
Critical infrastructure owners and operators should require clear transparency and strong security commitments from vendors about how AI is embedded in their products. This includes negotiating contracts that spell out AI features and functionality, and requiring vendors to explain how AI is incorporated into their products, supported by a software bill of materials and visibility into the supply chain for the models they use. Vendors should also notify operators if they discover that an AI feature can deliver improper guidance or take an inappropriate action.
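On the SBOM point, one format capable of describing embedded models is CycloneDX, whose 1.5 specification added a machine-learning-model component type. The fragment below is a minimal, illustrative sketch in that spirit; the supplier, model name, and versions are hypothetical, and a real SBOM would carry far more detail.

```python
# Illustrative sketch: a minimal CycloneDX-style SBOM fragment declaring an
# embedded ML model as a component. CycloneDX 1.5 added a
# "machine-learning-model" component type; the supplier, model name, and
# versions here are hypothetical, and real SBOMs carry many more fields.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",    # the embedded AI feature
            "name": "grid-frequency-predictor",  # hypothetical model name
            "version": "2.3.0",
            "supplier": {"name": "Example OT Vendor"},
        }
    ],
}
print(json.dumps(sbom, indent=2))
```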
Operators may not want vendors training AI systems on operational data, since that data may involve intellectual property or other sensitive information, so a data usage policy should govern residency, communication paths, encryption, and storage. Buyers should also ask whether the product can operate on premises or without constant access to the vendor’s cloud. Finally, operators should define when and how specific AI features can be enabled or disabled. Taking these steps gives organizations greater control and helps them manage the risks that come with embedded AI in OT systems.
The third principle holds that effective governance structures are essential for the safe and secure integration of AI into OT environments. This involves establishing clear policies, procedures, and accountability structures for AI decision-making processes within OT. An AI governance structure should include the key stakeholders, as well as any AI vendors needed for maintaining oversight during procurement, development, design, deployment, and operations.
Key stakeholders play distinct roles in shaping effective AI governance. Senior leadership, including the CEO and CISO, must commit to the effort, since their support is essential for building a strong governance framework and ensuring that AI security risks and mitigations are considered alongside functionality. OT, IT, and AI subject matter experts also need to be involved, as their understanding of the operational environment helps surface risks and integration challenges that may otherwise be missed.
Cybersecurity teams add another layer of protection by developing policies and procedures that safeguard sensitive OT data used by AI models, identifying vulnerabilities, and recommending mitigation measures to keep the organization’s systems and information secure.
Principle 4 focuses on embedding strong oversight and failsafe practices into AI and AI-enabled OT systems. Human responsibility remains central to functional safety, and the tools created, including AI, must support effective oversight and reliable fail-safe behavior. This principle stresses the need to design AI systems that can be monitored, checked, and corrected when necessary. The guidance builds on this by outlining how organizations should establish monitoring and oversight mechanisms for AI used in OT environments, ensuring operators retain visibility and control as these systems evolve.
Critical infrastructure owners and operators should implement oversight of AI-enabled OT systems by taking inventory of all AI components, as well as any other components that rely on the AI. They should log and monitor the inputs and outputs of these components, and establish and maintain a known good state, or thresholds for safe behavior, in the OT environment so they know when maintenance is needed or when to restore from a backup.
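A minimal sketch of that oversight pattern follows, assuming a hypothetical predict() inference call and an illustrative safe operating envelope: every input and output is logged, and any output that falls outside the known good thresholds triggers a warning and a fallback to manual handling.

```python
# Minimal sketch: log every AI component input and output, and check each
# output against a known good envelope. The predict() stub and the
# threshold values are illustrative assumptions, not from the guidance.
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-oversight")

SAFE_ENVELOPE = (0.0, 100.0)  # illustrative known good output range

def predict(inputs: dict) -> float:
    """Stand-in for a deployed model; replace with the real inference call."""
    return inputs["flow_rate"] * 0.8

def supervised_predict(inputs: dict) -> Optional[float]:
    log.info("model input: %s", inputs)
    output = predict(inputs)
    log.info("model output: %s", output)
    low, high = SAFE_ENVELOPE
    if not low <= output <= high:
        # Outside the known good state: warn and fall back to manual control.
        log.warning("output %s outside safe envelope %s", output, SAFE_ENVELOPE)
        return None
    return output

result = supervised_predict({"flow_rate": 42.0})
```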
The document also recommends establishing key performance indicators (KPIs) that measure AI effectiveness and track progress over time. Critical infrastructure owners and operators should schedule regular review sessions with AI stakeholders, such as vendors, governance boards, and operators, to discuss results, address concerns, and identify areas for improvement.
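As one illustration of what such KPI tracking might look like in practice, the sketch below computes two hypothetical indicators from alert records, alert precision and mean time to acknowledge; the record fields and sample data are assumptions for illustration only.

```python
# Illustrative KPI sketch: compute two hypothetical indicators for an AI
# anomaly detector from alert records. Field names and sample values are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AlertRecord:
    confirmed: bool        # did an operator confirm a real anomaly?
    minutes_to_ack: float  # time until an operator acknowledged the alert

def kpis(records: list[AlertRecord]) -> dict:
    confirmed = sum(r.confirmed for r in records)
    return {
        "alert_precision": confirmed / len(records),
        "mean_minutes_to_ack": sum(r.minutes_to_ack for r in records) / len(records),
    }

# A hypothetical batch of alerts to review with stakeholders:
sample = [AlertRecord(True, 4.0), AlertRecord(False, 12.5), AlertRecord(True, 2.0)]
print(kpis(sample))
```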
Commenting on the guidance, Hugh Carroll, vice president of corporate and government affairs at Fortinet, said in a written statement that “Leading global cybersecurity agencies, including US’s CISA, UK’s NCSC, and Canada’s CCCS, have released much-needed guidance outlining Principles for the Secure Integration of Artificial Intelligence in Operations Technologies (OT). Fortinet is honored to have had the opportunity to contribute to this important effort as we collectively work to best safeguard OT environments from today and tomorrow’s threats.”
“These new principles offer timely and practical guidance to safeguard resilience and security as AI becomes central to modern OT environments,” Marcus Fowler, CEO of Darktrace Federal, said. “It’s encouraging to see a strong focus on behavioral analytics, anomaly detection, and the establishment of safe operating bounds that can identify AI drift, model changes, or emerging security risks before they impact operations. This shift from static thresholds to behavior-based oversight is essential for defending cyber-physical systems where even small deviations can carry significant risk.”
Fowler highlighted that the guidance also encourages caution around LLM-first approaches to making safety decisions in OT environments, citing their unpredictability and limited explainability, which create unacceptable risk when human safety and operational continuity are on the line. It is important, he noted, to use the right AI for the right job.
“Taken together, these principles reflect a maturing understanding that AI in OT must be paired with continuous monitoring, and transparent and distinct identity controls,” according to Fowler. “We welcome this guidance and remain committed to helping operators put these safeguards into practice to strengthen resilience across critical infrastructure. We continue to see growing recognition of AI’s operational value in cybersecurity, as seen in recent NDAA provisions from bipartisan members of the House Armed Services Committee that emphasize AI-driven anomaly detection, securing operational technology, and incorporating AI into cybersecurity training – a proactive step toward strengthening U.S. cyber readiness.”
Floris Dankaart, lead product manager for managed extended detection and response at cybersecurity consulting firm NCC Group, said that “This global coordination is noteworthy; CISA, Australia’s ACSC, NSA, and other partners are coming together to address a shared challenge. That kind of coordination is rare and signals the importance of this issue. Equally important, most AI guidance addresses IT, not OT (the systems that keep power grids, water treatment, and industrial processes running). It’s refreshing and necessary to see regulators acknowledge OT-specific risks and provide actionable principles for integrating AI safely in these environments.”
“A major challenge will be addressing skill gaps in OT teams, especially where it relates to AI. OT environments are typically much more structured and deterministic than IT environments, which might be at odds with many modern (LLM-based) AI applications,” according to Dankaart. “At the same time, anomaly detection based on machine learning models has been commonplace in OT threat detection and monitoring for some time and remains a key component of the defender’s arsenal.”
He added that “Balancing these factors and getting down to ‘what we really mean’ by AI will be key for critical infrastructure owners. Luckily, some of the best practices in OT and AI use overlap; the idea that you must always have a manual fallback procedure, the ability to operate ‘in island mode,’ and human-in-the-loop controls, to name a few.”
In conclusion, the guidance notes that the integration of AI into OT presents both opportunities and risks for critical infrastructure owners and operators. “While AI can enhance efficiency, productivity, and decision-making, it also introduces new challenges that require careful management to support the safety, security, and reliability of OT systems.”
To successfully mitigate the risks of integrating AI into OT systems, it is essential that critical infrastructure owners and operators follow the principles in the guidance: understand AI, consider AI use in the OT domain, establish AI governance and assurance frameworks, and embed safety and security practices into AI and AI-enabled OT systems. By adhering to these principles and continuously monitoring, validating, and refining AI models, critical infrastructure owners and operators can achieve a balanced integration of AI into the OT environments that control vital public services.