NSA joins CISA and others to offer guidance on integrating AI in Operational Technology

Prayukth KV

December 8, 2025

Recently, the National Security Agency (NSA), along with the Cybersecurity and Infrastructure Security Agency (CISA), the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), and others, released the Cybersecurity Information Sheet (CSI), “Principles for the Secure Integration of Artificial Intelligence in Operational Technology.”

This report describes many ways that AI can be integrated into OT and spells out four basic principles that critical infrastructure owners and operators need to follow to both capitalize on the benefits and minimize the risks of integrating AI into OT environments. The principles detail guidance to understand AI; consider AI use in the OT domain; establish AI governance and assurance frameworks; and embed safety and security practices into AI and AI-enabled OT systems. In today’s blog post, we take a deep dive into each of them.

Backdrop 

If 2024 was the so-called year of AI experimentation, 2025 saw a more nuanced and grounded view of AI emerge. This view was closer to reality and brought forth the unique challenges associated with AI adoption across sectors. As businesses became more AI-aware, business leaders started looking at AI more holistically, well beyond the hype, and regulators stepped in with guidelines and advisories of their own.

A few days back, while we were at an event, CISA, along with the NSA, FBI, and cybersecurity agencies from the UK, Australia, Canada, Germany, and New Zealand, announced the release of a landmark document called Principles for the Secure Integration of Artificial Intelligence in Operational Technology (OT). We went through it during our flight back.

This isn't just another compliance checklist. It reads more like a much-needed response to a critical friction point across OT environments: the clash between the probabilistic nature of AI (which "guesses" based on data) and the deterministic nature of Operational Technology (where a valve must close or a turbine must stop). The two have started overlapping, and enterprises have been looking for some form of guidance to keep the genie in the bottle, so the whole thing doesn't turn into a runaway chain reaction without guardrails.

If you manage OT-based critical infrastructure, which includes energy, water, manufacturing, and transport, this guidance could well be your new "Pole Star." Here is a detailed breakdown of the four principles that, per the document, now define the standard of care for AI in OT.

The "Need-to-Know" Principle: Understand Before You Deploy 

To address the unique challenges of integrating AI into OT environments, critical infrastructure owners and operators need to verify that the AI system was designed securely and understand their roles and responsibilities throughout the AI system’s lifecycle. Similar to the hybrid ownership models used with cloud systems, owners and operators must clearly define these roles and responsibilities and communicate them to the AI system manufacturer, the OT supplier, and any system integrators or managed service providers.

The guidance moves away from "AI for AI's sake." The first step is purely educational and architectural. You cannot secure what you do not understand. 

  • Define the risks: AI introduces risks that traditional OT doesn't face, such as Model Drift (where an AI's accuracy degrades over time as machinery ages) and Hallucination (where an AI invents data patterns). A simple drift monitor is sketched after this list. 

  • Skill up: The workforce gap is real. Operators who know how to fix a pump might not know how to debug a neural network. The guidance mandates training personnel not just to use AI, but to understand its failure modes. 

  • Secure the lifecycle: Security starts at the code level. Adopting "Secure by Design" principles means vetting the AI supply chain before a single algorithm touches the plant floor. 
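
To make the model-drift risk concrete, here is a minimal sketch of a rolling-error drift monitor. Everything in it is illustrative rather than drawn from the guidance: the threshold, the window size, and the function names are assumptions.

```python
# Minimal drift-monitor sketch: compare model predictions against the
# physical measurement and flag when the rolling error grows too large.
# DRIFT_THRESHOLD and WINDOW_SIZE are illustrative values, not guidance.
from collections import deque

DRIFT_THRESHOLD = 5.0   # max acceptable mean absolute error, in plant units
WINDOW_SIZE = 100       # number of recent readings to evaluate

recent_errors = deque(maxlen=WINDOW_SIZE)

def record_prediction(predicted: float, observed: float) -> None:
    """Track the gap between the model's output and the real measurement."""
    recent_errors.append(abs(predicted - observed))

def drift_detected() -> bool:
    """Flag drift once the rolling error exceeds the commissioning baseline."""
    if len(recent_errors) < WINDOW_SIZE:
        return False  # not enough history yet to judge
    return sum(recent_errors) / len(recent_errors) > DRIFT_THRESHOLD
```

The design point is that the monitor sits outside the model: it compares the model to physical reality, so it still works when the model itself is wrong.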

The business case: "do we really need AI for this?" 

Before incorporating an AI system into their OT environment, critical infrastructure owners and operators should assess whether AI technologies are the most appropriate solution for their specific needs and requirements compared to other technologies. They should further consider whether an established capability meets their needs before pursuing more complex and novel AI-enabled solutions. While AI comes with unique benefits, it is an evolving technology that requires continuous evaluation of risks.

This assessment should weigh various factors, including security, performance, complexity, cost, and the effect on OT-environment safety for the specific application, and it should balance the benefits and risks of the AI technologies against the functional requirements the application must meet. Critical infrastructure owners and operators should understand the organization’s current capacity for maintaining an AI system in their OT environment and the potential impact of expanding the environment’s risk surface, such as requiring additional hardware and software for processing data through models or additional security infrastructure to protect the expanded attack surface.

This is perhaps the most distinctive aspect of the guidance. It challenges leaders to justify the inclusion of AI. The consensus is clear: complexity is the enemy of security.

  • Risk vs. Reward: Does the efficiency gain of an AI thermal sensor outweigh the risk of a cyber-physical attack? If a traditional PID controller works, stick with it. 

  • Data Provenance: In OT, data is truth. If an AI model is trained on bad sensor data, it will make dangerous decisions. You must vet the integrity of the training data just as rigorously as the physical machinery (a basic data-vetting sketch follows this list). 

  • Vendor Transparency: The "Black Box" era is over. You must demand transparency from vendors about how their models function, where the data comes from, and who has access to the model weights. 
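
As a rough illustration of vetting data provenance and integrity, the sketch below applies two checks before any data reaches training: a hash comparison against the value published by the data source, and a physical-plausibility filter. The field names, bounds, and choice of SHA-256 are assumptions for this sketch.

```python
# Illustrative pre-training integrity checks for OT sensor data.
import hashlib

PRESSURE_RANGE = (0.0, 250.0)  # assumed physical bounds, in bar

def verify_source(payload: bytes, expected_sha256: str) -> bool:
    """Confirm the data set matches the hash its source published."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

def plausible_only(readings: list[float]) -> list[float]:
    """Drop readings outside physically possible bounds before training."""
    lo, hi = PRESSURE_RANGE
    return [r for r in readings if lo <= r <= hi]
```

Neither check proves the data is good, but both are cheap ways to reject data that is provably bad before it shapes a model.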

Governance: Building the guardrails 

Effective governance structures are essential for the safe and secure integration of AI into OT environments. This involves establishing clear policies, procedures, and accountability structures for AI decision-making processes within OT. An AI governance structure should include the key stakeholders listed below, as well as any AI vendors needed for maintaining oversight during procurement, development, design, deployment, and operations. 

Key stakeholders in AI governance mechanisms 

Leadership. Securing commitment from senior leadership, including the CEO and CISO, is essential for establishing a robust AI governance framework. This helps ensure that the organization’s leadership is fully invested in the secure lifecycle management of AI systems and considers AI security risks and mitigations alongside AI functionality. 

OT/IT Subject Matter Experts. Engaging OT, IT, and AI subject matter experts is critical for effective and secure integration of AI systems into OT environments. These experts provide valuable insights into the OT environment and can help identify potential risks and challenges associated with AI integration. 

Cybersecurity Teams. Collaborating with cybersecurity teams is vital for developing policies and procedures that protect sensitive OT data used by AI models. Cybersecurity teams can help identify potential vulnerabilities and provide mitigation recommendations to help maintain the security of the organization’s data. 

You wouldn't let an uncertified engineer run a nuclear plant. You shouldn't let an uncertified model run it, either. 

  • The AI risk register: Treat AI components as distinct assets with their own risk profiles. 

  • Continuous testing: Unlike a physical gear that wears down visibly, an AI model degrades silently. The guidance calls for continuous evaluation, testing the model against "corner cases" (rare, extreme scenarios) to ensure it doesn't fail catastrophically when things go wrong; see the test-harness sketch after this list. 

  • Compliance mapping: Don't reinvent the wheel. Map your AI governance into existing frameworks like the NIST CSF or IEC 62443. 
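
As one way to operationalize continuous testing, here is a minimal sketch of a corner-case regression suite that could run on every retrain. The model interface (a predict method returning one advisory value) and the scenarios themselves are assumptions for illustration.

```python
# Sketch of a corner-case regression suite for an advisory AI model.
# Scenarios and safe ranges are illustrative, agreed at commissioning.
CORNER_CASES = [
    # (description, input features, safe output range)
    ("sensor saturated high", {"pressure": 250.0, "temp": 90.0}, (0.0, 10.0)),
    ("cold start, zero flow", {"pressure": 0.0, "temp": -5.0}, (0.0, 10.0)),
]

def run_corner_cases(model) -> list[str]:
    """Return descriptions of every scenario where the model's output
    leaves its pre-agreed safe envelope."""
    failures = []
    for description, features, (lo, hi) in CORNER_CASES:
        output = model.predict(features)  # assumed advisory interface
        if not lo <= output <= hi:
            failures.append(description)
    return failures
```

A non-empty result would block promotion of the retrained model, feeding straight back into the AI risk register above.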

The golden rule: Human-in-the-Loop and fail-safes 

This is the most critical section for safety. The joint agencies are unequivocal: AI should not have the final say on safety-critical actions. 

  • The "Kill Switch": If the AI fails, the system must fall back to a safe, known state (e.g., a hardwired mechanical stop). This "fail-safe" must be independent of the AI. 

  • Human Oversight: For high-consequence decisions, a human must be "in the loop." The AI advises; the operator acts (see the sketch after this list). 

  • Transparency in Ops: When an AI triggers an alert, it must be explainable. An operator needs to know why the AI thinks a pressure spike is imminent, not just that it thinks so. 
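
A minimal sketch of how these rules can combine in a single control step, assuming a pressure loop with an illustrative hard limit: the AI only proposes a setpoint, the operator must approve it, and an independent interlock overrides everything, including the AI.

```python
# "AI advises, operator acts" with an independent fail-safe.
# HARD_LIMIT_BAR and the safe fallback value are illustrative.
HARD_LIMIT_BAR = 200.0  # independent interlock, set at commissioning
SAFE_SETPOINT = 0.0     # known-safe state, e.g. valve closed

def control_step(ai_recommendation: float,
                 measured_pressure: float,
                 operator_approved: bool) -> float:
    """Return the setpoint actually sent to the actuator."""
    if measured_pressure >= HARD_LIMIT_BAR:
        return SAFE_SETPOINT  # fail-safe fires regardless of the AI
    if not operator_approved:
        return SAFE_SETPOINT  # no human sign-off, stay in the safe state
    return min(ai_recommendation, HARD_LIMIT_BAR)  # AI advises, never exceeds
```

In a real plant the interlock would be a hardwired or safety-PLC function, not Python; the sketch only shows the decision ordering.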

The last word

The era of "move fast and break things" certainly does not apply when you are managing the power grid or the water supply; no one can afford to even think that way. 

The CISA joint guidance effectively creates a bifurcated architecture: Use AI to analyze, optimize, and predict in the cloud or at the edge, but keep the physical controls simple, hardwired, and human-governed. 

So what should your mandate for 2026 be? Innovate with AI, but anchor it in the fundamentals of physical safety that have kept our lights on for a century. 

Planning to use AI for OT and need guidance? Talk to us. 


