
The year the plant manager started talking about ransomware


Prayukth K V

What trends are driving OT security everywhere in 2026?

In today’s blog post, I talk about how the first four months of 2026 have changed OT security forever.

Until a few months ago, a specific kind of conversation used to play out in industrial facilities everywhere. One would walk into a site assessment with a laptop and a clipboard, ask the engineering team about their control system inventory, and watch the room politely disengage. The engineers had had conversations with IT security people before. They had endured firewall discussions, vulnerability scanner demonstrations, and network diagram reviews that bore absolutely no resemblance to the actual production environment they were managing. They cooperated largely because corporate compliance required it, and once done, they would return to the work that kept the lights on, the product flowing, and the process running safely. These conversations were forgotten in their entirety by OT engineers.

That conversation has now fundamentally changed.

Before we move forward, don't forget to download the latest edition of our OT security threat landscape report from here.

A few months ago, I was at a midstream natural gas facility for a fairly routine architecture assessment. Before I could begin, the plant manager, someone who had spent twenty-three years on the job and knew every valve and compressor on that site by sound, pulled up a news article on his phone about an industrial cyberattack in a different sector. He set the phone down on the table and calmly asked: "How long does it take to recover a PLC if someone wipes the firmware?"

Not "are we compliant with the our compliance mandates." Not "does our SCADA system have antivirus." Or even “till when are you here.” He wanted to know recovery time, because he understood that this was a production problem first and a cybersecurity problem second.

Such questions indicate how much the landscape has changed.   

The compliance era left blind spots we are still paying for

To understand where we are, you have to understand how we reached here. For the better part of two decades, OT security was largely driven by regulatory compliance. NERC CIP in the electric sector. CFATS in the chemical sector. Various API and ISO standards in oil and gas. These frameworks did important work. They forced organizations to build device inventories, define access controls, and think about electronic security perimeters. Without them, the baseline would have been even lower than where it is today.

But compliance frameworks carry a fundamental structural limitation. They define a floor, not a ceiling. In OT security, the floor and the ceiling were far too often confused.

An organization that satisfied every requirement of its NERC CIP audit was considered secure by the framework's own definition. But NERC CIP-007 asks whether you know what ports and services are enabled on your BES Cyber System components. It does not ask whether an adversary with valid remote access credentials and knowledge of your network topology (possibly gleaned from an LLM) can traverse from your enterprise network to your turbine control system without touching any part of the monitored boundary. That gap, between compliance and actual risk, is precisely where threat actors thrived undetected for years.

The Colonial Pipeline attack in May 2021 became the inflection point the industry had been dreading. The ransomware did not directly compromise Colonial's operational technology; the attack was primarily contained to the IT network. But Colonial's operators could not confirm with confidence what had or had not been affected on the OT side, and faced with that uncertainty, they made the only defensible decision available to them: they shut down 5,500 miles of pipeline. Nearly half of the East Coast's fuel supply, halted not because the operational technology was definitively compromised, but because nobody could be certain it wasn't and no one knew which systems were impacted.

That uncertainty, or more specifically that fundamental lack of verified visibility, was the real vulnerability. And it cost orders of magnitude more than any ransom payment would have.

An OT incident is something else altogether

This is the thing that IT security professionals sometimes struggle to internalize until they have sat through a real OT incident. An OT security event is not just a data breach with a production floor attached.

When a hospital suffers a ransomware attack, the immediate consequences are operational disruption, potential patient safety risk, and data loss. All serious, all requiring an urgent response. However, the hospital can often route around the problem: implement paper-based processes, divert patients to other facilities, restore critical systems from clean backups. These are not easy steps to take, especially in the middle of a crisis when half of the teams are already tied up, but they exist.

On the other hand, when a chlorination system at a water treatment facility receives an unauthorized command to alter chemical feed rates, there is no routing around the problem. There is no graceful degradation. The chemistry is the product. The physics is the process.  

When the Triton malware was deployed against the safety instrumented systems at a petrochemical facility in the Middle East in 2017, the adversaries were not after data. They were not after money. They were attempting to disable the last line of automated safety protection, the system specifically engineered to bring a dangerous process to a safe state when all other controls fail, with the apparent intent of permitting a process to run to a catastrophic physical outcome. That threat model has no analogue in the IT security playbook.

And then there is NotPetya. The June 2017 attack, widely attributed to Russian military intelligence, was nominally targeted at Ukrainian financial infrastructure. But it spread indiscriminately across global corporate networks. Maersk, the shipping and logistics company, lost essentially its entire global network including the systems running port terminal operations worldwide. The estimated damages to Maersk alone were in the range of $300 million. Merck, Mondelez, and dozens of manufacturers discovered that their operational and enterprise environments were far less isolated than their architecture diagrams had led them to believe. Nobody had modeled the blast radius.

The silent evolution: From asset visibility to attack-path thinking

Here is where the field has genuinely matured in a way that deserves recognition, and where the most consequential work is happening right now.

For years, the stated objective of OT network security was asset visibility. Know what devices exist on your network. Understand what protocols they speak. Detect anomalies against a behavioral baseline. This was the core value proposition of every passive OT monitoring deployment, every industrial NDR platform, every asset discovery initiative. And it remains foundational, non-negotiable work. You cannot protect what you cannot see.

But asset visibility alone is just passive knowledge. It tells you what exists. It does not tell you what an adversary would do with what exists or how they will target it.

Attack-path thinking asks a structurally different question: given every asset discovered, every network connection mapped, and every vulnerability identified, what is the most viable path from an initial access point to the highest-consequence target? Not "do I have a PLC on my network?" but "if an adversary gains initial access to my Historian server, what is the sequence of lateral movement steps, through which protocol, across which trust relationship, exploiting which authentication weakness that leads from that Historian to my safety instrumented system controller?"
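The question above is, at its core, a graph problem. A minimal sketch of the idea, using an entirely hypothetical asset graph (every asset name and connection here is illustrative, not drawn from any real site), is a shortest-path search over the reachability relationships you have mapped:

```python
from collections import deque

# Hypothetical asset graph: an edge means an adversary on the first asset
# can plausibly reach the second (shared segment, trust relationship,
# reusable credentials). All names are illustrative.
REACHABILITY = {
    "enterprise_workstation": ["historian"],
    "historian": ["scada_server", "engineering_workstation"],
    "engineering_workstation": ["plc_line1", "sis_controller"],
    "scada_server": ["plc_line1"],
    "plc_line1": [],
    "sis_controller": [],
}

def attack_path(graph, source, target):
    """Breadth-first search: shortest hop sequence from source to target, or None."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path(REACHABILITY, "historian", "sis_controller"))
# → ['historian', 'engineering_workstation', 'sis_controller']
```

Real attack-path tooling weights edges by exploit difficulty and privilege required, but even this toy version makes the structural point: the Historian is two hops from the safety system, and the engineering workstation is the pivot worth hardening first.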

This is not academic threat modeling. It is operational security design, and the adversaries targeting industrial environments have been practicing it with discipline for years.

The adversary groups that specialize in industrial control system environments, tracked and characterized by organizations like Shieldworkz and government agencies including CISA, demonstrate in their tactics a prior, detailed knowledge of their targets' architectures. The Industroyer2 malware deployed against Ukrainian high-voltage substations in April 2022 contained hardcoded targeting of IEC 104 communications, a specific industrial automation protocol used in power systems. That capability does not emerge from a generic intrusion toolkit. It reflects adversaries who studied the target environment, understood its operational protocols, and built their attack around the attack path they had already mapped.

The CISA advisories and joint government publications regarding Volt Typhoon throughout 2023 and into 2024 described something equally concerning but different in character: a Chinese state-sponsored actor conducting long-term, deliberate pre-positioning within US critical infrastructure networks, specifically choosing living-off-the-land techniques to avoid generating the kind of anomalous activity that conventional monitoring would flag. The goal was not immediate disruption. It was the quiet establishment of persistent access — attack paths held in reserve, to be activated at a moment of strategic choice.

If your security program is oriented primarily around malware signatures and policy violations, you will not detect this category of adversary. If your program includes comprehensive OT asset visibility, protocol-level behavioral analysis, and — critically — a mapped understanding of which assets represent viable stepping stones toward your most critical operational technology, you have a genuine ability to detect and respond to it.

What boards need to understand

The boardroom conversation has changed, but it has not always changed in the right direction. A new pattern has emerged: executives attend a cybersecurity briefing, absorb the message that OT attacks are real and consequential, and leave with a vague mandate to "do something." Budgets are allocated. Monitoring platforms are purchased. Consultants are retained. And eighteen months later, the organization has a well-configured asset inventory, a populated dashboard, and no operationalized understanding of what to do at 2 AM on a Saturday when that dashboard fires a critical alert.

The right board question is not "do we have an OT security program?" The right question is: "If our primary compressor control system at our largest facility became unavailable tomorrow morning, what is our time-to-safe-state, what is our production impact per hour of downtime, and how long before we can return to full operations with verified confidence in the system's integrity?"

If your security leadership cannot answer that question with specificity, the board should be uncomfortable. If your operations engineering team has never run through that scenario in a structured exercise, your incident response capability has a gap that no technology purchase will close.

The insurance industry understood this calculus before much of the security community did. Cyber insurance underwriters reviewing OT environments began asking, with increasing technical specificity: What is the segmentation architecture between your enterprise network and your Level 2 control network? Can an operator initiate a remote HMI session from a personal device? What is the restoration process for a PLC configuration following a firmware corruption event, and how long does it take? These are not generic cybersecurity questions. These are questions about recovery time objectives for physical process control, and the answers directly determine the actuarial risk model and, consequently, the premium.

The organizations that answer those questions with confidence and precision are the ones that have done the hard, unglamorous work — not just technology deployment, but consequence modeling and recovery engineering.

The geopolitical dimension practitioners can no longer ignore

I want to address something that the technical community sometimes resists acknowledging directly: OT security is now an explicit national security concern, and operating as though that context doesn't belong in your threat model is a professional error.

The attacks on Ukrainian power distribution infrastructure in December 2015 and December 2016 — executed using BlackEnergy and then the significantly more sophisticated Industroyer platform — were not criminal campaigns. They were military operations against civilian critical infrastructure, designed to cause physical consequences in the civilian population during an active armed conflict. The 2023 attacks on US water utilities attributed to actors affiliated with Iran's IRGC — targeting facilities running specific models of Unitronics PLCs, leaving visible messages on operator screens — were calibrated demonstrations. The message was clear: this infrastructure is reachable, we know your equipment, and we are choosing to show you rather than harm you. That restraint is not guaranteed to persist under different geopolitical conditions.

Governments have responded with uncharacteristic urgency. The TSA Pipeline Security Directives issued in the wake of the Colonial Pipeline attack mandated network segmentation, monitoring, and incident response capabilities for pipeline operators in a way that would have been politically inconceivable a decade earlier. The European Union's NIS2 Directive expanded mandatory cybersecurity obligations to operators of essential services across energy, water, manufacturing, and transport in ways that directly implicate OT security programs. CISA's cross-sector OT security guidance has become meaningfully more technically specific with each iteration.

This is not bureaucratic overhead. This is governments formally recognizing that manufacturing capacity, energy delivery infrastructure, and water supply are strategic national assets whose resilience is a defense matter as much as a business continuity matter.

For the practitioner in the field, the practical implication is this: your threat model can no longer be defined solely by criminal ransomware groups chasing a financial return. You are potentially within the targeting calculus of state-sponsored actors with long operational time horizons, persistent access objectives, and an interest in holding options for physical disruption at a scale and timing of their strategic choosing. Your architecture, your monitoring, and your attack-path modeling need to be built for that reality — even if you spend your entire career hoping you never need it.

Where the real work is happening

The technology has improved substantially. The leading OT network detection and response platforms can now do things that would have been remarkable a decade ago: passive protocol-native decoding of industrial communications across Modbus, DNP3, EtherNet/IP, PROFINET, IEC 61850, Siemens S7, and others; behavioral anomaly detection calibrated to industrial process cycles rather than IT traffic patterns; vulnerability correlation against specific firmware versions using ICS-CERT advisories and vendor security bulletins; and integration with security operations center platforms that carry asset criticality context so that the analyst receiving an alert understands what kind of device is involved and what the consequence of an event on that device might be.
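The behavioral-baseline idea mentioned above can be illustrated in a few lines. This is a deliberately simplified sketch, assuming you can extract (device, function code) pairs from decoded protocol traffic; the device names and Modbus function codes are illustrative, and production platforms model far richer features than this:

```python
from collections import defaultdict

def learn_baseline(observations):
    """Learn which protocol function codes each device used during a quiet
    baselining period. observations: iterable of (device, function_code)."""
    baseline = defaultdict(set)
    for device, fc in observations:
        baseline[device].add(fc)
    return baseline

def flag_anomalies(baseline, traffic):
    """Flag any (device, function_code) pair never seen in the baseline."""
    return [(d, fc) for d, fc in traffic if fc not in baseline.get(d, set())]

# Quiet period: RTU-7 only ever reads registers (Modbus FC 3 and 4).
quiet_period = [("rtu_7", 3), ("rtu_7", 4), ("plc_2", 3)]
baseline = learn_baseline(quiet_period)

# FC 16 (write multiple registers) from rtu_7 is outside its baseline.
print(flag_anomalies(baseline, [("rtu_7", 3), ("rtu_7", 16)]))
# → [('rtu_7', 16)]
```

The value of this approach in OT, as the paragraph notes, is that industrial traffic is periodic and predictable enough that a learned baseline is meaningful, which is rarely true of enterprise IT traffic.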

The platforms are not the problem.

The gap is in operationalization. An OT monitoring sensor that generates an alert for anomalous DNP3 traffic at a remote terminal unit at 2 AM on a holiday weekend does not help you if your SOC analyst doesn't know what DNP3 is, your incident response procedure says "escalate to IT security," and nobody has mapped that RTU to the physical process it controls or documented what the consequence of an unauthorized command to it would look like in operational terms.
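One concrete piece of that operationalization is joining the raw alert to an asset register that carries process context. A minimal sketch, with an entirely hypothetical register entry (the IP, asset names, and fields are invented for illustration):

```python
# Hypothetical asset register: maps an address to the physical process it
# controls and who should be called. In practice this comes from criticality
# mapping work, not from the monitoring platform itself.
ASSET_REGISTER = {
    "10.20.30.7": {
        "name": "RTU-7",
        "process": "chlorine feed control, treatment train B",
        "criticality": "safety-critical",
        "escalation": "on-call controls engineer",
    },
}

def enrich(alert):
    """Attach asset context to a raw alert so the analyst sees consequence,
    not just a rule name and an IP."""
    ctx = ASSET_REGISTER.get(alert["dst_ip"])
    if ctx is None:
        return {**alert, "context": "unknown asset - investigate inventory gap"}
    return {**alert, **ctx}

raw = {"rule": "anomalous DNP3 write", "dst_ip": "10.20.30.7"}
enriched = enrich(raw)
print(enriched["process"], "->", enriched["escalation"])
# → chlorine feed control, treatment train B -> on-call controls engineer
```

The design point is the fallback branch: an alert on an address missing from the register is itself a finding, because it means the inventory the whole program rests on has a gap.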

The organizations making genuine, durable progress are the ones doing the unglamorous work. Sustained physical walkdowns to validate that network topology diagrams reflect what is actually cabled in the field, because they almost never do, especially after years of incremental modifications. Joint tabletop exercises where control system engineers and security analysts work through the same attack scenario from their different knowledge bases and discover, in a low-stakes environment, how much each doesn't understand about the other's world. Criticality mapping that makes it from the P&ID and the engineering design documentation all the way through to the information available to a SOC analyst at the moment of an alert. Recovery procedures that are not just documented but physically tested, because there is no substitute for discovering that your PLC backup restoration procedure takes four hours instead of forty-five minutes before you are in the middle of an active incident.

That is where the security actually resides. Not in the platform. In the people who understand both the industrial process and the adversary well enough to act on the information the platform surfaces.

The psychological shift

I return to the plant manager with the phone. Twenty-three years on the job, responsible for a facility that processes millions of cubic feet of natural gas daily, who knows his equipment the way a surgeon knows anatomy. Asking about PLC recovery times.

That conversation didn't happen because an NDR sensor was installed last quarter. It happened because a chain of events, from pipeline shutdowns visible on the national news, water supply incidents making the front page, and manufacturing facilities going dark from infections that started in accounting, to government directives arriving with enforcement teeth and insurance renewals requiring answers to uncomfortable questions, accumulated enough weight that the psychological abstraction of "cyber risk" finally collapsed into the operational concrete of "my process, my equipment, my people, my responsibility."

That shift is more strategically valuable than any technology deployment. You can install the most capable OT monitoring platform available in a facility where operations doesn't understand why it matters, and it will generate alerts that nobody acts on, dashboards that nobody reads, and reports that sit in a SharePoint folder until the annual compliance audit. You can have a facility where the engineering team has genuinely internalized the threat model, and they will compensate for technology limitations, identify anomalies through process intuition, and make better decisions under pressure than any tool can prompt.

The security practitioner's role in this environment is not purely technical. It is substantially translation work between the threat intelligence community and the operations engineering community, between the compliance requirement and the actual underlying risk, between the alert the platform generates and the human judgment it needs to support.

The plant manager asking about PLC recovery times is not a management challenge to be handled. He is a security capability to be developed.

In 2026 and beyond, cultivate more of them.

Ready for tool-based OT security assessments? Talk to us now.
