Elastio Software

Elastio & NetApp Webinar Recap: Why Cyber Risk Spikes During Disasters

Author: Cecily Polonsky

Why Cyber Risk Spikes During Disasters and How to Build Resilience by Design

Disaster recovery planning has traditionally focused on infrastructure. Systems fail, environments go offline, and IT teams restore operations as quickly as possible.

But that model no longer reflects the reality organizations face today.

In a recent webinar with NetApp and Elastio, Brittney Bell (NetApp), Mike Fiorella (NetApp), and Eswar Nalamuru (Elastio) explored an increasingly common pattern. When organizations experience a disruption, whether it is a natural disaster, infrastructure outage, or operational crisis, cyber risk often increases at the exact same time.

Attackers understand that recovery periods create vulnerability. Systems are under pressure, teams are focused on restoration, and normal controls may be temporarily bypassed. The result is that disaster scenarios frequently become cyber incidents as well.

This shift is forcing organizations to rethink how resilience is designed.

Instead of treating disaster recovery and cybersecurity as separate functions, organizations are beginning to design recovery strategies that assume both types of events may occur simultaneously.

When crises collide

Brittney Bell described this challenge using the concept of a “polycrisis,” where multiple forms of disruption occur together rather than in isolation.

Natural disasters alone can cause widespread operational impact. Infrastructure damage, power outages, and supply chain disruptions can force organizations into emergency recovery mode. But during those same moments, cyber attackers may also exploit the chaos.

In fact, research shows that a significant share of organizations affected by natural disasters also experience cyber attacks during the same period.

Examples from recent history illustrate the scale of impact that disasters can have on infrastructure and digital operations:

  • Major hurricanes that disrupted utilities and transportation infrastructure for weeks
  • Flooding events that took critical systems offline
  • Storms that impacted data centers and shut down major digital services

These events demonstrate why resilience cannot be limited to infrastructure recovery. Organizations must also assume that security threats will emerge when systems are already under stress.

As Bell emphasized, resilience today is not just an IT concern. It is a business survival strategy.

Disaster recovery and cyber recovery are not the same

A key theme of the discussion was the difference between traditional disaster recovery and cyber recovery.

Eswar Nalamuru explained that many organizations still approach both scenarios using the same framework. In practice, the two require very different assumptions.

In a traditional disaster recovery scenario, the failure is usually clear. Systems may be offline or infrastructure may be unavailable, but organizations generally trust their backup data and recovery points.

Cyber recovery introduces uncertainty.

Security teams may not know whether attackers still have access to the environment, whether backups have been compromised, or which recovery point is actually safe to restore.

This changes how recovery must be executed.

Traditional disaster recovery prioritizes speed and service restoration. Cyber recovery requires precision. Teams must identify a clean recovery point and ensure that restoring data will not reintroduce the threat.

That investigation step is what often slows recovery efforts during ransomware incidents.

Without confidence in backup integrity, organizations may spend days or weeks determining which recovery point can be trusted.

The three pillars of modern resilience

The speakers outlined a simple framework that organizations can use to bridge the gap between disaster recovery and cyber recovery.

Effective resilience strategies now require three capabilities working together.

Availability

Systems and data must remain accessible even during disruption. High availability architectures and geographic redundancy ensure that applications can continue operating if a primary location fails.

Isolation and immutability

Backup data must be protected from tampering or deletion. Features such as immutable storage and write-once policies help ensure attackers cannot alter or destroy recovery data.

Integrity

Organizations must be able to verify that their backups are clean and recoverable. Without validation, backups may contain encrypted or corrupted data that will fail during recovery.
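One commonly used signal for this kind of validation is Shannon entropy: encrypted data is statistically close to random, while most legitimate file content is not. The sketch below is a minimal illustration of that signal only, not Elastio's implementation, and (as the evasion techniques discussed later show) entropy alone is not a sufficient check.

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 = pure repetition, 8.0 = random)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# Plain text clusters around a small set of byte values, so entropy stays low.
text = b"Quarterly revenue report: all regions met their targets.\n" * 1000
# Encrypted (or compressed) data is statistically close to uniform random bytes.
ciphertext_like = os.urandom(len(text))

print(f"text-like data:      {shannon_entropy(text):.2f} bits/byte")
print(f"encrypted-like data: {shannon_entropy(ciphertext_like):.2f} bits/byte")
```

A backup full of near-8.0 readings where low-entropy business documents used to be is a strong hint that a recovery point is not safe to restore.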

While many organizations already invest heavily in availability and immutability, integrity validation is often the missing layer.

The storage foundation for resilient recovery

Mike Fiorella discussed how many organizations are using Amazon FSx for NetApp ONTAP as a foundation for modern recovery strategies.

FSx for NetApp ONTAP, often referred to as FSxN, is a managed storage service in AWS that incorporates NetApp’s ONTAP data management platform.

Several capabilities make it well suited for resilient architectures.

High availability deployments allow data to remain accessible even if a failure occurs within a single availability zone.

Snapshot technology enables fast, space-efficient point-in-time recovery of data.

SnapMirror replication allows organizations to maintain synchronized copies of data in secondary AWS regions, enabling rapid failover if a primary region becomes unavailable.

SnapLock adds immutability by allowing organizations to enforce write-once retention policies that prevent modification or deletion of protected data.

Together, these capabilities allow organizations to create layered recovery strategies that include local snapshots, cross-region replication, and long-term protected backups.

The integrity challenge in ransomware recovery

Even with strong storage and backup protections in place, a critical question often remains unanswered during ransomware incidents.

Is the data clean?

Eswar Nalamuru explained that modern ransomware campaigns increasingly target backup infrastructure. If attackers can encrypt both production systems and backups, they remove the organization’s ability to recover independently.

Attack techniques have also become far more sophisticated. Many modern ransomware variants use approaches designed to evade traditional detection tools.

Examples include:

  • Fileless attacks that operate entirely in memory
  • Encryption techniques that modify only portions of files
  • Obfuscation techniques that preserve file metadata
  • Polymorphic malware variants that continuously change signatures

These techniques make it difficult for traditional security tools to detect encryption activity before damage occurs.
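This is why integrity scanners look beyond statistics at whether file contents still match their expected structure. Encryption destroys well-known file signatures ("magic bytes") even when filenames, sizes, and metadata are preserved. The following is an illustrative sketch of that idea only, not any vendor's actual detection logic:

```python
# Well-known file signatures that encryption destroys, even when
# filenames, timestamps, and metadata are left intact.
MAGIC = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".pdf": b"%PDF-",
    ".zip": b"PK\x03\x04",  # also .docx/.xlsx, which are ZIP containers
}

def structure_intact(name: str, data: bytes) -> bool:
    """True if the file still begins with the signature its extension promises."""
    for ext, sig in MAGIC.items():
        if name.lower().endswith(ext):
            return data.startswith(sig)
    return True  # unknown type: structural check is inconclusive

# A valid PDF header passes; the same file after encryption fails,
# even though its name and metadata are unchanged.
print(structure_intact("report.pdf", b"%PDF-1.7\n..."))        # True
print(structure_intact("report.pdf", b"\x9c\x11\xd4\x02..."))  # False
```

Unlike signature-based malware detection, this check keys on what the data has become rather than on how the attacker changed it, so polymorphic or fileless variants do not evade it.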

To address this challenge, Elastio focuses on validating the integrity of backup data. Its platform scans stored data to detect ransomware encryption patterns and identify clean recovery points that organizations can safely restore.

The goal is simple but critical. When a crisis occurs, recovery teams should know exactly where to recover from.
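Conceptually, once every recovery point carries a scan verdict, "where to recover from" reduces to selecting the most recent point verified clean. A hypothetical sketch (the names and verdict values here are illustrative, not Elastio's API):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RecoveryPoint:
    taken_at: datetime
    verdict: str  # "clean", "infected", or "unscanned"

def last_clean_point(points: list[RecoveryPoint]) -> Optional[RecoveryPoint]:
    """Newest recovery point verified clean; None if no point qualifies."""
    clean = [p for p in points if p.verdict == "clean"]
    return max(clean, key=lambda p: p.taken_at, default=None)

points = [
    RecoveryPoint(datetime(2026, 2, 10, 2, 0), "clean"),
    RecoveryPoint(datetime(2026, 2, 11, 2, 0), "clean"),
    RecoveryPoint(datetime(2026, 2, 12, 2, 0), "infected"),   # encryption began here
    RecoveryPoint(datetime(2026, 2, 13, 2, 0), "unscanned"),
]

print(last_clean_point(points).taken_at)  # 2026-02-11 02:00:00
```

Note that the newest point is skipped: without a verdict, it cannot be trusted, which is exactly the days-of-forensics gap that pre-validated recovery points eliminate.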

Designing resilience for the real world

The central lesson from the webinar is that recovery planning must evolve.

Organizations can no longer assume that disasters and cyber attacks occur independently. Real-world disruptions often combine both.

Building resilient architectures requires integrating infrastructure availability, immutable data protection, and backup integrity validation into a single strategy.

When these elements work together, organizations can recover faster and with greater confidence, even under the most challenging conditions.

Join us for the “Building for the Breach” workshops

To continue the conversation, Elastio, NetApp, and AWS are hosting a series of in-person workshops focused on ransomware resilience and recovery readiness.

The Building for the Breach workshops explore how organizations can prepare for ransomware attacks before they occur.

Each session includes:

  • An executive discussion on modern cyber resilience strategies
  • A technical walkthrough of ransomware attack and recovery scenarios
  • Hands-on demonstrations of technologies that help validate recovery points and accelerate recovery

Upcoming workshops are scheduled in cities including New York, Boston, Chicago, and Toronto.

If you are responsible for disaster recovery, cybersecurity, or infrastructure resilience, these sessions provide an opportunity to see how modern recovery strategies work in practice and how organizations can strengthen their readiness for future disruptions.

You can learn more about the workshops and upcoming dates through the Elastio events page.

Recover With Certainty

See how Elastio validates every backup across clouds and platforms to recover faster, cut downtime by 90%, and achieve 25x ROI.

Related Articles
Elastio Software
February 27, 2026

The Rise of Off-Platform Encryption

Modern ransomware attacks no longer follow a predictable script. Today’s adversaries are methodical and adaptive. They move laterally, identify valuable data, and increasingly attempt techniques designed to evade traditional detection controls. One scenario highlighted in recent threat reporting involves attackers transferring data from a storage array to an unmanaged host, encrypting it outside the production platform, and then writing the encrypted data back.

The Illusion of Evasion

On the surface, this appears clever. If encryption happens “off platform,” perhaps it avoids detection mechanisms tied to the storage system itself. Security teams may assume that because the encryption process did not execute within the storage environment, it leaves fewer indicators behind. That assumption does not hold up.

Why Location Doesn’t Matter

The critical point is that ransomware is not dangerous because of where encryption executes. It is dangerous because of what encryption does to data. When attackers copy files to an unmanaged system, encrypt them externally, and then reintroduce them into the environment, the storage platform may simply register file modifications. Blocks are written, files are updated, and nothing may appear operationally unusual at first glance.

Encryption Leaves a Mark

But the data itself has fundamentally changed. Elastio does not depend on observing the act of encryption. It does not require visibility into the unmanaged host. It does not rely on detecting specific attacker tools or processes. Instead, Elastio evaluates the integrity and structure of the data itself. When encrypted data is written back into a protected environment, it exhibits clear mathematical characteristics: high entropy, loss of expected file structure, destruction of known signatures, and transformation from meaningful structured content into statistically random output. Those changes are measurable and immediately identifiable.

In an enterprise cloud environment, when encrypted files are reintroduced after off-platform manipulation, Elastio detects the anomaly as soon as the altered data is analyzed. The system recognizes that the file state no longer matches expected structural norms. Compromised data is flagged right away. Clean recovery points are preserved, and confidence in restoration remains intact.

Protecting Recovery Before It’s Too Late

This matters because backup compromise is now a primary objective of modern ransomware groups. Attackers understand that if they can corrupt recovery data, they dramatically increase pressure to pay. Off-platform encryption is one way they attempt to quietly poison what organizations believe are safe restore points. Elastio prevents that silent corruption from spreading undetected.

The architectural advantage is straightforward. Elastio focuses on continuously validating the recoverability and integrity of backup data. It does not chase attacker techniques, which evolve constantly. It analyzes outcomes, which cannot hide. Even if encryption occurs halfway around the world on infrastructure the organization never sees, the reintroduced data cannot disguise its cryptographic fingerprint. The mathematical properties of encryption are universal. They do not depend on vendor, platform, or geography. As soon as that altered data touches protected storage, the signal is present.

Attackers may change tools, infrastructure, and tradecraft. They may leverage unmanaged hosts, cloud workloads, or insider access. They may try to fragment, stagger, or throttle their activity to avoid behavioral alarms. None of that changes what encrypted data looks like when examined structurally.

Verification Is the Advantage

That is why outcome-based detection matters. By analyzing the data itself rather than the surrounding activity, Elastio removes the blind spots attackers attempt to exploit. Off-platform encryption is simply another variation of the same fundamental tactic: render data unusable while attempting to evade detection. When encrypted content re-enters the environment, it is seen immediately for what it is. In cybersecurity, assumptions create risk. Verification creates resilience.

Elastio Software
February 22, 2026

The False Security of Checked Boxes

In the high-stakes world of cyber-recovery, there is a dangerous assumption that “detection” is a binary state: either you have it or you don’t. Most backup vendors have checked the box by offering anomaly and entropy-based monitoring. But as a CISO who has spent over a decade in regulated industries, I’ve learned that a check-box control is often worse than no control at all. It creates a false sense of security while delivering a signal so noisy and inaccurate that it’s practically unusable.

The Inaccuracy Problem: Inference Is Not Evidence

The core issue with the ransomware detection provided by backup vendors isn’t just where it happens; it’s how it happens. These tools rely on statistical inference rather than data evidence:

  • Anomaly detection: monitors for “unusual” behavior, like a sudden spike in changed blocks or a deviation in backup window duration.
  • Entropy detection: measures data randomness to infer encryption.

In a modern enterprise, data is naturally “noisy.” Compressed database logs, encrypted video files, and standard application updates all register as anomalies or high-entropy events. Because these tools cannot distinguish between a legitimate .zip file and a ransomware-encrypted .docx, they produce a constant stream of false positives.

Figure 1: Modern ransomware (red) operates below the statistical noise floor while legitimate enterprise data generates constant false-positive noise. Elastio detects threats through structural content inspection, independent of entropy.

For a SOC team, this noise is toxic. When a tool is consistently inaccurate, the human response is predictable: the alerts are muted, tuned down, or ignored. If your “last line of defense” relies on a signal that your team doesn’t trust, you don’t actually have a defense.

Beyond the “Big Bang”: The Rise of Evasive Encryption

Current anomaly and entropy tools were designed for the “Big Bang” encryption events of years past. As of 2026, threat actors have evolved well beyond this model, with variants including LockFile specifically engineered to stay below the statistical noise floor using intermittent encryption.

  • Intermittent encryption: encrypting every other 4KB block so the overall entropy change remains negligible.
  • Low-entropy encryption: using specialized schemes that mimic the statistical signature of benign, compressed data.
  • Selective corruption: attacking only file headers or metadata while leaving the bulk of the file statistically “normal.”

Against these techniques, a statistical guess is useless. You need a data integrity control that performs deep content inspection to validate the actual structure of the data, not just its randomness.

Mapping Integrity to the Resilience Lifecycle

A high-fidelity integrity engine, like Elastio, provides the same level of accuracy regardless of where it is deployed. However, for a CISO, the location of that check is a strategic decision based on the resilience lifecycle:

  • The backup layer: validating integrity here is non-negotiable. It ensures that when you hit “restore,” you aren’t re-injecting corrupted data into your environment and extending downtime.
  • The production layer (VMs, buckets, filers): for mission-critical data, waiting for the backup cycle to run is a luxury we can’t afford. Detecting corruption at the source, in your production VMs, S3 buckets, or filers, is about minimizing the blast radius.

Data integrity validation serves different purposes depending on where it is applied in the resilience lifecycle. Scanning production data across VMs, filers, and object stores is the most effective way to minimize blast radius and prevent spread, because it detects corruption before it propagates downstream. When production data cannot be scanned due to security boundaries, operational constraints, or tenancy limitations, snapshots and replicas become the practical control point for achieving the same outcome. In this model, snapshot integrity analysis is not additive to production scanning; it is a substitute. Both serve the same objective: early detection and containment before corruption reaches backups or immutable storage.

The CISO’s Bottom Line: Proving vs. Guessing

Resilience is measured by the speed and certainty of recovery. Anomaly and entropy-based detection fail on both counts: they are too inaccurate to provide certainty and too late to provide speed. True resilience requires moving from statistical inference to data integrity validation. Whether validating backups to prove recoverability or monitoring production data to prevent spread, the objective is the same: replace guessing with proof. In regulated environments, “recovery is safe” is the only defensible statement a CISO can make to the board. The ability to detect these advanced threats early is the difference between fast, assured recovery and a ransomware event that results in devastating downtime, data loss, and financial impact.

Elastio Software,  Ransomware
February 16, 2026

Cloud ransomware incidents rarely begin with visible disruption. More often, they unfold quietly, long before an alert is triggered or a system fails. By the time incident response teams are engaged, organizations have usually already taken decisive action. Workloads are isolated. Instances are terminated. Cloud dashboards show unusual activity. Executives, legal counsel, and communications teams are already involved. And very quickly, one question dominates every discussion. What can we restore that we actually trust?

That question exposes a critical gap in many cloud-native resilience strategies. Most organizations have backups. Many have immutable storage, cross-region replication, and locked vaults. These controls are aligned with cloud provider best practices and availability frameworks. Yet during ransomware recovery, those same organizations often cannot confidently determine which recovery point is clean.

Cloud doesn’t remove ransomware risk; it relocates it

This is not a failure of effort. It is a consequence of how cloud architectures shift risk. Cloud-native environments have dramatically improved the security posture of compute. Infrastructure is ephemeral. Servers are no longer repaired; they are replaced. Containers and instances are designed to be disposable. From a defensive standpoint, this reduces persistence at the infrastructure layer and limits traditional malware dwell time. However, cloud migration does not remove ransomware risk. It relocates it. Persistent storage remains long-lived, highly automated, and deeply trusted. Object stores, block snapshots, backups, and replicas are designed to survive everything else. Modern ransomware campaigns increasingly target this persistence layer, not the compute that accesses it.

Attackers don’t need malware; they need credentials

Industry investigations consistently support this pattern. Mandiant, Verizon DBIR, and other threat intelligence sources report that credential compromise and identity abuse are now among the most common initial access vectors in cloud incidents. Once attackers obtain valid credentials, they can operate entirely through native cloud APIs, often without deploying custom malware or triggering endpoint-based detections. From an operational standpoint, these actions appear legitimate. Data is written, versions are created, snapshots are taken, and replication occurs as designed. The cloud platform faithfully records and preserves state, regardless of whether that state is healthy or compromised. This is where many organizations encounter an uncomfortable reality during incident response.

Immutability is not integrity

Immutability ensures that data cannot be deleted or altered after it is written. It does not validate whether the data was already encrypted, corrupted, or poisoned at the time it was captured. Cloud-native durability and availability controls were never designed to answer the question incident responders care about most: whether stored data can be trusted for recovery. In ransomware cases, incident response teams repeatedly observe the same failure mode. Attackers encrypt or corrupt production data, often gradually, using authorized access. Automated backup systems snapshot that corrupted state. Replication propagates it to secondary regions. Vault locks seal it permanently. The organization has not lost its backups. It has preserved the compromised data exactly as designed.

Backup isolation alone is not enough

This dynamic is particularly dangerous in cloud environments because it can occur without malware, without infrastructure compromise, and without violating immutability controls. CISA and NIST have both explicitly warned that backup isolation and retention alone are insufficient if integrity is not verified. Availability testing does not guarantee recoverability.

Replication can accelerate the blast radius

Replication further amplifies the impact. Cross-region architectures prioritize recovery point objectives and automation speed. When data changes in a primary region, those changes are immediately propagated to disaster recovery environments. If the change is ransomware-induced corruption, replication accelerates the blast radius rather than containing it. From the incident response perspective, this creates a critical bottleneck that is often misunderstood.

The hardest part of recovery is deciding what to restore

The hardest part of recovery is not rebuilding infrastructure. Cloud platforms make redeployment fast and repeatable. Entire environments can be recreated in hours. The hardest part is deciding what to restore. Without integrity validation, teams are forced into manual forensic processes under extreme pressure. Snapshots are mounted one by one. Logs are reviewed. Timelines are debated. Restore attempts become experiments. Every decision carries risk, and every delay compounds business impact. This is why ransomware recovery frequently takes days or weeks even when backups exist.

Boards don’t ask “Do we have backups?”

Boards do not ask whether backups are available. They ask which recovery point is the last known clean state. Without objective integrity assurance, that question cannot be answered deterministically. This uncertainty is not incidental. It is central to how modern ransomware creates leverage. Attackers understand that corrupting trust in recovery systems can be as effective as destroying systems outright.

What incident response teams wish you had is certainty

What incident response teams consistently wish organizations had before an incident is not more backups, but more certainty. The ability to prove, not assume, that recovery data is clean. Evidence that restoration decisions are based on validated integrity rather than best guesses made under pressure.

Integrity assurance is the missing control

This is where integrity assurance becomes the missing control in many cloud strategies. NIST CSF explicitly calls for verification of backup integrity as part of the Recover function. Yet most cloud-native architectures stop at durability and immutability. When integrity validation is in place, recovery changes fundamentally. Organizations can identify the last known clean recovery point ahead of time. Recovery decisions become faster, safer, and defensible. Executive and regulatory confidence improves because actions are supported by evidence. From an incident response standpoint, the difference is stark. One scenario is prolonged uncertainty and escalating risk. The other is controlled, confident recovery.

Resilience is proving trust, not storing data

Cloud-native architecture is powerful, but ransomware has adapted to it. In today’s threat landscape, resilience is no longer defined by whether data exists somewhere in the cloud. It is defined by whether an organization can prove that the data it restores is trustworthy. That is what incident response teams see after cloud ransomware. Not missing backups, but missing certainty.

Certainty is the foundation of recovery

And in modern cloud environments, certainty is the foundation of recovery.