Is Your Business Prepared for a Cyber Attack?
By Zeen Rachidi
That was the central question guiding our recent executive roundtable, co-hosted by Sheltered Harbor, AWS, NetApp, and Elastio in New York City.
The conversation brought together senior leaders in the financial services industry to explore what it truly takes to prepare for a ransomware event that could jeopardize data, disrupt operations, or erode customer trust. While the event was focused on financial institutions, the insights shared are relevant to any organization that views recovery as a strategic risk area.
Here are three key takeaways we hope all resilience leaders will carry forward.
1. Executive Buy-In Is Foundational
Cyber resilience is not just an IT issue. It is a board-level concern that requires alignment across leadership, not only on tooling but on priorities. Everyone around the table agreed that programs stall without clear ownership, measurable objectives, and regular testing.
Executives set the tone. They define what "good" looks like and ensure it is resourced and reviewed. Recovery has to be treated as a business-critical capability, not an afterthought when something goes wrong.
Helpful resource:
Sheltered Harbor Maturity Model for Recovery
Use this to benchmark your current state, identify gaps, and clearly communicate next steps to stakeholders.
- The "Three I’s" Are the New Standard for Ransomware-Ready Data Protection
A recurring theme throughout the discussion was the growing adoption of the "Three I’s" framework: Immutability, Isolation, and Integrity.
- Immutability keeps backup data from being modified or deleted.
- Isolation ensures attackers cannot reach recovery data.
- Integrity validates that data is clean and restorable.
All three are essential. Without them, attackers retain leverage and recovery remains a gamble. As one participant put it, "Immutability without integrity is just a locked box filled with poisoned data."
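To make the Three I’s concrete: on AWS, the first two can be spot-checked programmatically, and a stored checksum gives a shallow version of the third. The sketch below is illustrative only; the bucket and key names are hypothetical, and it assumes boto3 with credentials already configured.

```python
"""Minimal sketch: spot-checking the "Three I's" on an S3 backup bucket.
Bucket/key names are hypothetical placeholders."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-backup-vault"        # hypothetical bucket name
KEY = "backups/db-2025-01-01.tar"      # hypothetical object key

def immutability_enabled() -> bool:
    """Immutability: S3 Object Lock is enabled on the bucket."""
    try:
        cfg = s3.get_object_lock_configuration(Bucket=BUCKET)
        return cfg["ObjectLockConfiguration"].get("ObjectLockEnabled") == "Enabled"
    except ClientError:
        return False  # no Object Lock configuration at all

def public_access_blocked() -> bool:
    """A partial isolation proxy: all public access is blocked."""
    try:
        pab = s3.get_public_access_block(Bucket=BUCKET)
        return all(pab["PublicAccessBlockConfiguration"].values())
    except ClientError:
        return False

def has_stored_checksum() -> bool:
    """A shallow integrity signal: the object carries a stored checksum.
    This proves the bytes haven't changed since upload, NOT that the
    contents were clean when written."""
    head = s3.head_object(Bucket=BUCKET, Key=KEY, ChecksumMode="ENABLED")
    return any(k.startswith("Checksum") for k in head)

print(f"immutable={immutability_enabled()} "
      f"isolated={public_access_blocked()} "
      f"checksummed={has_stored_checksum()}")
```

Note what the last check does and does not prove: a checksum on a locked object shows the bytes have not changed since upload, not that they were clean when written. That is exactly the "locked box filled with poisoned data" problem.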
Helpful resources:
Cyber Vaults: How Regulated Sectors Fight Cyberattacks • Disaster Recovery Journal
Blog on the core pillars of effective cyber vaulting
Building a Sheltered Harbor compliant data vault on AWS | AWS for Industries
How AWS infrastructure can support immutability, isolation, and integrity
3. Data Integrity Scanning Is Now a Core Security Control
It’s no longer enough to wait for a recovery event to find out if your data is usable. That moment is too late. Continuous integrity scanning of both production and backup data is becoming a best practice across regulated sectors. Why? Because ransomware actors are now employing tactics to bypass existing tools and remain undetected, compromising recovery long before alarms go off.
Expert-led scanning enables organizations to identify compromised recovery points and maintain a reliable inventory of clean data, ready when needed. Without it, organizations are flying blind.
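What does such scanning actually look for? One classic (and deliberately simplified) signal is byte entropy: ransomware-encrypted files look like random data, while most working data does not. The toy scanner below is our own illustration, not Elastio’s or NetApp’s detection logic; a real product combines many stronger signals. Paths are hypothetical.

```python
"""Illustrative heuristic only: Shannon-entropy scan of backup files.
High, uniform entropy in files that are normally compressible is one
(weak) signal of ransomware encryption."""
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; approaches 8.0 for encrypted or compressed data."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def scan(backup_dir: str, threshold: float = 7.9) -> list[Path]:
    """Flag files whose leading 64 KiB looks statistically random."""
    suspicious = []
    for path in Path(backup_dir).rglob("*"):
        if path.is_file():
            with path.open("rb") as f:
                sample = f.read(65536)
            if shannon_entropy(sample) >= threshold:
                suspicious.append(path)
    return suspicious

if __name__ == "__main__":
    for p in scan("/mnt/restored-backup"):  # hypothetical mount point
        print("possible encryption:", p)
```

The obvious caveat: already-compressed or encrypted formats (zip archives, JPEGs, encrypted databases) are high-entropy by design, so a production scanner baselines per file type rather than applying one global threshold.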
Helpful resources:
Ensuring Clean Recovery Points in a World of Sophisticated & Evolving Ransomware
Why expert scans on backup data are necessary to continuously prove recoverability
ONTAP Autonomous Ransomware Protection (NetApp)
Behavior-based detection of ransomware in production data
Want to Go Deeper?
If you missed the roundtable but would like to continue the conversation, we’re happy to connect with you one-on-one.
Let’s ensure your organization is prepared to recover before an attack puts it to the test.
Contact – Elastio Software
About the Hosts
This event brought together experts across cloud, data infrastructure, and cyber recovery:
- Sheltered Harbor: The financial sector’s nonprofit standard-bearer for recovery readiness
- Elastio: The ransomware recovery assurance platform validating backup and recovery data integrity
- AWS: The cloud backbone supporting secure, scalable cyber resilience architectures
- NetApp: The intelligent data infrastructure provider with built-in ransomware protection
Recover With Certainty
See how Elastio validates every backup across clouds and platforms to recover faster, cut downtime by 90%, and achieve 25x ROI.
Related Articles

Cloud ransomware incidents rarely begin with visible disruption. More often, they unfold quietly, long before an alert is triggered or a system fails. By the time incident response teams are engaged, organizations have usually already taken decisive action. Workloads are isolated. Instances are terminated. Cloud dashboards show unusual activity. Executives, legal counsel, and communications teams are already involved. And very quickly, one question dominates every discussion. What can we restore that we actually trust?

That question exposes a critical gap in many cloud-native resilience strategies. Most organizations have backups. Many have immutable storage, cross-region replication, and locked vaults. These controls are aligned with cloud provider best practices and availability frameworks. Yet during ransomware recovery, those same organizations often cannot confidently determine which recovery point is clean.

Cloud doesn’t remove ransomware risk; it relocates it

This is not a failure of effort. It is a consequence of how cloud architectures shift risk. Cloud-native environments have dramatically improved the security posture of compute. Infrastructure is ephemeral. Servers are no longer repaired; they are replaced. Containers and instances are designed to be disposable. From a defensive standpoint, this reduces persistence at the infrastructure layer and limits traditional malware dwell time.

However, cloud migration does not remove ransomware risk. It relocates it. Persistent storage remains long-lived, highly automated, and deeply trusted. Object stores, block snapshots, backups, and replicas are designed to survive everything else. Modern ransomware campaigns increasingly target this persistence layer, not the compute that accesses it.

Attackers don’t need malware; they need credentials

Industry investigations consistently support this pattern. Mandiant, Verizon DBIR, and other threat intelligence sources report that credential compromise and identity abuse are now among the most common initial access vectors in cloud incidents. Once attackers obtain valid credentials, they can operate entirely through native cloud APIs, often without deploying custom malware or triggering endpoint-based detections.

From an operational standpoint, these actions appear legitimate. Data is written, versions are created, snapshots are taken, and replication occurs as designed. The cloud platform faithfully records and preserves state, regardless of whether that state is healthy or compromised. This is where many organizations encounter an uncomfortable reality during incident response.

Immutability is not integrity

Immutability ensures that data cannot be deleted or altered after it is written. It does not validate whether the data was already encrypted, corrupted, or poisoned at the time it was captured. Cloud-native durability and availability controls were never designed to answer the question incident responders care about most: whether stored data can be trusted for recovery.

In ransomware cases, incident response teams repeatedly observe the same failure mode. Attackers encrypt or corrupt production data, often gradually, using authorized access. Automated backup systems snapshot that corrupted state. Replication propagates it to secondary regions. Vault locks seal it permanently. The organization has not lost its backups. It has preserved the compromised data exactly as designed.

Backup isolation alone is not enough

This dynamic is particularly dangerous in cloud environments because it can occur without malware, without infrastructure compromise, and without violating immutability controls. CISA and NIST have both explicitly warned that backup isolation and retention alone are insufficient if integrity is not verified. Availability testing does not guarantee recoverability.

Replication can accelerate the blast radius

Replication further amplifies the impact. Cross-region architectures prioritize recovery point objectives and automation speed. When data changes in a primary region, those changes are immediately propagated to disaster recovery environments. If the change is ransomware-induced corruption, replication accelerates the blast radius rather than containing it. From the incident response perspective, this creates a critical bottleneck that is often misunderstood.

The hardest part of recovery is deciding what to restore

The hardest part of recovery is not rebuilding infrastructure. Cloud platforms make redeployment fast and repeatable. Entire environments can be recreated in hours. The hardest part is deciding what to restore. Without integrity validation, teams are forced into manual forensic processes under extreme pressure. Snapshots are mounted one by one. Logs are reviewed. Timelines are debated. Restore attempts become experiments. Every decision carries risk, and every delay compounds business impact. This is why ransomware recovery frequently takes days or weeks even when backups exist.

Boards don’t ask “Do we have backups?”

Boards do not ask whether backups are available. They ask which recovery point is the last known clean state. Without objective integrity assurance, that question cannot be answered deterministically. This uncertainty is not incidental. It is central to how modern ransomware creates leverage. Attackers understand that corrupting trust in recovery systems can be as effective as destroying systems outright.

What incident response teams wish you had is certainty

What incident response teams consistently wish organizations had before an incident is not more backups, but more certainty. The ability to prove, not assume, that recovery data is clean. Evidence that restoration decisions are based on validated integrity rather than best guesses made under pressure.

Integrity assurance is the missing control

This is where integrity assurance becomes the missing control in many cloud strategies. NIST CSF explicitly calls for verification of backup integrity as part of the Recover function. Yet most cloud-native architectures stop at durability and immutability. When integrity validation is in place, recovery changes fundamentally. Organizations can identify the last known clean recovery point ahead of time, as in the sketch below. Recovery decisions become faster, safer, and defensible. Executive and regulatory confidence improves because actions are supported by evidence. From an incident response standpoint, the difference is stark. One scenario is prolonged uncertainty and escalating risk. The other is controlled, confident recovery.

Resilience is proving trust, not storing data

Cloud-native architecture is powerful, but ransomware has adapted to it. In today’s threat landscape, resilience is no longer defined by whether data exists somewhere in the cloud. It is defined by whether an organization can prove that the data it restores is trustworthy. That is what incident response teams see after cloud ransomware. Not missing backups, but missing certainty.
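As a thought experiment, here is what “knowing the last clean recovery point ahead of time” can look like on AWS: record each scan verdict as a snapshot tag, and recovery-point selection becomes a query instead of a forensic investigation. The IntegrityScan tag below is our own hypothetical convention, not an AWS or Elastio feature, and the volume ID is a placeholder.

```python
"""Sketch, assuming a convention of tagging each EBS snapshot with its
scan verdict (IntegrityScan=clean|infected) as scans complete."""
import boto3

ec2 = boto3.client("ec2")

def last_known_clean_snapshot(volume_id: str) -> str | None:
    """Return the newest snapshot of a volume whose verdict tag is clean."""
    resp = ec2.describe_snapshots(
        Filters=[
            {"Name": "volume-id", "Values": [volume_id]},
            {"Name": "tag:IntegrityScan", "Values": ["clean"]},
        ],
        OwnerIds=["self"],
    )
    # Newest first; None means no validated recovery point exists.
    snaps = sorted(resp["Snapshots"], key=lambda s: s["StartTime"], reverse=True)
    return snaps[0]["SnapshotId"] if snaps else None

print(last_known_clean_snapshot("vol-0123456789abcdef0"))  # hypothetical ID
```

With verdicts recorded up front, the question boards actually ask, which recovery point is the last known clean state, has a deterministic answer.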
And in modern cloud environments, certainty is the foundation of recovery.

CMORG’s Data Vaulting Guidance: Integrity Validation Is Now a Core Requirement

In January 2025, the Cross Market Operational Resilience Group (CMORG) published Cloud-Hosted Data Vaulting: Good Practice Guidance. It is a timely and important contribution to the operational resilience of the UK financial sector. CMORG deserves recognition for treating recovery architecture as a priority, not a future initiative.

In financial services, the consequences of a cyber event extend well beyond a single institution. When critical systems are disrupted and recovery fails, the impact can cascade across customers, counterparties, and markets. The broader issue is confidence. A high-profile failure to recover can create damage that reaches far beyond the affected firm. This is why CMORG’s cross-industry collaboration matters. It reflects an understanding that resilience is a shared responsibility.

Important Theme: Integrity Validation

The guidance does a strong job outlining the principles of cloud-hosted vaulting, including isolation, immutability, access control, and key management. These are necessary design elements for protecting recovery data against compromise. But a highly significant element of the document is its emphasis on integrity validation as a core requirement. CMORG Foundation Principle #11 states:

“The data vault solution must have the ability to run analytics against its objects to check integrity and for any anomalies without executing the object. Integrity checks must be done prior to securing the data, doing it post will not ensure recovery of the original data or the service that the data supported.”

This is a critical point. Immutability can prevent changes after data is stored, but it cannot ensure that the data was clean and recoverable at the time it was vaulted. If compromised data is written into an immutable environment, it becomes a permanently protected failure point. Integrity validation must occur before data becomes the organization’s final recovery source of truth.

CMORG Directly Addresses the Risk of Vaulting Corrupted Data

CMORG reinforces this reality in Annex A, Use Case #2, which addresses data corruption events:

“For this use case when data is ‘damaged’ or has been manipulated having the data vaulted would not help, since the vaulted data would have backed up the ‘damaged’ data. This is where one would need error detection and data integrity checks either via the application or via the backup product.”

This is one of the most important observations in the document. Vaulting can provide secure retention and isolation, but it cannot determine whether the data entering the vault is trustworthy. Without integrity controls, vaulting can unintentionally preserve compromised recovery points.

The Threat Model Has Changed

The guidance aligns with what many organizations are experiencing in practice. Cyber-attacks are no longer limited to fast encryption events. Attackers increasingly focus on compromising recovery, degrading integrity over time, and targeting backups and recovery infrastructure. These attacks may involve selective encryption, gradual corruption, manipulation of critical datasets, or compromise of backup management systems prior to detonation. In many cases, the goal is to eliminate confidence in restoration and increase leverage during extortion. The longer these attacks go undetected, the more likely compromised data is replicated across snapshots, backups, vaults, and long-term retention copies. At that point, recovery becomes uncertain and time-consuming, even if recovery infrastructure remains available.

Why Integrity Scanning Must Happen Before Data Is Secured

CMORG’s point about validating integrity before data is secured is particularly important. Detection timing directly affects recovery outcomes. Early detection preserves clean recovery points and limits how many are lost to compromise. Late detection increases the likelihood that all available recovery copies contain the same corruption or compromise. This is why Elastio’s approach is focused on integrity validation of data before it becomes the foundation of recovery. Organizations need a way to identify ransomware encryption patterns and corruption within data early for recovery to be predictable and defensible.

A Meaningful Step Forward for the Industry

CMORG’s cloud-hosted data vaulting guidance represents an important milestone. It reflects a mature view of resilience that recognizes vaulting and immutability as foundational, but incomplete without integrity validation. The integrity of data must be treated as a primary control. CMORG is correct to call this out. It is one of the clearest statements published by an industry body on what effective cyber vaulting must include to support real recovery.
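In pipeline terms, Principle #11 is a gate: scan first, vault only on a clean verdict, and record a digest so any later tampering is detectable. Below is a minimal sketch under those assumptions, with scan_for_anomalies() as a hypothetical stand-in for whatever scanner you actually run.

```python
"""Sketch of a "validate, then vault" gate in the spirit of CMORG
Principle #11. scan_for_anomalies() is a hypothetical hook."""
import hashlib
import shutil
from pathlib import Path

def scan_for_anomalies(artifact: Path) -> bool:
    """Hypothetical hook: return True only if the artifact passes your
    integrity/anomaly scanner (entropy, signatures, structural checks)."""
    raise NotImplementedError("plug in your scanner here")

def vault(artifact: Path, vault_dir: Path) -> Path:
    """Validate BEFORE securing: vaulting corrupted data only preserves
    a permanently protected failure point."""
    if not scan_for_anomalies(artifact):
        raise RuntimeError(f"refusing to vault {artifact}: failed integrity scan")
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    vault_dir.mkdir(parents=True, exist_ok=True)
    dest = vault_dir / artifact.name
    shutil.copy2(artifact, dest)
    # Store the digest alongside so post-vault tampering is detectable.
    (vault_dir / (artifact.name + ".sha256")).write_text(digest + "\n")
    return dest
```

The ordering is the whole point: the digest protects the vaulted copy going forward, but only the scan before the copy prevents a compromised artifact from becoming the recovery source of truth.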

Closing the Data Integrity Control Gap

In 2025, the cybersecurity narrative shifted from protection to provable resilience. The reason? A staggering 333% surge in "Hunter-Killer" malware threats designed not just to evade your security stack, but to systematically dismantle it. For CISOs and CTOs in regulated industries, this isn’t just a technical hurdle; it is a Material Risk that traditional recovery frameworks are failing to address.

The Hunter-Killer Era: Blinding the Frontline

The Picus Red Report 2024 identified that one out of every four malware samples now includes "Hunter-Killer" functionality. These tools, like EDRKillShifter, target the kernel-level "callbacks" that EDR and antivirus rely on to monitor your environment. The result: your dashboard shows a "Green" status while the adversary is silently corrupting your production data. This creates a Recovery Blind Spot that traditional, agent-based controls cannot see.

The Material Impact: Unquantifiable Downtime

When your primary defense is blinded, the "dwell time" (the period an attacker sits in your network) balloons to a median of 11–26 days. In a regulated environment, this dwell time is a liability engine:

- The Poisoned Backup: Ransomware dwells long enough to be replicated into your "immutable" vaults.
- The Forensic Gridlock: Organizations spend an average of 24 days in downtime manually hunting for a "clean" recovery point.
- The Disclosure Clock: Under current SEC mandates, you have four days to determine the materiality of an incident. If you can’t prove your data integrity, you can’t accurately disclose your risk.

Agentless Sovereignty: The Missing Control

Elastio addresses the Data Integrity Gap by sitting outside the line of fire. By moving the validation layer from the compromised OS to the storage layer, we provide the only independent source of truth.

| The Control Gap | The Elastio Outcome |
| --- | --- |
| Agent Fragility | Agentless Sovereignty: Sitting out-of-band, Elastio is invisible to kernel-level "Hunter-Killer" malware. |
| Trust Blindness | Independent Truth: We validate data integrity directly from storage, ensuring recovery points are clean before you restore. |
| Forensic Lag | Mean Time to Clean Recovery (MTCR): Pinpoint the exact second of integrity loss to slash downtime from weeks to minutes. |

References & Sources

- GuidePoint Security GRIT 2026 Report: 58% year-over-year increase in ransomware victims.
- Picus Security Red Report 2024: 333% surge in Hunter-Killer malware targeting defensive systems.
- ESET Research, EDRKillShifter Analysis: technical deep-dive into RansomHub’s custom EDR killer and BYOVD tactics.
- Mandiant M-Trends 2025: median dwell time increases to 11 days; 57% of breaches notified by external sources.
- Pure Storage/Halcyon/RansomwareHelp: average ransomware downtime recorded at 24 days across multiple industries in 2025.
- Cybereason, True Cost to Business: 80% of organizations who pay a ransom are hit a second time.
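A closing footnote on the MTCR idea in the table above: if corruption, once introduced, persists in every later recovery point, the last clean point can be found with a binary search, using O(log n) integrity scans instead of scanning all n points. The sketch below is a generic illustration of that monotone-predicate trick, not Elastio’s implementation; is_clean() is a hypothetical hook for an out-of-band scan.

```python
def is_clean(recovery_point_id: str) -> bool:
    """Hypothetical hook: run an out-of-band integrity scan on one
    recovery point and return its verdict."""
    raise NotImplementedError("plug in your scanner here")

def last_clean(points: list[str]) -> str | None:
    """Binary search for the newest clean recovery point.

    Assumes `points` is ordered oldest -> newest and that corruption,
    once present, persists in all later points (monotone predicate).
    """
    lo, hi, best = 0, len(points) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_clean(points[mid]):
            best = points[mid]  # clean here: answer is this point or later
            lo = mid + 1
        else:
            hi = mid - 1        # dirty here: look at earlier points
    return best
```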