Ransomware

Rescue Your Application When It Matters Most

Author

Naj Husain

Dr. Srinidhi Varadarajan, Chief Scientist, Elastio Software

Since the advent of CryptoLocker in 2013, ransomware has become a major drain on businesses and, for many, a threat to their very survival. With 56% of businesses hit in the last 12 months and 42% of those losing data, early detection and prevention of lateral spread is one of the most potent defenses against ransomware.

Ransomware enters an infrastructure as part of a larger payload through several entry vectors. One common route in the cloud is through security misconfigurations and unpatched software vulnerabilities in the application chain, which allow an attacker to drop a payload on the victim system. Another is the compromise of security tokens from IAM roles. More targeted attacks combine social engineering with spear phishing against identified targets, as in the recent MGM and Caesars attacks. The last few years have also seen the rise of human teams coordinating attacks on a single target to compromise as many lateral systems as possible. The problem is prevalent enough that it is a matter of when, not if: the average payout in the last twelve months was $2M per incident.

[Figure: Average Ransom Demand by Industry]

Ransomware is typically the last part of a larger malware payload, which includes backdoors and a communication path to command and control servers. The larger malware payload dwells for a period of time, during which it spreads laterally within the infrastructure to expand its foothold and leave further backdoors behind. In the last stage – the attack phase – a ransomware package is detonated, encrypting user data across multiple systems.

Dwell times have fallen considerably over the last year from over 72 days to 5 days for aggressive strains, which reflects the confluence of two factors. First, better cyber defenses have increased detection risk. Second, targeted attack teams are becoming more common, creating a larger initial beachhead and thus reducing the need to dwell longer than necessary. The loss of five days of data is an extinction-level event for many businesses.

Backups are commonly used to recover data encrypted during a ransomware attack. However, if the malware payload is not identified and neutered, restoring from infected backups simply reintroduces the backdoors. This leads to persistent infections and a wide-open exfiltration pathway within the infrastructure, enabling repeated ransom demands. A worse outcome occurs when malware has lingered long enough that backup retention policies have aged out the older backups, leaving no clean, malware-free backup at all.

Clearly, backups alone are not enough – early detection of malware is needed to ensure that backups are clean and healthy. While air-gapped systems provide an additional layer of security, they are only as good as the data entering them – without early detection, malware can slip through to air-gapped backups as well.

Some backup systems deploy an anomaly detection engine to provide early warning of a ransomware attack. These engines look for signals in a snapshot, such as its change rate, to flag suspicious behavior. More sophisticated engines look inside a backup to measure the entropy (degree of randomness) of files. Encrypted data is very close to maximum entropy and, intuitively, should stand out. It doesn't: Microsoft PowerPoint files have entropy similar to encrypted files, as do many other common file formats. The result is a large rate of false positives that must be sifted through even to know whether an attack is in progress. Worse yet is alert fatigue, and the subsequent alert suppression that misses critical patterns. As every security team can attest, what is needed is actionable intelligence, not another alert.
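
The entropy problem described above is easy to reproduce. The sketch below (illustrative only, not any vendor's engine) computes Shannon entropy per file and shows why the signal misfires: modern Office formats are ZIP/DEFLATE containers, so their bytes score nearly as high as ciphertext.

```python
import math
import os
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 is the maximum (uniform random)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Ciphertext is statistically indistinguishable from random bytes.
ciphertext = os.urandom(64 * 1024)

# Stand-in for a .pptx/.docx payload: Office files are ZIP containers of
# DEFLATE-compressed streams, which also sit close to maximum entropy.
slides = b"".join(f"slide {i}: revenue {i * i}\n".encode() for i in range(20000))
document = zlib.compress(slides, 9)

print(f"ciphertext:     {shannon_entropy(ciphertext):.2f} bits/byte")
print(f"compressed doc: {shannon_entropy(document):.2f} bits/byte")
```

Both values land near 8 bits per byte, which is why a naive "high entropy means encrypted" rule floods analysts with false positives.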

And this is just the tip of the iceberg: even sophisticated anomaly detection techniques are commonly defeated. Malware such as LockFile, Rook, and BianLian encrypt only subsets of data so that the overall entropy change remains small. Others, such as Xorist, AlphaLocker, and Corona, do not change any file metadata, so signals such as a file's last-modified time remain untouched and never trigger alerts. TimeTime encrypts files slowly over time to stay below activity-threshold detectors. Alcatraz Locker uses simple file encoding, and Clop, Vaca, and several others skip encrypting the file header, requiring deep file inspection to identify their impact. The list of evasion techniques is long and continues to evolve.
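
To see why partial-encryption strains defeat whole-file entropy checks, consider this toy sketch of LockFile-style intermittent encryption (real strains choose their blocks more carefully): scrambling only every 16th block leaves the file unusable, yet the whole-file entropy barely moves.

```python
import math
import os
from collections import Counter

BLOCK = 4096

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def intermittent_encrypt(data: bytes, every_nth: int = 16) -> bytes:
    """Replace every 16th 4 KiB block with random bytes (a stand-in for
    ciphertext). Scrambling ~6% of a file destroys most structured
    formats yet barely shifts whole-file statistics."""
    out = bytearray(data)
    for i in range(0, len(data), BLOCK * every_nth):
        out[i:i + BLOCK] = os.urandom(min(BLOCK, len(data) - i))
    return bytes(out)

# A low-entropy "plaintext" file of ~1 MiB of structured records.
plain = b"".join(f"record {i:08d}, status=OK\n".encode() for i in range(40000))
damaged = intermittent_encrypt(plain)

frac = sum(a != b for a, b in zip(plain, damaged)) / len(plain)
print(f"bytes changed: {frac:.1%}")
print(f"entropy before: {entropy(plain):.2f}, after: {entropy(damaged):.2f}")
```

The damaged file stays far below the near-8.0 entropy a detector would look for, even though roughly one byte in sixteen has been destroyed.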

Elastio addresses this problem through a two-pronged strategy: early detection in the spread phase to catch malware before it detonates, and post-attack recovery that quickly identifies affected data assets along with their last known clean copies. For early detection, Elastio's data integrity engine scans every backup for known ransomware and malware strains against a continuously updated database. This provides actionable threat intelligence that identifies the exact malware, the set of affected systems, and the impact.

The post-attack recovery phase is built on the inevitable conclusion that prevention techniques, however good, only have to fail once. Some fraction of ransomware will detonate, requiring post-attack recovery to get back to a clean state. Elastio uses an ensemble of behavioral analysis, deep file inspection, and deterministic models from our security lab to detect ransomware. Behavioral analysis identifies malicious files based on usage patterns, characteristics, and known indicators of malware. It uses a statistical model that groups patterns of behavior in a high-dimensional space to determine whether they are ransomware. This approach relies on the fact that all ransomware exhibits certain common characteristics in higher-order space that can be detected via statistical analysis of complex patterns. The model is very good at capturing large-scale behavior while retaining file-level granularity. Behavioral analysis also includes over-time analysis, in which a timeline of backups is checked for the stability of its behavior at the granularity of each file. This technique is particularly important in de-obfuscating malware that doesn't change metadata. Behavioral analysis can detect both known and unknown ransomware strains.
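
Elastio's actual models are proprietary, but the over-time idea can be illustrated with a toy example: track a per-file statistic (here, byte entropy) across a timeline of backups and flag files whose behavior suddenly destabilizes, even when names and metadata are untouched.

```python
import math
import os
from collections import Counter

def entropy(data: bytes) -> float:
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values()) if n else 0.0

def unstable_files(timeline, jump=2.0):
    """Flag files whose entropy shifts by more than `jump` bits/byte between
    consecutive backups, even though the name (and, in real attacks, the
    metadata) is unchanged. A crude stand-in for per-file over-time analysis."""
    flagged = set()
    for prev, curr in zip(timeline, timeline[1:]):
        for name in prev.keys() & curr.keys():
            if abs(entropy(curr[name]) - entropy(prev[name])) > jump:
                flagged.add(name)
    return flagged

# Three nightly backups; report.txt is silently replaced with
# ciphertext-like bytes in the third one.
backups = [
    {"report.txt": b"hello world " * 500, "app.log": b"INFO ok\n" * 300},
    {"report.txt": b"hello world " * 510, "app.log": b"INFO ok\n" * 330},
    {"report.txt": os.urandom(6000),      "app.log": b"INFO ok\n" * 360},
]
print(unstable_files(backups))  # report.txt is flagged; app.log is stable
```

Because the signal is the stability of each file's own history rather than any single snapshot, normal growth (the log file getting longer) passes while a sudden statistical break stands out.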

Deep file-level inspection performs a more thorough analysis of individual files to detect signs of ransomware. This approach examines the content and structure of files to identify any malicious activity or modifications that indicate ransomware encryption. By inspecting the actual data within files, it provides a higher level of accuracy in detecting ransomware attacks.
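
As a concrete illustration of why deep inspection matters (this is a sketch, not Elastio's implementation), consider a gzip file whose header is left intact while the body is encrypted, in the style of the header-skipping strains above: a shallow signature check still passes, but actually parsing the stream exposes the damage.

```python
import gzip
import os
import zlib

def encrypt_body(data: bytes, keep: int = 10) -> bytes:
    """Simulate header-skipping encryption: leave the first `keep` bytes
    (the 10-byte gzip header, including the 1f 8b magic) untouched and
    XOR-scramble everything after them."""
    key = os.urandom(1)[0] | 0x01  # any non-zero single-byte key
    return data[:keep] + bytes(b ^ key for b in data[keep:])

original = gzip.compress(b"customer ledger, January\n" * 1000)
tampered = encrypt_body(original)

# Shallow signature check: the magic bytes still say "gzip".
assert tampered[:2] == b"\x1f\x8b"

# Deep inspection: actually parsing the stream reveals the damage.
try:
    gzip.decompress(tampered)
    print("stream parsed cleanly")
except (OSError, EOFError, zlib.error):
    print("corrupt stream despite a valid header")
```

A scanner that stops at magic bytes or file extensions would call this file healthy; only decoding the content shows it can no longer be recovered.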

Over the last three years, Elastio's security lab has analyzed over 1,900 ransomware families and their variants, representing everything seen publicly since 2014. Some, such as Clop, have variants that have evolved far enough to be unrecognizable relative to their ancestors, with entirely different behaviors. Each malware payload carrying ransomware was disassembled, cut off from its command and control servers, detonated, and analyzed to produce behavior patterns and indicators of penetration. This enables a comprehensive evaluation of each ransomware threat and represents one of the most potent tools for post-attack application recovery. Over the last six months, Elastio's security team has also been analyzing malware not yet released in the wild, and the models are now refined enough to accurately detect the detonation of the vast majority of hitherto unseen ransomware.

Elastio operates in one of two modes. In scan-only mode, it mounts a snapshot or backup from your existing backup system and scans it for malware and ransomware, always tracking the last known clean backup to ensure recovery in case of an attack. In the second mode, Elastio takes a snapshot and ingests it into a vault hosted within the customer VPC, where it is deduplicated, compressed, and encrypted. No data ever leaves the customer account, and for cost savings the vault is backed by S3. The vault is immutable, using a WORM (write once, read many) model with no delete primitive. In this mode, Elastio additionally provides guaranteed recovery, since it holds the data and can ensure the provenance of the backup chain.

In both modes, the Elastio pipeline runs through three stages. The first stage is a health-check scan that ensures the base we are starting from has a clean bill of health and can be trusted. Then, as each backup is taken, the malware detection engine scans it for early detection of unexploded malware in its spread phase; this stage produces actionable threat intelligence, including exact threat identification and its behavior. In the post-attack stage, Elastio immediately identifies ransomware detonation and, more specifically, the last clean copies of affected data, which can be restored to a sandbox for analysis. Post-attack recovery is complemented by direct support from Elastio's security lab to unhook malware and rapidly return the infrastructure to a clean state.

Our goal is simple: recover your applications when you need them most.

About Elastio

Elastio is the leader in Ransomware Recovery Assurance, helping enterprises prove their backups and cloud storage are always safe to recover. Our platform continuously validates backup and cloud storage integrity, detects advanced ransomware encryption that evades perimeter defenses, and guarantees a provable clean recovery point within your SLA. From AWS-native workloads to enterprise backup platforms, Elastio removes attackers’ leverage by making recovery a monitored security control.

Recover With Certainty

See how Elastio validates every backup across clouds and platforms to recover faster, cut downtime by 90%, and achieve 25x ROI.

Related Articles
Elastio Software,  Ransomware
March 12, 2026

KEY STATISTICS

- <2.5%: MOVEit victims who paid ransom
- ~25%: Accellion victims who paid (2021)
- ~0%: paid in Cleo & Oracle EBS breaches

For a few years, ransomware groups seemed to have found a smarter play: steal data, skip the encryption, and watch the ransom payments roll in. It worked brilliantly — until it didn’t. Now, with extortion-only economics in freefall, threat actors are returning to the double-threat model that made them so feared in the first place.

How the Shift Happened

The data-exfiltration-only playbook was popularized by Cl0p, a group that turned zero-day exploitation into an assembly line. The formula was elegant in its simplicity: find a critical vulnerability in a widely-used enterprise file transfer or storage product, exploit it at scale before anyone could patch, siphon data from as many victims as possible, and demand silence money.

In 2021, this approach paid off spectacularly. During the Accellion campaign, Cl0p breached dozens of organizations and roughly a quarter of them paid up. The group repeated the trick with GoAnywhere MFT, where about one in five victims settled. These weren’t small scores — the group likely cleared tens of millions of dollars without ever deploying a single encryption payload.

Other groups took notice. Why bother with the complexity of encryption, the risk of detection during file-locking operations, and the messy negotiation over decryption keys? Just steal the data and threaten to publish it.

“The bullet points on the ‘pro’ side of the white board are getting increasingly scarce, while the cons side is getting crowded.” — Coveware, Q4 2025 Ransomware Trends Report

When the Money Dried Up

The MOVEit campaign — Cl0p’s largest and most audacious operation — was also the beginning of the end for the extortion-only model. The attack hit hundreds of organizations across government, finance, and healthcare. But when the ransom demands came, victims largely refused to pay. Less than 2.5% complied. In the subsequent Cleo and Oracle E-Business Suite campaigns, the rate collapsed further — approaching zero.

The reason isn’t hard to understand. Enterprises have grown more sophisticated in assessing what a ransom payment actually buys. When encryption is involved, paying at least restores access to locked systems. But paying to suppress leaked data offers no such guarantee. The attackers retain the data regardless. They might sell it, recycle it in future attacks, or simply fail to honor any agreement — and there’s no enforcement mechanism for victims to lean on. The Shiny Hunters extortion group experienced the same rude awakening, according to Coveware, after attempting to replicate Cl0p’s approach. The math simply stopped working.

Most Active Groups in Q4 2025

- Akira: ~14% of activity
- Qilin: ~13% of activity
- Lone Wolf: ~12% of activity

Who’s Getting Hit

Ransomware attacks in Q4 2025 were not evenly distributed. Professional services firms bore the heaviest load at nearly 19% of all attacks. Healthcare came in second at over 15%, a perennial target due to its operational urgency and often strained security budgets. Technology, software, and consumer services rounded out the most targeted sectors.

Share of attacks by sector:

- Professional Services: 18.92%
- Healthcare: 15.32%
- Technology Hardware: 9.91%
- Consumer Services: 9.01%
- Software Services: 7.21%

What the Pivot Back Means for Defenders

The return to encryption-plus-exfiltration attacks is, in a sense, good news: organizations now have more warning indicators to look for. Encrypting files across a network is a noisy operation. Good endpoint detection and response (EDR) solutions, behavioral analytics, and network monitoring give defenders a fighting chance to catch attackers mid-operation. But the combined threat model is also more consequential when it succeeds.

Organizations must now contend simultaneously with system outages — creating immediate pressure to pay — and with the ongoing risk that stolen data surfaces on dark web leak sites regardless of whether a ransom is paid. That dual leverage was always ransomware’s most potent weapon, and it’s back.

Coveware’s analysis offers a pointed observation: every refused ransom payment chips away at the economics that sustain these operations. Improved prevention, tighter incident response, and the maturity to resist extortion collectively make ransomware less profitable — and less frequent.

KEY TAKEAWAYS FOR SECURITY TEAMS

- Extortion-only attacks are yielding diminishing returns — expect more groups to reintroduce encryption for additional leverage.
- Paying ransom to suppress data release offers no reliable guarantee; enterprises are right to weigh this carefully.
- Professional services and healthcare remain the top ransomware targets by volume in Q4 2025.
- Behavioral detection and EDR are more critical than ever as encryption-based attacks return to prominence.
- Disciplined incident response — including the decision whether to pay — directly erodes attacker economics across the ecosystem.

The takeaway isn’t that ransomware is getting easier to deal with. It’s that the cat-and-mouse dynamic is accelerating. Defenders adapted to double extortion; attackers countered with data-only theft; now they’re reverting as that tactic loses teeth. Understanding this cycle — and staying a step ahead — is the work of modern security operations.

Adapted from SecurityWeek / Coveware Q4 2025 Ransomware Trends Report — March 2026

Elastio Software,  Ransomware
February 16, 2026

Cloud ransomware incidents rarely begin with visible disruption. More often, they unfold quietly, long before an alert is triggered or a system fails. By the time incident response teams are engaged, organizations have usually already taken decisive action. Workloads are isolated. Instances are terminated. Cloud dashboards show unusual activity. Executives, legal counsel, and communications teams are already involved. And very quickly, one question dominates every discussion: what can we restore that we actually trust?

That question exposes a critical gap in many cloud-native resilience strategies. Most organizations have backups. Many have immutable storage, cross-region replication, and locked vaults. These controls are aligned with cloud provider best practices and availability frameworks. Yet during ransomware recovery, those same organizations often cannot confidently determine which recovery point is clean.

Cloud doesn’t remove ransomware risk — it relocates it

This is not a failure of effort. It is a consequence of how cloud architectures shift risk. Cloud-native environments have dramatically improved the security posture of compute. Infrastructure is ephemeral. Servers are no longer repaired; they are replaced. Containers and instances are designed to be disposable. From a defensive standpoint, this reduces persistence at the infrastructure layer and limits traditional malware dwell time.

However, cloud migration does not remove ransomware risk. It relocates it. Persistent storage remains long-lived, highly automated, and deeply trusted. Object stores, block snapshots, backups, and replicas are designed to survive everything else. Modern ransomware campaigns increasingly target this persistence layer, not the compute that accesses it.

Attackers don’t need malware — they need credentials

Industry investigations consistently support this pattern. Mandiant, Verizon DBIR, and other threat intelligence sources report that credential compromise and identity abuse are now among the most common initial access vectors in cloud incidents. Once attackers obtain valid credentials, they can operate entirely through native cloud APIs, often without deploying custom malware or triggering endpoint-based detections.

From an operational standpoint, these actions appear legitimate. Data is written, versions are created, snapshots are taken, and replication occurs as designed. The cloud platform faithfully records and preserves state, regardless of whether that state is healthy or compromised. This is where many organizations encounter an uncomfortable reality during incident response.

Immutability is not integrity

Immutability ensures that data cannot be deleted or altered after it is written. It does not validate whether the data was already encrypted, corrupted, or poisoned at the time it was captured. Cloud-native durability and availability controls were never designed to answer the question incident responders care about most: whether stored data can be trusted for recovery.

In ransomware cases, incident response teams repeatedly observe the same failure mode. Attackers encrypt or corrupt production data, often gradually, using authorized access. Automated backup systems snapshot that corrupted state. Replication propagates it to secondary regions. Vault locks seal it permanently. The organization has not lost its backups. It has preserved the compromised data exactly as designed.

Backup isolation alone is not enough

This dynamic is particularly dangerous in cloud environments because it can occur without malware, without infrastructure compromise, and without violating immutability controls. CISA and NIST have both explicitly warned that backup isolation and retention alone are insufficient if integrity is not verified. Availability testing does not guarantee recoverability.

Replication can accelerate the blast radius

Replication further amplifies the impact. Cross-region architectures prioritize recovery point objectives and automation speed. When data changes in a primary region, those changes are immediately propagated to disaster recovery environments. If the change is ransomware-induced corruption, replication accelerates the blast radius rather than containing it. From the incident response perspective, this creates a critical bottleneck that is often misunderstood.

The hardest part of recovery is deciding what to restore

The hardest part of recovery is not rebuilding infrastructure. Cloud platforms make redeployment fast and repeatable. Entire environments can be recreated in hours. The hardest part is deciding what to restore. Without integrity validation, teams are forced into manual forensic processes under extreme pressure. Snapshots are mounted one by one. Logs are reviewed. Timelines are debated. Restore attempts become experiments. Every decision carries risk, and every delay compounds business impact. This is why ransomware recovery frequently takes days or weeks even when backups exist.

Boards don’t ask “Do we have backups?”

Boards do not ask whether backups are available. They ask which recovery point is the last known clean state. Without objective integrity assurance, that question cannot be answered deterministically. This uncertainty is not incidental. It is central to how modern ransomware creates leverage. Attackers understand that corrupting trust in recovery systems can be as effective as destroying systems outright.

What incident response teams wish you had is certainty

What incident response teams consistently wish organizations had before an incident is not more backups, but more certainty. The ability to prove, not assume, that recovery data is clean. Evidence that restoration decisions are based on validated integrity rather than best guesses made under pressure.

Integrity assurance is the missing control

This is where integrity assurance becomes the missing control in many cloud strategies. NIST CSF explicitly calls for verification of backup integrity as part of the Recover function. Yet most cloud-native architectures stop at durability and immutability. When integrity validation is in place, recovery changes fundamentally. Organizations can identify the last known clean recovery point ahead of time. Recovery decisions become faster, safer, and defensible. Executive and regulatory confidence improves because actions are supported by evidence. From an incident response standpoint, the difference is stark. One scenario is prolonged uncertainty and escalating risk. The other is controlled, confident recovery.

Resilience is proving trust, not storing data

Cloud-native architecture is powerful, but ransomware has adapted to it. In today’s threat landscape, resilience is no longer defined by whether data exists somewhere in the cloud. It is defined by whether an organization can prove that the data it restores is trustworthy. That is what incident response teams see after cloud ransomware. Not missing backups, but missing certainty.

Certainty is the foundation of recovery

And in modern cloud environments, certainty is the foundation of recovery.

Ransomware,  provable recovery
February 8, 2026

CMORG’s Data Vaulting Guidance: Integrity Validation Is Now a Core Requirement

In January 2025, the Cross Market Operational Resilience Group (CMORG) published Cloud-Hosted Data Vaulting: Good Practice Guidance. It is a timely and important contribution to the operational resilience of the UK financial sector. CMORG deserves recognition for treating recovery architecture as a priority, not a future initiative.

In financial services, the consequences of a cyber event extend well beyond a single institution. When critical systems are disrupted and recovery fails, the impact can cascade across customers, counterparties, and markets. The broader issue is confidence. A high-profile failure to recover can create damage that reaches far beyond the affected firm. This is why CMORG’s cross-industry collaboration matters. It reflects an understanding that resilience is a shared responsibility.

Important Theme: Integrity Validation

The guidance does a strong job outlining the principles of cloud-hosted vaulting, including isolation, immutability, access control, and key management. These are necessary design elements for protecting recovery data against compromise. But a highly significant element of the document is its emphasis on integrity validation as a core requirement. CMORG Foundation Principle #11 states:

“The data vault solution must have the ability to run analytics against its objects to check integrity and for any anomalies without executing the object. Integrity checks must be done prior to securing the data, doing it post will not ensure recovery of the original data or the service that the data supported.”

This is a critical point. Immutability can prevent changes after data is stored, but it cannot ensure that the data was clean and recoverable at the time it was vaulted. If compromised data is written into an immutable environment, it becomes a permanently protected failure point. Integrity validation must occur before data becomes the organization’s final recovery source of truth.

CMORG Directly Addresses the Risk of Vaulting Corrupted Data

CMORG reinforces this reality in Annex A, Use Case #2, which addresses data corruption events:

“For this use case when data is ‘damaged’ or has been manipulated having the data vaulted would not help, since the vaulted data would have backed up the ‘damaged’ data. This is where one would need error detection and data integrity checks either via the application or via the backup product.”

This is one of the most important observations in the document. Vaulting can provide secure retention and isolation, but it cannot determine whether the data entering the vault is trustworthy. Without integrity controls, vaulting can unintentionally preserve compromised recovery points.

The Threat Model Has Changed

The guidance aligns with what many organizations are experiencing in practice. Cyber-attacks are no longer limited to fast encryption events. Attackers increasingly focus on compromising recovery, degrading integrity over time, and targeting backups and recovery infrastructure. These attacks may involve selective encryption, gradual corruption, manipulation of critical datasets, or compromise of backup management systems prior to detonation. In many cases, the goal is to eliminate confidence in restoration and increase leverage during extortion. The longer these attacks go undetected, the more likely compromised data is replicated across snapshots, backups, vaults, and long-term retention copies. At that point, recovery becomes uncertain and time-consuming, even if recovery infrastructure remains available.

Why Integrity Scanning Must Happen Before Data Is Secured

CMORG’s point about validating integrity before data is secured is particularly important. Detection timing directly affects recovery outcomes. Early detection preserves clean recovery points and reduces the scope of failed recovery points. Late detection increases the likelihood that all available recovery copies contain the same corruption or compromise. This is why Elastio’s approach is focused on integrity validation of data before it becomes the foundation of recovery. Organizations need a way to identify ransomware encryption patterns and corruption within data early for recovery to be predictable and defensible.

A Meaningful Step Forward for the Industry

CMORG’s cloud-hosted data vaulting guidance represents an important milestone. It reflects a mature view of resilience that recognizes vaulting and immutability as foundational, but incomplete without integrity validation. The integrity of data must be treated as a primary control. CMORG is correct to call this out. It is one of the clearest statements published by an industry body on what effective cyber vaulting must include to support real recovery.