
Unmasking the Invisible: Defeating EDR-Evasive Attacks

Hunting and Defeating EDR-Evading Threats and Machine-Identity Attacks

As enterprises accelerate cloud transformation, containerization, AI adoption, microservices, and automation, a subtle yet profound shift is reshaping the cyber threat landscape. Traditional endpoint-based detection approaches are no longer sufficient. Attackers are increasingly evading EDR, while simultaneously exploiting a rapidly expanding universe of machine identities such as service accounts, certificates, API keys, and ephemeral workload tokens. This creates a new, invisible attack surface that is often unmonitored, ungoverned, and misunderstood.

To defend effectively, organizations must evolve. The new model brings together endpoint awareness, identity intelligence, and data-layer resilience to expose threats that would otherwise remain invisible.

The EDR Blind Spot Is Widening

Endpoint Detection and Response has been the backbone of enterprise defense. But adversaries have learned to systematically bypass it through techniques that interfere with telemetry, suppress alerts, operate from memory, or shift their activity into systems or layers where EDR agents cannot run. Some threat groups have deployed tooling that disables endpoint monitoring components entirely, allowing operations to continue with little or no visibility for defenders.

At the same time, many critical infrastructure components do not support EDR at all. Hypervisors, storage appliances, virtual machine management systems, and specialized cloud services often sit outside traditional endpoint protections. Attackers increasingly target these layers because activity there blends in with normal operations and rarely triggers alarms.

As a result, relying solely on endpoint-centric detection creates blind spots that grow wider as modern infrastructure becomes more distributed.

The Explosion of Machine Identities and the Risks They Introduce

While EDR evasion grows more sophisticated, another trend has emerged in parallel: the exponential rise of machine identities. These are non-human actors created by automation pipelines, containers, microservices, serverless functions, AI agents, DevOps tooling, and cloud services.

Machine identities now outnumber human identities in most cloud-forward enterprises by enormous margins. They often carry privileged permissions, access sensitive data paths, or control critical infrastructure functions.

Unlike human accounts, these identities rarely follow standardized onboarding, governance, audit, or lifecycle processes. Many are short-lived, created and destroyed automatically, leaving gaps in visibility. Others live far longer than intended because no one realizes they still exist.
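
The lifecycle gap described above can be made concrete with automation. The sketch below is illustrative only: the `MachineIdentity` record and its fields are hypothetical stand-ins for whatever inventory an organization maintains, not a real API. It flags identities that have outlived their intended lifespan.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MachineIdentity:
    name: str
    created_at: datetime
    intended_ttl: timedelta  # how long the identity was meant to live

def find_stale_identities(inventory, now=None, grace=timedelta(days=7)):
    """Return identities that have outlived their intended lifespan
    (plus a grace period), i.e., candidates for decommissioning."""
    now = now or datetime.now(timezone.utc)
    return [
        ident for ident in inventory
        if now - ident.created_at > ident.intended_ttl + grace
    ]
```

Run periodically against a complete inventory, a check like this surfaces the long-forgotten identities that "live far longer than intended."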

Attackers increasingly target these identities because compromising one can grant immediate and legitimate access to high-value systems or data. The activity of a hijacked machine identity blends in naturally with expected automation patterns, making detection difficult. In many cases, the identity itself becomes the persistence mechanism.

Identity Becomes the New Perimeter

These dynamics undermine a core assumption behind many security architectures: that identity governance is equivalent to human access control. In cloud-native enterprises, identity is now as much about workloads as it is about people. When machine identities are not continuously monitored, governed, and validated, they become powerful tools for stealthy lateral movement or data manipulation.

This means identity has truly become the perimeter. But it is a perimeter that cannot be secured solely with human-centric tools.

The Data Layer Is Where Invisible Threats Finally Become Visible

Machine identities interact with data continuously. They create snapshots, move objects across storage tiers, generate logs, trigger analytics pipelines, replicate datasets, and run unattended processes. If one of these identities is compromised, the first signs of malicious activity often appear in the data layer itself.

Unauthorized reads, unexpected modifications, corruption of snapshots, tampered metadata, irregular replication events, or the introduction of malicious content are often the earliest and most reliable indicators of attack. By the time endpoint or identity systems raise alerts, the attacker may have already altered data across multiple systems.
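
One simple way to surface unexpected modifications at the data layer is to compare content digests across successive snapshots and alert when the change rate is abnormal. This is a minimal sketch, not a production scanner: it assumes snapshots can be viewed as name-to-bytes mappings, and the alert threshold is an arbitrary example value.

```python
import hashlib

def digest(blocks):
    """SHA-256 digest per named block (e.g., per file or storage extent)."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in blocks.items()}

def changed_fraction(prev, curr):
    """Fraction of blocks present in both snapshots whose digest changed."""
    common = prev.keys() & curr.keys()
    if not common:
        return 1.0
    changed = sum(1 for k in common if prev[k] != curr[k])
    return changed / len(common)

def drift_alert(prev, curr, threshold=0.3):
    """Flag snapshots whose change rate exceeds the expected baseline."""
    return changed_fraction(prev, curr) > threshold
```

A mass-encryption event rewrites far more blocks between snapshots than normal operations do, which is exactly the kind of leading indicator the paragraph above describes.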

This is why modern cyber resilience depends on the ability to continuously verify the integrity, security, and recoverability of data itself.

A Modern Defense Model

Addressing these emerging threats requires a multi-layered approach that blends identity, workload, and data-centric controls.

  1. All machine identities must be governed with the same rigor as human identities. This means complete inventory, lifecycle management, least-privilege enforcement, short-lived credential use, and continuous monitoring of identity behavior.
  2. Detection must expand beyond endpoints. Organizations need visibility into identity issuance, API usage, workload behavior, cloud control-plane activity, and infrastructure components that do not support traditional EDR.
  3. Data integrity must be continuously validated. Snapshots, backups, object data, and replicated datasets must be automatically and regularly inspected. Any unauthorized change or anomaly should be treated as a leading indicator of potential compromise.
  4. Zero Trust principles must be deeply embedded in the machine and data layers. Verification is no longer only about authenticating a user. It is about verifying the legitimacy of every process, every identity, and every piece of data flowing through the enterprise.
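
Least-privilege enforcement, in particular, lends itself to simple tooling. The sketch below is a hypothetical illustration: it assumes an inventory that records both the permissions granted to each machine identity and the permissions it has actually exercised (as a cloud audit trail might reveal), and reports the unused surplus.

```python
def unused_permissions(granted, observed):
    """Permissions an identity holds but has never exercised."""
    return set(granted) - set(observed)

def least_privilege_report(identities):
    """Map each over-privileged identity to its unused permissions.

    `identities` is a hypothetical inventory shape:
    {name: {"granted": [...], "used": [...]}}
    """
    return {
        name: sorted(unused_permissions(info["granted"], info["used"]))
        for name, info in identities.items()
        if unused_permissions(info["granted"], info["used"])
    }
```

Identities that appear in the report are candidates for permission trimming, shrinking the blast radius if they are ever hijacked.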

Why This Approach Is Strategic

Adversaries are adapting quickly. They no longer need to compromise a human identity or bypass every endpoint. They can operate quietly within automation systems, exploit permissions given to machine identities, or target data itself as the first point of manipulation.

By addressing machine identity governance and data integrity together, organizations reduce the inherent weaknesses of endpoint-only detection. They gain a defensive architecture that detects threats earlier, responds more effectively, and ensures business continuity even under active attack.

The combination of EDR evasion and machine-identity exploitation represents one of the most significant emerging risks to modern enterprises. Attackers are learning to operate invisibly, bypassing traditional controls and embedding themselves in the automation and data layers where detection is weakest.

To win in this environment, security teams must shift their mindset. They must unmask the invisible by looking where attackers now hide: in identities, in the control plane, and in the data itself. They must verify continuously, trust nothing implicitly, and safeguard the integrity of the information the business depends on.

This is how modern organizations stay resilient. It is how they transform uncertainty into strength. And it is how they defeat adversaries who no longer need to be seen to be dangerous.

This is the gap Elastio is built to close. Schedule a review.

3 Key Takeaways

  1. EDR alone leaves growing visibility gaps
  2. Machine identities are the new attack surface
  3. Data integrity becomes the ultimate detection layer

Recover With Certainty

See how Elastio validates every backup across clouds and platforms to recover faster, cut downtime by 90%, and achieve 25x ROI.

Related Articles
Elastio Software,  Ransomware
March 12, 2026

KEY STATISTICS

  - <2.5%: MOVEit victims who paid ransom
  - ~25%: Accellion victims who paid (2021)
  - ~0%: Paid in Cleo & Oracle EBS breaches

For a few years, ransomware groups seemed to have found a smarter play: steal data, skip the encryption, and watch the ransom payments roll in. It worked brilliantly — until it didn’t. Now, with extortion-only economics in freefall, threat actors are returning to the double-threat model that made them so feared in the first place.

How the Shift Happened

The data-exfiltration-only playbook was popularized by Cl0p, a group that turned zero-day exploitation into an assembly line. The formula was elegant in its simplicity: find a critical vulnerability in a widely-used enterprise file transfer or storage product, exploit it at scale before anyone could patch, siphon data from as many victims as possible, and demand silence money.

In 2021, this approach paid off spectacularly. During the Accellion campaign, Cl0p breached dozens of organizations and roughly a quarter of them paid up. The group repeated the trick with GoAnywhere MFT, where about one in five victims settled. These weren’t small scores — the group likely cleared tens of millions of dollars without ever deploying a single encryption payload.

Other groups took notice. Why bother with the complexity of encryption, the risk of detection during file-locking operations, and the messy negotiation over decryption keys? Just steal the data and threaten to publish it.

“The bullet points on the ‘pro’ side of the white board are getting increasingly scarce, while the cons side is getting crowded.” — Coveware, Q4 2025 Ransomware Trends Report

When the Money Dried Up

The MOVEit campaign — Cl0p’s largest and most audacious operation — was also the beginning of the end for the extortion-only model. The attack hit hundreds of organizations across government, finance, and healthcare. But when the ransom demands came, victims largely refused to pay. Less than 2.5% complied. In the subsequent Cleo and Oracle E-Business Suite campaigns, the rate collapsed further — approaching zero.

The reason isn’t hard to understand. Enterprises have grown more sophisticated in assessing what a ransom payment actually buys. When encryption is involved, paying at least restores access to locked systems. But paying to suppress leaked data offers no such guarantee. The attackers retain the data regardless. They might sell it, recycle it in future attacks, or simply fail to honor any agreement — and there’s no enforcement mechanism for victims to lean on.

The Shiny Hunters extortion group experienced the same rude awakening, according to Coveware, after attempting to replicate Cl0p’s approach. The math simply stopped working.

Most Active Groups in Q4 2025

  - Akira: ~14% of activity
  - Qilin: ~13% of activity
  - Lone Wolf: ~12% of activity

Who’s Getting Hit

Ransomware attacks in Q4 2025 were not evenly distributed. Professional services firms bore the heaviest load at nearly 19% of all attacks. Healthcare came in second at over 15%, a perennial target due to its operational urgency and often strained security budgets. Technology, software, and consumer services rounded out the most targeted sectors.

  Sector                  Share of Attacks
  Professional Services   18.92%
  Healthcare              15.32%
  Technology Hardware      9.91%
  Consumer Services        9.01%
  Software Services        7.21%

What the Pivot Back Means for Defenders

The return to encryption-plus-exfiltration attacks is, in a sense, good news: organizations now have more warning indicators to look for. Encrypting files across a network is a noisy operation. Good endpoint detection and response (EDR) solutions, behavioral analytics, and network monitoring give defenders a fighting chance to catch attackers mid-operation. But the combined threat model is also more consequential when it succeeds.

Organizations must now contend simultaneously with system outages — creating immediate pressure to pay — and with the ongoing risk that stolen data surfaces on dark web leak sites regardless of whether a ransom is paid. That dual leverage was always ransomware’s most potent weapon, and it’s back.

Coveware’s analysis offers a pointed observation: every refused ransom payment chips away at the economics that sustain these operations. Improved prevention, tighter incident response, and the maturity to resist extortion collectively make ransomware less profitable — and less frequent.

KEY TAKEAWAYS FOR SECURITY TEAMS

  1. Extortion-only attacks are yielding diminishing returns — expect more groups to reintroduce encryption for additional leverage.
  2. Paying ransom to suppress data release offers no reliable guarantee; enterprises are right to weigh this carefully.
  3. Professional services and healthcare remain the top ransomware targets by volume in Q4 2025.
  4. Behavioral detection and EDR are more critical than ever as encryption-based attacks return to prominence.
  5. Disciplined incident response — including the decision whether to pay — directly erodes attacker economics across the ecosystem.

The takeaway isn’t that ransomware is getting easier to deal with. It’s that the cat-and-mouse dynamic is accelerating. Defenders adapted to double extortion; attackers countered with data-only theft; now they’re reverting as that tactic loses teeth. Understanding this cycle — and staying a step ahead — is the work of modern security operations.

Adapted from SecurityWeek / Coveware Q4 2025 Ransomware Trends Report — March 2026

Elastio Software
March 5, 2026

Why Cyber Risk Spikes During Disasters and How to Build Resilience by Design

Disaster recovery planning has traditionally focused on infrastructure. Systems fail, environments go offline, and IT teams restore operations as quickly as possible. But that model no longer reflects the reality organizations face today.

In a recent webinar with NetApp and Elastio, Brittney Bell (NetApp), Mike Fiorella (NetApp), and Eswar Nalamuru (Elastio) explored an increasingly common pattern. When organizations experience a disruption, whether it is a natural disaster, infrastructure outage, or operational crisis, cyber risk often increases at the exact same time. Attackers understand that recovery periods create vulnerability. Systems are under pressure, teams are focused on restoration, and normal controls may be temporarily bypassed. The result is that disaster scenarios frequently become cyber incidents as well.

This shift is forcing organizations to rethink how resilience is designed. Instead of treating disaster recovery and cybersecurity as separate functions, organizations are beginning to design recovery strategies that assume both types of events may occur simultaneously.

When crises collide

Brittney Bell described this challenge using the concept of a “polycrisis,” where multiple forms of disruption occur together rather than in isolation. Natural disasters alone can cause widespread operational impact. Infrastructure damage, power outages, and supply chain disruptions can force organizations into emergency recovery mode. But during those same moments, cyber attackers may also exploit the chaos. In fact, research shows that a large percentage of organizations affected by natural disasters also experience cyber attacks at the same time.

Examples from recent history illustrate the scale of impact that disasters can have on infrastructure and digital operations:

  - Major hurricanes that disrupted utilities and transportation infrastructure for weeks
  - Flooding events that took critical systems offline
  - Storms that impacted data centers and shut down major digital services

These events demonstrate why resilience cannot be limited to infrastructure recovery. Organizations must also assume that security threats will emerge when systems are already under stress. As Bell emphasized, resilience today is not just an IT concern. It is a business survival strategy.

Disaster recovery and cyber recovery are not the same

A key theme of the discussion was the difference between traditional disaster recovery and cyber recovery. Eswar Nalamuru explained that many organizations still approach both scenarios using the same framework. In practice, the two require very different assumptions.

In a traditional disaster recovery scenario, the failure is usually clear. Systems may be offline or infrastructure may be unavailable, but organizations generally trust their backup data and recovery points. Cyber recovery introduces uncertainty. Security teams may not know whether attackers still have access to the environment, whether backups have been compromised, or which recovery point is actually safe to restore.

This changes how recovery must be executed. Traditional disaster recovery prioritizes speed and service restoration. Cyber recovery requires precision. Teams must identify a clean recovery point and ensure that restoring data will not reintroduce the threat. That investigation step is what often slows recovery efforts during ransomware incidents. Without confidence in backup integrity, organizations may spend days or weeks determining which recovery point can be trusted.
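
The "find a clean recovery point" step reduces to a simple search: walk recovery points from newest to oldest and return the first one that passes an integrity check. In this illustrative sketch, the `is_clean` predicate is a stand-in for whatever validation scan an organization actually runs; the dictionary shape of a recovery point is likewise hypothetical.

```python
def latest_clean_recovery_point(points, is_clean):
    """Return the most recent recovery point that passes the integrity
    check, so restoration does not reintroduce the threat. Returns None
    if no clean point exists."""
    for point in sorted(points, key=lambda p: p["timestamp"], reverse=True):
        if is_clean(point):
            return point
    return None
```

The speed of this search depends entirely on how cheap `is_clean` is, which is why pre-validating backups continuously, rather than scanning them for the first time mid-incident, shortens recovery from days to minutes.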
The three pillars of modern resilience

The speakers outlined a simple framework that organizations can use to bridge the gap between disaster recovery and cyber recovery. Effective resilience strategies now require three capabilities working together.

  1. Availability. Systems and data must remain accessible even during disruption. High availability architectures and geographic redundancy ensure that applications can continue operating if a primary location fails.
  2. Isolation and immutability. Backup data must be protected from tampering or deletion. Features such as immutable storage and write-once policies help ensure attackers cannot alter or destroy recovery data.
  3. Integrity. Organizations must be able to verify that their backups are clean and recoverable. Without validation, backups may contain encrypted or corrupted data that will fail during recovery.

While many organizations already invest heavily in availability and immutability, integrity validation is often the missing layer.

The storage foundation for resilient recovery

Mike Fiorella discussed how many organizations are using Amazon FSx for NetApp ONTAP as a foundation for modern recovery strategies. FSx for NetApp ONTAP, often referred to as FSxN, is a managed storage service in AWS that incorporates NetApp’s ONTAP data management platform. Several capabilities make it well suited for resilient architectures.

High availability deployments allow data to remain accessible even if a failure occurs within a single availability zone. Snapshot technology enables fast, space efficient point-in-time recovery of data. SnapMirror replication allows organizations to maintain synchronized copies of data in secondary AWS regions, enabling rapid failover if a primary region becomes unavailable. SnapLock adds immutability by allowing organizations to enforce write-once retention policies that prevent modification or deletion of protected data.

Together, these capabilities allow organizations to create layered recovery strategies that include local snapshots, cross-region replication, and long-term protected backups.

The integrity challenge in ransomware recovery

Even with strong storage and backup protections in place, a critical question often remains unanswered during ransomware incidents. Is the data clean?

Eswar Nalamuru explained that modern ransomware campaigns increasingly target backup infrastructure. If attackers can encrypt both production systems and backups, they remove the organization’s ability to recover independently. Attack techniques have also become far more sophisticated. Many modern ransomware variants use approaches designed to evade traditional detection tools. Examples include:

  - Fileless attacks that operate entirely in memory
  - Encryption techniques that modify only portions of files
  - Obfuscation techniques that preserve file metadata
  - Polymorphic malware variants that continuously change signatures

These techniques make it difficult for traditional security tools to detect encryption activity before damage occurs. To address this challenge, Elastio focuses on validating the integrity of backup data. Its platform scans stored data to detect ransomware encryption patterns and identify clean recovery points that organizations can safely restore. The goal is simple but critical. When a crisis occurs, recovery teams should know exactly where to recover from.

Designing resilience for the real world

The central lesson from the webinar is that recovery planning must evolve. Organizations can no longer assume that disasters and cyber attacks occur independently. Real world disruptions often combine both. Building resilient architectures requires integrating infrastructure availability, immutable data protection, and backup integrity validation into a single strategy.

When these elements work together, organizations can recover faster and with greater confidence, even under the most challenging conditions.

Join us for the “Building for the Breach” workshops

To continue the conversation, Elastio, NetApp, and AWS are hosting a series of in-person workshops focused on ransomware resilience and recovery readiness. The Building for the Breach workshops explore how organizations can prepare for ransomware attacks before they occur. Each session includes:

  - An executive discussion on modern cyber resilience strategies
  - A technical walkthrough of ransomware attack and recovery scenarios
  - Hands-on demonstrations of technologies that help validate recovery points and accelerate recovery

Upcoming workshops are scheduled in cities including New York, Boston, Chicago, and Toronto. If you are responsible for disaster recovery, cybersecurity, or infrastructure resilience, these sessions provide an opportunity to see how modern recovery strategies work in practice and how organizations can strengthen their readiness for future disruptions. You can learn more about the workshops and upcoming dates through the Elastio events page.

Elastio Software
February 27, 2026

The Rise of Off-Platform Encryption

Modern ransomware attacks no longer follow a predictable script. Today’s adversaries are methodical and adaptive. They move laterally, identify valuable data, and increasingly attempt techniques designed to evade traditional detection controls. One scenario highlighted in recent threat reporting involves attackers transferring data from a storage array to an unmanaged host, encrypting it outside the production platform, and then writing the encrypted data back.

The Illusion of Evasion

On the surface, this appears clever. If encryption happens “off platform,” perhaps it avoids detection mechanisms tied to the storage system itself. Security teams may assume that because the encryption process did not execute within the storage environment, it leaves fewer indicators behind. That assumption does not hold up.

Why Location Doesn’t Matter

The critical point is that ransomware is not dangerous because of where encryption executes. It is dangerous because of what encryption does to data. When attackers copy files to an unmanaged system, encrypt them externally, and then reintroduce them into the environment, the storage platform may simply register file modifications. Blocks are written, files are updated, and nothing may appear operationally unusual at first glance.

Encryption Leaves a Mark

But the data itself has fundamentally changed. Elastio does not depend on observing the act of encryption. It does not require visibility into the unmanaged host. It does not rely on detecting specific attacker tools or processes. Instead, Elastio evaluates the integrity and structure of the data itself. When encrypted data is written back into a protected environment, it exhibits clear mathematical characteristics. There is high entropy, loss of expected file structure, destruction of known signatures, and transformation from meaningful structured content into statistically random output. Those changes are measurable and immediately identifiable.

In an enterprise cloud environment, when encrypted files are reintroduced after off-platform manipulation, Elastio detects the anomaly as soon as the altered data is analyzed. The system recognizes that the file state no longer matches expected structural norms. Compromised data is flagged right away. Clean recovery points are preserved and confidence in restoration remains intact.

Protecting Recovery Before It’s Too Late

This matters because backup compromise is now a primary objective of modern ransomware groups. Attackers understand that if they can corrupt recovery data, they dramatically increase pressure to pay. Off-platform encryption is one way they attempt to quietly poison what organizations believe are safe restore points. Elastio prevents that silent corruption from spreading undetected.

The architectural advantage is straightforward. Elastio focuses on validating the recoverability and integrity of backup data continuously. It does not chase attacker techniques, which evolve constantly. It analyzes outcomes, which cannot hide. Even if encryption occurs halfway around the world on infrastructure the organization never sees, the reintroduced data cannot disguise its cryptographic fingerprint. The mathematical properties of encryption are universal. They do not depend on vendor, platform, or geography. As soon as that altered data touches protected storage, the signal is present.

Attackers may change tools, infrastructure, and tradecraft. They may leverage unmanaged hosts, cloud workloads, or insider access. They may try to fragment, stagger, or throttle their activity to avoid behavioral alarms. None of that changes what encrypted data looks like when examined structurally.

Verification Is the Advantage

That is why outcome-based detection matters. By analyzing the data itself rather than the surrounding activity, Elastio removes the blind spots attackers attempt to exploit.
Off-platform encryption is simply another variation of the same fundamental tactic: render data unusable while attempting to evade detection. When encrypted content re-enters the environment, it is seen immediately for what it is. In cybersecurity, assumptions create risk. Verification creates resilience.