The Cybersecurity Industry Has Created Its Own Monster

HarwoodLabs
Tags: cybersecurity, ransomware, insider-threats, information-security, cybercrime

The arrest of two cybersecurity professionals for orchestrating BlackCat ransomware attacks isn't just another crime story. It's a wake-up call that the industry has fundamentally misunderstood what it means to train digital weapons experts.

Ryan Goldberg, a former incident response manager at Sygnia, and Kevin Tyler Martin, a ransomware threat negotiator at DigitalMint, didn't stumble into cybercrime. They weaponized their professional training against the very organizations they were supposed to protect. The skills that made them valuable defenders made them exceptionally dangerous attackers.

This isn't an isolated incident or a failure of character screening. It's the inevitable consequence of an industry that has spent decades creating a class of professionals with intimate knowledge of digital weapons while treating them like any other corporate employees.

The Dark Mirror of Expertise

Consider what made Goldberg and Martin particularly dangerous. As an incident response manager, Goldberg understood exactly how companies detect, respond to, and recover from breaches. He knew the blind spots, the common mistakes, the pressure points that make organizations cave to ransom demands.

Martin's skill set as a ransomware threat negotiator was even more dangerous in the wrong hands. He literally specialized in the psychology and mechanics of extortion. He knew how to maximize pressure, when to escalate threats, and how to structure demands for maximum compliance. These weren't skills he learned on the dark web. They were professional competencies his employer paid him to develop.

The cybersecurity industry has created thousands of professionals with similar knowledge sets. We've trained people to think like attackers, understand criminal methodologies, and exploit system weaknesses. Then we act surprised when some of them cross the line.

This is like training bomb disposal experts and being shocked when one of them uses that knowledge to build better bombs. The expertise is inherently dual-use.

The Failure of Traditional Security Models

Most cybersecurity companies treat their employees like typical corporate workers. Background checks, reference calls, periodic reviews. Maybe some additional scrutiny for security clearances. But these measures were designed for traditional white-collar crimes: embezzlement, fraud, conflicts of interest.

They're completely inadequate for professionals who possess what amounts to digital weapons expertise.

A traditional employee might steal money or customer data. A cybersecurity professional can bring down entire industries. The scale of potential damage is orders of magnitude different, yet the security measures remain the same.

Consider the scale of what Goldberg and Martin pulled off. Working as BlackCat affiliates, they demanded ransoms between $300,000 and $10 million per victim. They successfully extracted $1.27 million from a single medical device manufacturer. The FBI estimates BlackCat collected over $300 million from more than 1,000 victims.

Two employees leveraged their professional training to cause hundreds of millions in damage across multiple sectors. No traditional insider threat model accounts for this level of destructive potential.

The Perverse Incentives Problem

The cybersecurity industry has created a perfect storm of perverse incentives. We pay people to understand criminal methodologies, then act as if economic incentives don't apply to them.

Martin worked as a ransomware threat negotiator, likely making a decent salary. But he also knew exactly how much money flows through these operations. He saw companies pay millions to resolve attacks. He understood the profit margins. The same expertise that made him valuable as a defender showed him exactly how lucrative the other side could be.

This is fundamentally different from other professions with dual-use knowledge. A doctor might understand poisons, but there's no massive underground economy rewarding medical professionals for poisoning people. A pilot understands aviation vulnerabilities, but there's no billion-dollar industry paying pilots to crash planes.

Cybercrime, however, is a thriving economy actively recruiting people with exactly the skills we train cybersecurity professionals to have.

The Network Effect Makes It Worse

What makes insider threats in cybersecurity particularly dangerous is the network effect. These professionals don't work in isolation. They have contacts throughout the industry, relationships with vendors, knowledge of multiple organizations' security postures.

Goldberg's experience at Sygnia gave him insights not just into one company's defenses, but into the security practices of Sygnia's clients across multiple industries. Martin's role exposed him to the negotiation strategies and payment capabilities of numerous victims.

A single compromised cybersecurity professional can leverage insights from dozens of organizations. This multiplier effect doesn't exist in most other industries.

The Skills Transfer Problem

Traditional cybersecurity training emphasizes technical competence: learning attack methodologies, understanding vulnerabilities, developing exploitation techniques. We tell people to "think like an attacker" without seriously considering what happens when they do.

The Goldberg and Martin case demonstrates how seamlessly defensive knowledge transfers to offensive capabilities. Their training didn't just help them understand attacks; it taught them to execute attacks more effectively than typical criminals.

This creates a fundamental tension in cybersecurity education and employment. The better we train people to defend against attacks, the more capable they become of conducting those same attacks.

Why Current Solutions Won't Work

The industry's response to cases like this typically involves calling for better employee screening, more thorough background checks, enhanced monitoring of privileged users. These solutions miss the point entirely.

You can't screen someone's future intentions. Both Goldberg and Martin were presumably model employees before they turned. Better background checks wouldn't have predicted their eventual participation in ransomware operations.

Enhanced monitoring of privileged users might catch some insider threats, but cybersecurity professionals are exactly the people most capable of evading such monitoring. They understand the detection capabilities, the logging mechanisms, the behavioral analytics that might expose them.

The people most dangerous as insider threats are also the most capable of defeating insider threat detection.

What We Need Instead

The cybersecurity industry needs to acknowledge that it's creating a class of professionals with inherently dangerous capabilities. This requires fundamentally different approaches to employment and risk management.

First, we need economic models that account for the unique risks these professionals pose. The potential damage from a compromised cybersecurity expert isn't comparable to other employee types. Compensation, benefits, and retention strategies should reflect this reality.

Second, we need different organizational structures. Maybe cybersecurity professionals shouldn't have access to client environments across multiple industries. Maybe incident response should involve more compartmentalized teams with limited visibility into overall defensive strategies.

Third, we need industry-wide approaches to monitoring and accountability. Individual companies can't solve this problem alone because the expertise and network effects span organizational boundaries.

The Uncomfortable Truth

The uncomfortable truth is that the cybersecurity industry has spent decades creating exactly the type of people who make the most dangerous cybercriminals. We've systematically trained professionals in attack methodologies, given them access to victim environments, and exposed them to the economics of cybercrime.

Some of them will inevitably be tempted to use that knowledge for personal gain. The Goldberg and Martin case won't be the last time we see cybersecurity professionals turn their training against the organizations they're supposed to protect.

The question isn't whether this will happen again. It's whether the industry will continue treating these professionals like typical employees or finally acknowledge the unique risks they represent.

Until we admit that cybersecurity expertise is inherently dual-use and dangerous, we'll keep creating our own worst enemies.

The ransomware problem isn't just about external threats anymore. It's about the enemy within, trained and equipped by the very industry trying to stop them.
