
The Fox Guarding the Henhouse: How Cybersecurity's Business Model Creates Its Own Worst Nightmare

HarwoodLabs

When two cybersecurity experts pleaded guilty to conducting BlackCat ransomware attacks against the very companies they were trained to protect, the industry's response was predictable: shock, condemnation, and assurances that these were isolated bad actors. But this misses the deeper, more uncomfortable truth.

The cybersecurity industry has built a business model that systematically creates the conditions for insider threats to thrive. From hiring practices that prioritize attack skills over ethical screening, to compensation structures that reward intimate knowledge of criminal methodologies, to access controls that would make a bank security officer laugh, we've constructed a perfect storm of perverse incentives.

The case of Ryan Goldberg and Kevin Martin isn't an aberration. It's an inevitable outcome of how we've chosen to structure this industry.

The Incident Response Industrial Complex

The details of the Goldberg-Martin case read like a cybersecurity professional's fever dream turned nightmare. Goldberg, a former incident response manager at Sygnia, and Martin, a ransomware negotiator at DigitalMint, used their insider knowledge to conduct the very attacks they were supposedly hired to combat. Between May and November 2023, they targeted multiple U.S. companies, demanding ransoms between $300,000 and $10 million.

The irony is almost poetic: cybersecurity professionals who made their living cleaning up after ransomware attacks decided to cut out the middleman and start conducting the attacks themselves.

But here's what should terrify every CISO reading this: these weren't outsiders who infiltrated the industry. These were consummate insiders who understood exactly how incident response works, what companies pay for recovery, and most critically, how investigations unfold. They had front-row seats to the entire cybersecurity theater.

This wasn't a failure of individual character. This was a structural inevitability.

The Perverse Economics of Cybersecurity Expertise

Consider the economic incentives at play. A senior incident response consultant might charge $300-500 per hour for ransomware recovery. Working 60-hour weeks on high-stress engagements, they might earn $200,000-300,000 annually if they're lucky enough to work for a top-tier firm.

Meanwhile, these same professionals watch ransomware groups collect millions in a single attack. They see companies pay $10 million ransoms without blinking. They know intimately which security controls actually matter and which are security theater. Most importantly, they understand that most organizations have terrible detection capabilities and even worse forensic readiness.

The math is simple: why spend years building a consulting practice when you can make more in a single successful attack than in a decade of legitimate work?
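
To make that math concrete, here's a back-of-the-envelope sketch using the figures above. The billable-hours count and the consultant's share of billings are my own assumptions for illustration, not industry data:

```python
# Back-of-the-envelope comparison: legitimate consulting income versus a
# single ransom payout, using the illustrative figures from this post.

hourly_rate = 400                  # midpoint of the $300-500/hour billing range
billable_hours = 60 * 48           # 60-hour weeks, ~48 working weeks a year
consultant_share = 0.25            # assumed cut of billings the consultant keeps

annual_take_home = hourly_rate * billable_hours * consultant_share
print(f"Annual take-home: ${annual_take_home:,.0f}")   # ~$288,000

ransom = 10_000_000                # top of the Goldberg-Martin demand range
print(f"Years of legitimate work per ransom: {ransom / annual_take_home:.1f}")
# -> roughly 35 years of consulting for one ten-million-dollar payout
```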

We've created a knowledge asymmetry where the defenders know exactly how lucrative and relatively low-risk offense can be. Then we act surprised when some of them make the rational economic choice.

The Hiring Contradiction

The cybersecurity industry faces a fundamental hiring contradiction that we refuse to acknowledge. We need people who think like attackers to build effective defenses. This means hiring individuals who either have extensive knowledge of criminal methodologies or the capability to acquire it quickly.

But here's the problem: the same psychological traits that make someone an effective penetration tester or incident responder (curiosity about systems, willingness to break rules, comfort with ambiguous ethical boundaries) are exactly the traits that correlate with insider threat risk.

Traditional background checks are useless here. Most cybersecurity professionals don't have criminal records. They have computer science degrees, professional certifications, and glowing references from previous employers. The standard hiring process is optimized to detect past bad behavior, not future susceptibility to economic temptation.

Meanwhile, we've professionalized the study of cybercrime to an unprecedented degree. Threat intelligence analysts spend their days cataloging attack methodologies. Incident responders develop intimate knowledge of ransomware deployment techniques. Negotiators learn the psychology of extortion.

We're essentially running a graduate school for cybercrime and then acting shocked when some students apply what they've learned.

Access Controls That Would Make Fort Knox Blush

Perhaps most damning is how the industry handles access controls for its own professionals. Companies that would never give a junior accountant unsupervised access to financial systems routinely grant cybersecurity consultants administrative access to their most critical infrastructure.

Incident response teams regularly receive domain administrator credentials, VPN access to sensitive networks, and copies of critical business data. The justification is always the same: "They need this access to do their job effectively." But this ignores a basic security principle: access should be proportional to both need and risk.

The risk calculation for cybersecurity professionals is fundamentally different than for other roles. A corrupt accountant might embezzle hundreds of thousands of dollars. A corrupt incident response consultant can hold entire companies hostage for millions.

Yet our access control frameworks treat them identically. We've taken the role with perhaps the highest potential for catastrophic insider threats and given it the least restrictive access controls.

The Trust-Based Delusion

The cybersecurity industry operates on what I call "trust-based security" when it comes to its own practitioners. We assume that because someone has cybersecurity training and certifications, they can be trusted with unprecedented access to critical systems.

This assumption fails basic risk management principles. In every other high-stakes industry (finance, defense, pharmaceuticals), access is based on continuous verification, not initial trust. Investment firms don't just trust traders not to engage in insider trading; they monitor every trade, implement cooling-off periods, and maintain detailed audit trails.

But cybersecurity firms regularly send teams of consultants into client environments with minimal oversight, limited monitoring, and access that would make a system administrator jealous. We've built an honor system for an industry where the stakes couldn't be higher.
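
What would continuous verification look like in practice? Here's a minimal sketch of one piece of it: a tamper-evident, hash-chained audit trail of consultant actions. The class and field names are hypothetical, not any real product's API:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch: each audit entry embeds the hash of the previous one,
# so a consultant with admin access cannot quietly rewrite history.

class AuditTrail:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis hash

    def record(self, consultant_id: str, action: str, target: str) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "consultant": consultant_id,
            "action": action,
            "target": target,
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks it."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
        return True

trail = AuditTrail()
trail.record("ir-consultant-042", "export", "backup-inventory.csv")
trail.record("ir-consultant-042", "login", "dc01.corp.example")
print(trail.verify())  # True; tampering with entries[0] would return False
```

The point isn't this particular data structure. The point is that the record of what a consultant did should never be editable by the consultant.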

The Goldberg-Martin case demonstrates why this trust-based model is fundamentally broken. Both individuals had legitimate reasons to access client systems, understand ransomware methodologies, and maintain relationships with criminal enterprises (for negotiation purposes). The same access and knowledge that made them valuable employees also made them perfect insider threats.

The Counterargument: Market Forces and Professional Standards

Critics will argue that market forces naturally prevent this problem. Cybersecurity firms that experience insider threats will lose clients and go out of business. Professional certifications and industry standards create ethical frameworks that most practitioners respect. The vast majority of cybersecurity professionals are ethical actors who would never consider crossing the line into criminal activity.

These arguments have merit. The cybersecurity industry does have professional standards, and most practitioners are genuinely committed to defending rather than attacking systems. Market reputation is indeed crucial for cybersecurity firms, and a major insider threat incident can be business-ending.

But this counterargument misses the scale of the problem. The cybersecurity industry is growing faster than our ability to properly vet and monitor practitioners. The skills shortage means companies are prioritizing technical ability over character assessment. The economic incentives for criminal activity are growing faster than the industry's ability to create countervailing ethical frameworks.

Most importantly, market forces only work after problems become visible. The insider threat problem in cybersecurity is largely invisible because successful attacks are often attributed to external actors. How many "sophisticated external attacks" are actually insider jobs that we've failed to detect?

What This Means for Security Partnerships

The implications of this analysis are profound for how organizations should approach cybersecurity partnerships. The traditional model of "hire the best cybersecurity firm and trust them completely" is no longer tenable.

Organizations need to implement the same zero-trust principles for cybersecurity vendors that they're implementing for their own infrastructure. This means continuous monitoring of consultant activities, time-limited access grants, and forensic readiness specifically focused on detecting insider threats from security partners.
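
As a concrete illustration of a time-limited access grant, here's a minimal sketch. The ConsultantAccessGrant class and its scopes are hypothetical, meant only to show deny-by-default access with an expiry baked in:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a time-boxed, scoped access grant for an external
# incident response consultant. Names and fields are illustrative.

@dataclass(frozen=True)
class ConsultantAccessGrant:
    consultant_id: str
    engagement_id: str
    scopes: tuple          # explicit allow-list; no blanket domain admin
    expires_at: datetime

    def permits(self, requested_scope: str) -> bool:
        """Deny by default: the grant must be unexpired AND cover the scope."""
        return (
            datetime.now(timezone.utc) < self.expires_at
            and requested_scope in self.scopes
        )

# Access lasts for one shift of the engagement, never open-ended.
grant = ConsultantAccessGrant(
    consultant_id="ir-consultant-042",
    engagement_id="ENG-2023-117",
    scopes=("read:edr-telemetry", "read:backup-inventory"),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
)

print(grant.permits("read:edr-telemetry"))  # True while the window is open
print(grant.permits("admin:domain"))        # False: never granted to begin with
```

Even a scheme this simple changes the forensic picture: a consultant's credentials showing up active outside an engagement window becomes a signal, not background noise.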

More fundamentally, organizations need to price insider threat risk into their cybersecurity investments. The cheapest incident response firm might also be the riskiest from an insider threat perspective. Companies should be asking tough questions: How do you vet your employees beyond initial hiring? What ongoing monitoring do you conduct? How do you prevent consultants from maintaining unauthorized persistent access to client networks?

The industry also needs new professional standards specifically designed for roles with high insider threat potential. This might include mandatory cooling-off periods between working on ransomware incidents and leaving the industry, ongoing financial monitoring similar to what exists in the financial sector, and formal ethical review processes for practitioners who work closely with criminal methodologies.

The Uncomfortable Truth

The cybersecurity industry likes to position itself as the immune system of the digital economy. But immune systems can develop autoimmune disorders where they attack the very systems they're meant to protect.

The Goldberg-Martin case isn't a story about bad individuals making poor choices. It's a story about an industry that has built a business model fundamentally incompatible with its own security requirements. We've created roles that require intimate knowledge of criminal techniques, granted those roles unprecedented access to critical systems, and then relied on professional ethics and market forces to prevent abuse.

This approach has worked only because the industry is relatively young and the economic incentives for criminal activity have been relatively modest compared to other opportunities. But as ransomware payments continue to grow and cybersecurity professionals become more sophisticated in their understanding of attack methodologies, the structural problems will only get worse.

The next Goldberg-Martin case isn't a matter of if, but when. And when it happens, we'll have no one to blame but ourselves for building a system that makes such betrayals not just possible, but inevitable.

The question isn't whether we can trust cybersecurity professionals. The question is whether we can trust the system we've built to manage them.


Tags: cybersecurity, insider-threats, ransomware, incident-response, security-industry
