The Hidden ROI of Security: How to Put a Number on the Risks You Stop.


Sara Duarte

Head of Customer Success

March 12, 2026

Picture this:

Your team finds vulnerabilities, patches them, and moves on. Day after day, you prevent disasters. But when you report to the board, their response is always the same:

“Great job keeping us safe. Now show us the numbers: how much did we spend, and how can we spend less?”

They hear terms like “critical” and “CVE,” but what they see is a line item: cybersecurity as a cost, not an investment. After all, the only number they’re certain of is the one on your budget sheet, not the money you saved them.

The problem? You’re speaking in risks. They’re thinking in returns.

Let me show you how to speak the board’s language.

The Risk Cost Calculation Paradox

In cybersecurity, we’re constantly asked to answer an impossible question: “How much will we lose if this vulnerability is exploited?”

The event hasn’t happened. It might never happen. And even if it does, the variables are endless: Who’s attacking? When? How severe will the impact be? Is it a minor data leak or a full-blown ransomware shutdown? The answers are buried in uncertainty, yet we’re expected to assign a precise number to the unknown.

This is the quantification paradox: we’re tasked with measuring the cost of something that exists only as a possibility, a shadow on the horizon. Unlike traditional business risks (where historical data or market trends offer guidance), cyber threats evolve daily. A critical vulnerability today could be obsolete tomorrow, or it could be the one flaw an attacker exploits next week. The tools we use to calculate risk (probabilities, threat intelligence, even past incidents) are educated guesses. But without a number, security teams struggle to justify budgets, prioritize fixes, or prove their value. We’re asked to predict the unpredictable, then defend the prediction as fact.

And here’s the kicker: The board doesn’t care about possibilities. They care about impact. So we’re left bridging the gap between “This could be bad” and “Here’s exactly how bad (and how much it’ll cost).”

Busting Paradoxes

To tackle this paradox, we turn to risk-based financial models: structured frameworks designed to assign value to the unquantifiable.

These models, endorsed by standards like NIST, ISO 27005, and FAIR, don’t eliminate uncertainty. Instead, they provide a repeatable way to estimate potential losses by combining threat likelihood, vulnerability severity, and business impact.

Think of it as building a financial case for a storm that hasn’t hit yet: you analyze the clouds (threat intelligence), check the forecast (historical data), and calculate how much it might cost to board up the windows (mitigation efforts). The goal isn’t perfection, it’s defensible reasoning that turns “What if?” into “Here’s what we’re preparing for.”

The key lies in transparency. By documenting assumptions (e.g., “We estimate a 15% chance of exploitation based on X threat trends”) and using industry benchmarks, we create a language both security teams and executives can understand. It’s less about predicting the future and more about making informed bets on where to invest today to avoid larger costs tomorrow. And when done right, this approach does more than justify spend: it transforms security from a cost center into a strategic advantage.

Step by Step

First, we need to translate every vulnerability we find into:

  • Asset Value [AV]: what the asset is worth to the business.
  • Exposure Factor [EF]: the proportion of the asset’s value at risk.
  • Single Loss Expectancy [SLE]: the cost of a single successful exploit.
  • Annualized Rate of Occurrence [ARO]: the expected probability of exploitation per year.
  • Annualized Loss Expectancy [ALE]: the yearly risk cost if the vulnerability is not mitigated.

This transforms the uncertainty into predictable, defensible financial numbers.
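Put together, the chain above reduces to two standard quantitative-risk formulas: SLE = AV × EF and ALE = SLE × ARO. Here is a minimal sketch in Python; the function name is ours and purely illustrative, not an Ethiack API:

```python
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               aro: float) -> float:
    """Yearly risk cost of leaving a vulnerability unmitigated.

    SLE = AV x EF   (cost of one successful exploit)
    ALE = SLE x ARO (expected cost per year)
    """
    sle = asset_value * exposure_factor  # Single Loss Expectancy
    return sle * aro                     # Annualized Loss Expectancy

# e.g. a 100K-euro asset, 85% exposed, one expected attempt every ~5 years:
print(annualized_loss_expectancy(100_000, 0.85, 0.20))  # → 17000.0
```

The sections below walk through how to choose each of the three inputs.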

Asset Value [AV]

We all know not every asset carries the same weight. A public webpage? Important, but probably not existential. Customer databases or proprietary code? That’s a different story.

If you haven’t already, map out your assets by criticality. Start simple: Low, Medium, High. If you can go deeper and assign actual financial or operational value to each asset, that’s the gold standard. But let’s be real: most teams aren’t there yet, and that’s okay.

In the Ethiack Portal, we make this step easier. When you add an asset, you’ll tag it with an importance level (Low/Medium/High). The labels are yours to define (what’s “High” for a fintech startup might look different for a manufacturing plant), but here’s how we guide customers to think about it:

Low-value

  • informational websites
  • static pages
  • dead endpoints
  • low-risk dev environments
  • public-facing shells with no data
  • endpoints without business logic

These assets can have incidents, but business impact is limited.

Medium-value

  • APIs without sensitive data
  • dev/stage platforms
  • non-critical internal systems
  • subdomains with authenticated UI but low-value data
  • public applications with no payments
  • infrastructure components without customer info

Incidents here have material, but not catastrophic, business impact.

High-value

  • identity & SSO systems
  • HR systems
  • financial applications
  • e-commerce + payments
  • critical production APIs
  • systems containing personal data or customer accounts

Compromise of these systems causes reputational damage, legal exposure, financial losses, stakeholder impact, or operational disruption.

Since these values depend entirely on your business, we use default benchmarks to keep things practical: Low = €25K, Medium = €50K, High = €100K. Of course, a “High” asset for one company might justify €1M, while another caps it at €100K. It’s all about context. But for most teams, starting with these baselines strikes the right balance between simplicity and realism.
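As a sketch, the default benchmarks above can live in a simple lookup (the euro amounts are the article’s defaults; replace them with your own valuations where you have them):

```python
# Default Asset Value (AV) benchmarks in euros; adjust to your business context.
ASSET_VALUE_BENCHMARKS = {
    "low": 25_000,     # informational sites, static pages, dead endpoints
    "medium": 50_000,  # APIs without sensitive data, dev/stage platforms
    "high": 100_000,   # identity/SSO, payments, systems holding personal data
}

def asset_value(importance: str) -> int:
    """Return the default AV for an importance tier (Low/Medium/High)."""
    return ASSET_VALUE_BENCHMARKS[importance.lower()]

print(asset_value("Medium"))  # → 50000
```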

Exposure Factor [EF]

In quantitative risk analysis, the Exposure Factor represents the percentage of an asset's value that would be lost or compromised during a successful security incident, expressed as a decimal between 0.0 (0% impact) and 1.0 (100% total compromise).

When assigning an EF across your attack surface, we use a defensible, threat-actor-centric methodology based on four primary criteria:

1. Data Sensitivity (Confidentiality)

What type of data passes through or resides on the asset?

  • Highest EF: Financial transactions (PCI), core authentication tokens, highly sensitive PII (KYC documents, passports), and corporate HR/payroll data.
  • Lowest EF: Public marketing copy, open-source mapping data, or blank template files.

2. Operational Criticality (Availability & Integrity)

How vital is this asset to the company's ability to generate revenue or function?

  • Highest EF: Core payment gateways, Order Management Systems (OMS), ERPs (Navision), and root e-commerce domains. If these go down, revenue drops to zero instantly.
  • Lowest EF: Anniversary microsites, internal project blogs, or isolated test nodes.

3. Environment Type

Threat actors (and risk frameworks) heavily weigh the environment lifecycle stage.

  • Production (0.75 - 0.95): Live environments processing real customer data and actual money.
  • Staging/QA (0.45 - 0.60): Pre-production environments. They rarely contain live user data but are extremely valuable for reverse-engineering production logic or finding unpatched vulnerabilities.
  • Development/Sandbox (0.35 - 0.45): Highly volatile, usually isolated, containing mock data.

4. Network Placement & Exposure

Where does the asset sit topologically?

  • External/Internet-Facing: Higher EF due to universal accessibility.
  • Internal: Capped at a baseline EF of 0.20 from an external perspective. Unless a perimeter gateway (like a VPN) is breached first, the direct external risk is limited strictly to information disclosure (leaking the network topology).
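One way to operationalize these criteria: start from the environment band, then cap internal-only assets at 0.20 as seen from the outside. The band midpoints and the helper below are our own simplification for illustration, not an exact Ethiack formula:

```python
# EF bands by environment lifecycle stage (from the methodology above).
ENVIRONMENT_EF_BANDS = {
    "production": (0.75, 0.95),
    "staging": (0.45, 0.60),
    "development": (0.35, 0.45),
}

INTERNAL_EF_CAP = 0.20  # external risk limited to information disclosure

def exposure_factor(environment: str, internet_facing: bool) -> float:
    """Estimate EF from the environment band, capped for internal-only assets."""
    low, high = ENVIRONMENT_EF_BANDS[environment]
    ef = (low + high) / 2  # midpoint; refine with data sensitivity/criticality
    if not internet_facing:
        ef = min(ef, INTERNAL_EF_CAP)
    return round(ef, 2)

print(exposure_factor("production", internet_facing=True))   # → 0.85
print(exposure_factor("production", internet_facing=False))  # → 0.2
```

In practice you would nudge the midpoint up or down using the data-sensitivity and operational-criticality criteria before applying the internal cap.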

So here we arrive at our first calculation:

SLE = AV × EF

Single Loss Expectancy [SLE]: the estimated financial loss resulting from a single successful exploitation of a vulnerability.

Annualized Rate of Occurrence [ARO]

Knowing the cost of one incident isn’t enough because risk isn’t static. Security is a moving target, and what matters is the year-long exposure: What’s the real cost of leaving this vulnerability unpatched for 12 months?

That’s where Annualized Rate of Occurrence (ARO) comes in. It estimates the likelihood a vulnerability will be exploited in a given year, based on:

  • How often attackers target this flaw (is it a common attack vector?)
  • How hard it is to exploit (does it require advanced skills, or can script kiddies do it?)
  • Attacker motivation (are bots already scanning for it? Is it a favorite in ransomware toolkits?)
  • Real-world evidence (has it been exploited before? Is it on CISA’s KEV list?)

This turns guesswork into a data-backed probability so you’re not just asking “Could this happen?” but “How likely is it to hit us this year?”

ARO is technically a probability between 0 and 1, but let’s be honest: Claiming anything above 30% in cybersecurity is wishful thinking. The field moves too fast, attackers adapt too quickly, and false precision leads to bad decisions.

So instead, we use likelihood bands.

ARO     Meaning              Equivalent Frequency
0.10    Rare                 Once every ~10 years
0.12    Low                  Once every ~8.3 years
0.15    Occasional           Once every ~6.7 years
0.20    Possible             Once every ~5 years
0.23    Moderately Likely    Once every ~4.3 years
0.25    Probable             Once every ~4 years
0.30    Frequent             Once every ~3.3 years
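In code, the bands become a simple lookup, and the “equivalent frequency” column is just the reciprocal of the ARO (an illustrative sketch, not an Ethiack API):

```python
# Likelihood bands from the table above: label -> ARO.
ARO_BANDS = {
    "rare": 0.10,
    "low": 0.12,
    "occasional": 0.15,
    "possible": 0.20,
    "moderately likely": 0.23,
    "probable": 0.25,
    "frequent": 0.30,
}

def years_between_incidents(aro: float) -> float:
    """Equivalent frequency: e.g. an ARO of 0.15 is roughly one incident every 6.7 years."""
    return round(1 / aro, 1)

print(years_between_incidents(ARO_BANDS["occasional"]))  # → 6.7
```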

Annualized Loss Expectancy [ALE]

We are ready to calculate our final value:

ALE = SLE × ARO

This is the number you can confidently stand behind: the realistic annual damage the vulnerability could cause if left unmitigated.

Let’s make this concrete. Suppose you are our customer and Ethiack flags:

  • Vulnerability: User Enumeration (CWE-204)
  • Asset Value: €50K (Medium importance)
  • Exposure Factor: 65% (If exploited, €32.5K of that asset is at risk).
  • ARO: 0.15 (Expect an attempt every ~6.7 years).

Now, plug in the numbers:

SLE = AV × EF = €50,000 × 0.65 = €32,500
ALE = SLE × ARO = €32,500 × 0.15 = €4,875

Now it stops being hypothetical: €4,875 is the annual cost of ignoring this risk.

Why does it matter?

For executives, ALE turns security from a black box into a business decision. Instead of hearing “We found a critical vulnerability,” they see:

“This risk costs us €4,875/year to ignore.”

“Fixing it now saves €24K over five years.”

“Here’s how it compares to other risks we’re tracking.”

So now you can shift the cybersecurity conversation from avoiding disasters to optimizing spend, prioritizing investments, and protecting revenue. That’s a language every board understands.

The Cost of Inaction Is Rising in 2026

So how much did we protect in 2025? Over €93 million for our customers!

And in 2026? That number is accelerating, fast. The cost of inaction is compounding.

Don’t wait for the attack.

Secure Your Future with Ethiack

Try Ethiack

If you're still unsure, see for yourself with a 30-day free trial. No obligation. Just testing.

