Embedded Expertise

Risk-Based Assessment Does Not Secure Products

Recently, I have seen a growing number of training courses on risk-based cybersecurity assessment appear, while far fewer initiatives focus on practical cybersecurity engineering. As a result, many product owners and decision-makers are trained to think in terms of compliance, certification, and documented risk very early in a project, at the expense of concrete engineering practices that actually reduce exposure. This series of articles aims to put things back in the right order.

In this article, I lean on EBIOS as the main example because it is widely used in France and frequently encountered in real projects. EBIOS, developed and promoted by ANSSI, is a structured, multistep risk-based assessment methodology. However, the observations made in this article apply to any risk-based assessment approach when it is used alone or too early in the product life cycle.

These articles are neither pro nor anti risk-based assessment, and certainly not pro or anti EBIOS. They are pro real-world security, and critical of the misuse of assessment methods and the unnecessary bureaucracy that often replaces effective engineering work.

This first article explains:

  • what risk-based assessment processes are designed to do,

  • why they primarily apply to deployed or architecturally fixed products,

  • how they are often misused as a substitute for security engineering,

  • and why attackers routinely exploit weaknesses that were never considered “important risks”.

The second article focuses on what experienced engineering teams do instead: how they build security into products through architecture, exposure control, containment, and maintenance, while keeping future certification and assessment requirements in mind without being driven by them.

Risk-based assessment can be useful. It can help organize remediation efforts and communicate priorities. But risk-based assessment does not secure products. Security is achieved through engineering decisions, long before risks are written down.

What Risk-Based Assessment Is (and Where It Applies)

Risk-based cybersecurity assessment processes are designed to analyze an existing situation. Their goal is to identify assets, model threats and attack scenarios, estimate potential impacts, and prioritize risks based on likelihood and severity. The output is typically a structured view of exposures and a set of recommended security objectives or mitigation actions.
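To make this concrete, the sketch below shows the core of that logic in Python. The scenario names, scales, and scores are invented for illustration and are not drawn from EBIOS or any particular method; the point is that the output is a ranked list of scenarios, not a change to the product.

```python
# Illustrative only: a tiny "risk register" ranked by likelihood x impact.
# Scenario names and values are invented for the example.
scenarios = [
    {"name": "Internet-exposed admin interface brute-forced", "likelihood": 3, "impact": 4},
    {"name": "Outdated third-party component exploited",      "likelihood": 2, "impact": 3},
    {"name": "Configuration data exfiltrated by an insider",  "likelihood": 1, "impact": 3},
]

# Score each scenario and print from highest to lowest priority.
for s in scenarios:
    s["score"] = s["likelihood"] * s["impact"]

for s in sorted(scenarios, key=lambda s: s["score"], reverse=True):
    print(f'{s["score"]:>2}  {s["name"]}')
```

Every line above describes and ranks exposure; none of it removes any.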

These approaches are widely used across industries and standards. While terminology and formalisms differ, the underlying logic is largely the same:

Given a system, its environment, and its constraints, what could go wrong, and how bad would it be?

This framing implies a crucial prerequisite: the system must already be known. Risk-based assessment assumes that:

  • the product architecture is defined,

  • interfaces and data flows exist,

  • exposure to users, networks, and external systems is real,

  • operational constraints are fixed or difficult to change.

In that sense, risk-based assessment is fundamentally post-design. It applies best to:

  • deployed products,

  • systems close to deployment,

  • legacy products undergoing review,

  • environments where architecture changes are costly or limited.

Methods such as EBIOS, commonly used in France, fit squarely in this category. They reason about what exists, not about what could still be designed differently.

This distinction matters. When a product already exists, risk-based assessment helps answer a legitimate question: given our current product, where should we focus our remediation efforts first?

But when applied too early, before architecture and exposure are fixed, the same question becomes misleading. At that stage, security issues are not risks yet; they are design choices waiting to be made. Treating them as risks prematurely shifts the discussion from engineering trade-offs to speculative scenarios and paperwork.

Risk-based assessment is therefore not a design tool. It is an analysis tool, meant to evaluate the consequences of past decisions, not to replace the act of making sound engineering decisions in the first place.

Practical Constraints of Heavy Risk-Based Methods

Methods such as EBIOS are heavy by design. They rely on multistep workshops, formal modeling, and extensive documentation, and they typically require a trained facilitator to be applied correctly. In practice, this limits their adoption to larger organizations with sufficient time, budget, and organizational bandwidth.

Ironically, these same organizations are also the most exposed to misuse. Large companies tend to place strong emphasis on audits, traceability, compliance evidence, and documentation, sometimes at the expense of timely and practical engineering action. In such environments, the completion of a formal assessment can become a success criterion in itself, even when concrete security improvements are delayed, diluted, or never deployed.

This dynamic helps explain why risk-based assessment is so often treated as a substitute for security work rather than as a support to it.

Assessment Does Not Fix Anything

Risk-based assessment processes are often perceived as a security activity, but in practice they are analytical activities. They produce descriptions, classifications, and prioritizations. They do not remove vulnerabilities, reduce exposure, or block attacks.

This distinction is obvious at a technical level, yet it is frequently blurred at the organizational level. Completing a risk assessment, producing a report, or validating a set of risk scenarios can create a strong sense of progress, even though the product itself remains unchanged.

This is where a dangerous confusion arises: visibility is mistaken for protection.

A documented risk does not close an open port; a validated scenario does not put TLS termination in place; a risk acceptance decision does not prevent exploitation. A risk is a risk is a risk.

In many organizations, especially those operating under compliance or certification pressure, the assessment itself becomes the deliverable. Once risks are identified, categorized, and formally accepted, the security effort is perceived as complete, even if no engineering work has followed. This is not a failure of risk-based methods themselves, but of how they are positioned and rewarded. When assessment is treated as an end rather than a means, it naturally displaces engineering effort.

The result is a form of paper security: a product that is well documented from a risk perspective, yet still trivially exploitable in practice.

This gap between assessment and remediation is not theoretical. I have encountered it repeatedly in real projects.

True Story

Cybersecurity: Done (aka Never Deployed)

I once worked with a customer whose product was approaching deployment and had already gone through an outsourced risk-based assessment, including a full EBIOS exercise. They contacted me afterward to review the report and help define a remediation roadmap.

The assessment itself was thorough. It identified:

  • more than 1,000 CVEs,

  • multiple unnecessary open network ports,

  • the use of insecure protocols,

  • and several other architectural and configuration issues.

None of these findings required complex redesign. The developers could address most of the concerns using small, targeted, and efficient fixes: tightening configurations, disabling unused services, updating components, and reducing exposure.
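As an illustration of how lightweight such fixes, and their verification, can be, here is a hedged Python sketch that compares a machine's listening TCP ports against an expected allowlist. It assumes the third-party psutil package and treats ports 22 and 443 as the only intended services; both are placeholders for the example, not details of the customer's product.

```python
# Illustrative exposure check: flag listening TCP ports outside an allowlist.
# Requires the third-party "psutil" package (pip install psutil); seeing the
# owning pid of every socket may require elevated privileges.
import psutil

EXPECTED_PORTS = {22, 443}  # placeholder allowlist: SSH and HTTPS only


def unexpected_listeners():
    """Return (port, pid) pairs for listening TCP sockets not on the allowlist."""
    findings = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in EXPECTED_PORTS:
            findings.append((conn.laddr.port, conn.pid))
    return findings


if __name__ == "__main__":
    for port, pid in unexpected_listeners():
        print(f"unexpected listening port {port} (pid {pid})")
```

A check like this can run in a pipeline or on the target itself; verifying an exposure reduction takes minutes, not another workshop.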

About nine months later, well after the product had been deployed, I happened to see an internal progress report. One line stood out:

Cybersecurity: done ✅

In reality, the remediation work was still sitting in development branches, unmerged. The product had been deployed for months with most of the fixes never integrated into production.

From a documentation and assessment standpoint, cybersecurity appeared “completed”. From an engineering standpoint, nothing had changed where it actually mattered.

A textbook example of paperware cybersecurity.

Attackers Exploit Opportunities, Not Risk Scores

Risk-based assessment processes implicitly assume that attackers behave rationally: that they select targets based on impact, intent, and carefully weighed effort. Risks are therefore ranked, filtered, and sometimes accepted under the assumption that lower-rated scenarios are less likely to matter.

In practice, this is not how many attacks happen.

A large portion of real-world attacks are opportunistic. Attackers scan broadly, looking for exposed services, outdated components, weak configurations, default credentials, or missing isolation. When they find something exploitable, they exploit it. The decision is driven by opportunity, not by a formal evaluation of business impact or strategic value.

This has an important consequence: an issue deemed “low risk” on paper can still be the entry point for a real compromise.

Risk-based approaches tend to downplay or discard scenarios that score low on likelihood or impact. From an assessment perspective, this is logical and even necessary to keep the analysis tractable. From an attacker’s perspective, it is irrelevant. If a weakness exists and is reachable, it will eventually be tested.

This is particularly visible in products exposed to the internet. Automated scanning, commodity exploit kits, and unsophisticated attackers do not distinguish between “critical assets” and “acceptable residual risk”. They simply exploit what responds.
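To illustrate how indiscriminate this is, here is a minimal Python sketch of the kind of reachability sweep that commodity scanners automate, and that defenders can just as easily run against their own systems. The addresses and ports are placeholders; a real scanner simply iterates over whatever address space answers.

```python
# Illustrative reachability sweep: probe a few common ports on each host and
# report whatever answers. No notion of asset value or risk score is involved.
import socket

HOSTS = ["192.0.2.10", "192.0.2.11"]  # placeholder addresses (TEST-NET-1 range)
PORTS = [22, 23, 80, 443, 8080]       # a few commonly probed ports


def responds(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host in HOSTS:
    for port in PORTS:
        if responds(host, port):
            print(f"{host}:{port} responds")
```

Whatever answers gets probed further, regardless of how the corresponding scenario was rated in a risk matrix.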

As a result, risk-based assessments can unintentionally hide threats:

  • vulnerabilities that are widespread but considered low impact,

  • services that are exposed “temporarily” or “for convenience”,

  • legacy protocols kept for compatibility,

  • components with known issues that were accepted on the basis of mitigation assumptions.

These weaknesses often remain unfixed precisely because they were not prioritized as important risks. Yet they are exactly the kind of weaknesses opportunistic attackers rely on.

This disconnect is also why arguments such as “we are not a target” consistently fail in practice. Attackers do not need a reason to target a specific product. Being reachable is enough. This pattern was previously discussed on this website, notably in The Fallacy of Not Being a Target.

Risk-based assessment remains useful to structure discussions and allocate limited resources. But it should not be confused with an attacker model. Attackers do not see risks. They see surfaces, paths, and weaknesses.

When security decisions are driven solely by risk scores, organizations end up optimizing documentation instead of reducing attack opportunities. And that is a trade-off attackers are happy to accept.

Where Risk-Based Assessment Actually Helps, and Where It Should Stop

Risk-based assessment is not useless. When applied in the right context, it can be a valuable decision-support tool. The problem is not the existence of risk-based methods, but the scope they are given.

For products that are already deployed, or whose architecture and exposure are largely fixed, risk-based assessment helps answer a practical question: given our current system, where should we focus remediation efforts first?

In that context, risk-based approaches can:

  • help prioritize remediation work when resources are limited,

  • structure a remediation roadmap over time,

  • justify security investments and trade-offs,

  • support discussions with non-technical stakeholders.

Used this way, risk-based assessment complements engineering work. It helps order fixes, not decide whether a weakness should be fixed at all.

Where risk-based assessment should stop is at design authority. It should not:

  • justify leaving exploitable weaknesses in place,

  • replace architectural decisions with risk acceptance,

  • be used as a proxy for secure design,

  • serve as a closure criterion for cybersecurity work.

A vulnerability that is exploitable remains a problem, regardless of how it was scored in a risk matrix. Attackers do not respect risk acceptance decisions.

This is why risk-based assessment must remain downstream of engineering, not upstream of it. Engineering reduces exposure and constrains attackers. Assessment helps understand what remains.

When the order is reversed, organizations optimize for compliance and documentation instead of resilience. When the order is correct, risk-based assessment becomes a useful lens rather than a false shield.

This distinction is especially important for product teams facing real-world constraints: limited time, limited budgets, and deployed systems that must be secured without being redesigned from scratch.

Risk-based assessment can help you decide what to fix first. It cannot tell you what is safe to leave broken.

Key Takeaway: Putting Risk Assessment Back in Its Place

Risk-based assessment has a legitimate role in cybersecurity. When products are already deployed, when architectures are fixed, and when resources are constrained, it can help structure remediation efforts and support decision-making. Used in that context, it is a useful tool.

Problems arise when assessment is treated as a substitute for engineering, or when it is applied too early in the product life cycle. In those cases, attention shifts from reducing exposure to managing documentation, from closing attack paths to justifying residual risk.

As we have seen, attackers do not operate on risk matrices. They exploit what is reachable, misconfigured, outdated, or insufficiently isolated. No amount of assessment changes that reality. Only engineering does.

This is why risk-based assessment must remain downstream of security engineering. It can help order fixes, communicate priorities, and support certification efforts, but it cannot secure a product on its own.

The next article focuses on what happens before risk assessment becomes relevant: how experienced engineering teams design products to minimize exposure, contain failures, and remain maintainable over time. In other words, how security is actually built.