How the CRA May Standardize Failure
The Cyber Resilience Act is supposed to improve the security of products sold in Europe.
This contrarian article argues that the relationship between regulation and actual security is weaker than it appears.
Compliance has a cost. It requires time, expertise, documentation, audits, and tooling. These are not marginal efforts. They shape how products are designed, how teams operate, and who can afford to enter the market. These induced costs do not translate into innovation, and they fall disproportionately on smaller ventures.
That extra cost would be easy to justify if it translated directly into product value. The question is whether it improves security in proportion to what it costs.
What compliance actually optimizes
Once rules are defined, engineers optimize within them. In a regulatory context, the objective is no longer only to build a secure system, but to build a system that can be shown to comply.
That distinction matters.
Effort shifts toward producing evidence: traceability, documentation, audit readiness. These are necessary in a regulated environment, but they do not directly strengthen the system. They make it legible.
Over time, organizations converge toward similar interpretations of the rules. The same processes, templates, and mitigation patterns appear across projects.
Compliance creates alignment.
From a regulatory standpoint, alignment simplifies audits, enforcement, and the handling of non-conformance. It also shifts the burden of proof toward vendors, who must demonstrate compliance rather than regulators having to demonstrate failure.
This makes the framework efficient to enforce. Whether it is equally efficient at improving real systems is a separate question.
From a systems perspective, it creates shared assumptions, reduces diversity in design, and leads to shared failure modes that concentrate risk instead of spreading it.
The blind spot: what cannot be measured
This model works well for what can be observed and tracked. Known vulnerabilities fit naturally into it: they can be identified, referenced, patched, and reported. This creates a clear, structured way to reason about security.
But it only captures part of the problem.
Many of the weaknesses that matter most are not individual flaws waiting to be cataloged. They come from design decisions: an architecture that exposes too much, a misplaced trust boundary, an interface that was never meant to be reachable. Sometimes these are conscious trade-offs made under constraints. Sometimes they are simply poor decisions. In both cases, they are embedded in the system and do not appear as discrete, reportable issues.
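A minimal C sketch makes the point; the service, port, and names here are invented. Nothing in it matches a cataloged flaw, so no scanner would object, yet its exposure is a pure design decision.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: a debug service with no known vulnerability,
 * exposed by design. Port and names are hypothetical. */
int open_debug_port(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);

    /* A design decision, not a CVE: bound to every interface,
     * so anything on the network can reach it. */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    /* The contained alternative is one line away:
     * addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

Both variants pass the same vulnerability scan. Only one of them belongs on a production device.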
Focusing on what can be measured shifts attention toward visible, reportable artifacts, while the underlying structure of the system receives less scrutiny. The result is not that these deeper issues disappear, but that they are treated as secondary.
Security, however, is not only about resisting attacks. It is also about defining how a system behaves once resistance fails. Whether a compromise spreads or remains contained, whether critical functions degrade gracefully or collapse, depends on architectural choices that exist independently of any listed vulnerability.
These aspects are harder to formalize and do not fit easily into standardized frameworks. Yet they are often what determines the real impact of a breach.
In that sense, a product can be fully compliant and still be a ticking time bomb by construction.
From alignment to shared blind spots
Combine these two effects and a pattern emerges. On one side, effort is directed toward satisfying the rules. On the other, the rules only cover what can be formalized.
Everything outside that perimeter receives less attention.
Because organizations converge toward the same interpretations, they also converge toward the same omissions.
In complex systems, that convergence creates shared blind spots.
If a class of problems is not captured by the framework, it does not disappear. It propagates across all compliant systems. Failure becomes consistent.
A partial view of security
This is particularly visible in how security is framed. The CRA emphasizes prevention: identifying vulnerabilities, managing them, ensuring updates.
But prevention is never complete. Vulnerabilities are discovered and exploited every day; breaches happen.
When they do, the critical question is not only how the vulnerability occurred, but how far it can propagate. What is the blast radius? What are the containment boundaries?
These are architectural properties. They depend on isolation, partitioning, privilege separation, and control of data flows.
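To make this concrete, here is a minimal C sketch of one such property, privilege separation. The account name and structure are illustrative assumptions, not a prescription.

```c
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Acquire privileged resources first, then drop to an unprivileged
 * account before touching untrusted input. */
static int drop_privileges(const char *user)
{
    struct passwd *pw = getpwnam(user);
    if (pw == NULL)
        return -1;

    /* Order matters: supplementary groups first, then gid, then uid.
     * Once the uid is dropped, the other calls would fail. */
    if (setgroups(0, NULL) != 0)
        return -1;
    if (setgid(pw->pw_gid) != 0)
        return -1;
    if (setuid(pw->pw_uid) != 0)
        return -1;
    return 0;
}

int main(void)
{
    /* ... bind privileged ports, open device nodes, etc. ... */

    if (drop_privileges("nobody") != 0) {
        fprintf(stderr, "refusing to run fully privileged\n");
        return EXIT_FAILURE;
    }

    /* From here on, a compromised parser or protocol handler
     * operates with a reduced blast radius. */
    return EXIT_SUCCESS;
}
```

Nothing in this sketch would ever appear in a vulnerability report. What it changes is what any future vulnerability in the input-handling code is able to do.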
Such architectural properties are harder to standardize and harder to audit, which makes them less central in regulatory frameworks that prioritize enforceability.
The result is a bias: security is treated primarily as preventing known issues, rather than limiting the impact of unknown ones.
The role of incentives
At this point, the problem is not only technical; it is also economic. Compliance has a predictable cost. It can be planned and justified.
The benefit, in terms of actual resilience, is less direct.
This creates a shift in incentives. Compliance becomes a goal in itself, because it is what is measured and enforced.
A system that passes audits is acceptable, even if its underlying design remains fragile.
When enforcement is uneven, security becomes negotiable
Consider a common situation, inspired by a real case.
A product with safety implications is reviewed from a security perspective. Insecure protocols are identified, with secure alternatives available. Weak authentication mechanisms are found, and stronger options are proposed. Known vulnerabilities are present, with clear remediation paths.
There is no ambiguity, and no major technical barrier to fixing them. Nothing beyond routine remediation.
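To give a sense of what routine means here, consider one such fix sketched in C: replacing an early-exit credential comparison, which leaks timing information, with a constant-time check. The helper name is hypothetical, and real code would prefer a vetted library primitive.

```c
#include <stddef.h>
#include <stdint.h>

/* Unlike an early-exit memcmp-style check, this inspects every
 * byte regardless of where the first mismatch occurs, so timing
 * reveals nothing about the expected value. */
int token_equals_ct(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate mismatches, no branching */
    return diff == 0;
}
```

A few lines, no new dependencies.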
And yet, no action is taken. The product just ships into markets where enforcement is weaker.
From a business perspective, the decision is rational. Remediation has a cost. Delays have a cost. Different markets impose different constraints.
Notice the shift: security becomes a variable in that equation.
This is not a failure of knowledge. It is a reflection of incentives.
Regulation like the Cyber Resilience Act can influence those incentives within its scope. It does not remove them.
Compliance is a business
Around any regulation, an ecosystem develops. Audits, certification processes, documentation frameworks, consulting services, and tools all emerge to support compliance.
This is expected.
But it reinforces the dynamic: resources are allocated to activities that make compliance visible and verifiable. Not all of these activities translate into better systems.
Compliance generates economic activity, and therefore additional tax revenue. It does not guarantee proportional improvements in security.
A naive alternative: accountability instead of prescription
Security is first and foremost an engineering problem. It is expressed in architectures, trade-offs, and concrete system behavior. It cannot be reduced to documentation, nor fully captured by process.
This does not mean the absence of structure. Many quality assurance frameworks operate differently: they do not prescribe every solution. They define responsibilities and accountability. They require that decisions be justified, traceable, and defensible.
In practice, especially in highly regulated environments, there is a natural tendency to formalize, document, and standardize. This improves auditability and control, but it also shifts attention from the expected outcomes toward the process itself.
An accountability-driven approach takes a different path. Engineers remain at the center, not as executors of predefined rules, but as accountable decision-makers, free to design systems and make trade-offs based on context. Evaluation shifts from checking compliance to assessing whether those decisions are defensible.
Not every mistake would be penalized. Complex systems will always produce them. But negligent designs, known bad practices, and avoidable risks would be.
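What would a defensible decision look like in practice? One possibility is a short record kept next to the code it concerns. A minimal sketch, in C comment form, with an invented format and invented details:

```c
/*
 * Decision record (illustrative format; all details invented):
 *
 * Decision:   keep the legacy update channel on plaintext TCP.
 * Context:    fielded devices cannot negotiate TLS; the hardware
 *             replacement cycle is roughly five years.
 * Risk:       update images can be observed and replayed on the
 *             local network segment.
 * Mitigation: images are signed and version-checked; the bootloader
 *             rejects downgrades.
 * Owner:      platform security lead.
 */
```

A reviewer can still disagree with the trade-off, but there is now a named decision to disagree with, and an owner to answer for it.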
This approach is admittedly harder to automate. It requires expertise on the part of the controlling bodies, and it accepts variability.
The distinction between enforced process and accountability matters.
A system can comply with a process and still be poorly designed. It is much harder to justify a poor design when accountability is tied to outcomes, and that tie also aligns incentives with what actually matters.
It reflects a broader idea: rules can guide, but they cannot replace judgment, as discussed in Rules Bind The Fool and Guide The Wise.
Key takeaways
The Cyber Resilience Act attempts to improve security by structuring how it is achieved and demonstrated.
This creates a shift: from optimizing systems to optimizing compliance.
Because compliance focuses on what can be measured, it leaves part of the problem unaddressed. Because organizations align on the same rules, they also align on the same blind spots.
At the same time, the cost of compliance is real, while its impact on actual resilience is uneven.
The result is a gap between what is enforced and what matters.
Filling that gap with more rules is one possible path. It is not necessarily the most effective one.