Security Is Built, Not Assessed
In the previous article, we explored why risk-based assessment alone does not secure products, and how misusing it can lead to paper security and false closure. This article focuses on the other side of the equation: what engineers actually do when they are responsible for shipping and maintaining real products in hostile environments, without losing sight of compliance requirements.
What follows is not a checklist, a framework, or a new methodology. It is a collection of well-known engineering practices that quietly shape secure systems every day, often without being labeled as “security work” at all.
These practices have one thing in common: they reduce opportunities for attackers by design, rather than attempting to justify them after the fact.
Security Is an Engineering Discipline Before It Is a Compliance Activity
Experienced engineering teams understand this instinctively. They don’t wait for a risk matrix to tell them whether an exposed service is a problem, or whether an insecure default should be fixed. They design products so that entire classes of problems never appear in the first place.
This does not mean ignoring certification, audits, or risk-based assessments. On the contrary, experienced teams know that such processes are inevitable. But they also know that security outcomes are determined long before any assessment takes place, through architectural choices, exposure control, and operational discipline.
Engineering Hygiene Comes Before Risk Optimization
Experienced engineers treat many security measures as non-negotiable hygiene, not as outcomes of risk calculations. These are practices that are applied systematically because failing to apply them leads to predictable and avoidable problems.
You don’t run a risk analysis to decide whether to brush your teeth. You just do it. Likewise, closing an unnecessary open port, disabling an unused service, or removing an insecure protocol is not a question of probability versus impact. It is elementary security engineering hygiene.
This mindset is fundamentally different from risk-based reasoning. Hygiene is about eliminating obvious opportunities before they can be debated, scored, or accepted. It assumes that if a weakness is reachable, it will eventually be tested and exploited, regardless of how unlikely or low-impact it appears on paper.
That is why experienced teams systematically:
reduce exposed interfaces,
disable what is not strictly required,
remove insecure defaults,
and prefer simple, auditable configurations over clever but fragile ones.
None of this requires a risk workshop. It requires discipline, experience, and a clear understanding of how real-world systems fail under pressure.
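To make this concrete, here is a minimal sketch of what hygiene as a routine can look like in practice: a check that fails a CI run or a boot-time health test whenever the device listens on a TCP port that nobody explicitly decided to expose. The allowlist values and the reliance on Linux's /proc/net/tcp tables are illustrative assumptions, not a prescription.

```python
#!/usr/bin/env python3
"""Minimal hygiene check: flag listening TCP ports outside an allowlist.

Illustrative sketch: the allowed ports are placeholder values, and the
check reads the Linux /proc/net/tcp tables, so it is Linux-specific.
"""

# Ports this hypothetical product is actually supposed to expose.
ALLOWED_PORTS = {22, 443}

def listening_ports(proc_file: str) -> set[int]:
    """Parse a /proc/net/tcp-style table and return ports in LISTEN state."""
    ports = set()
    with open(proc_file) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_address, state = fields[1], fields[3]
            if state == "0A":  # 0A is TCP_LISTEN
                ports.add(int(local_address.split(":")[1], 16))
    return ports

if __name__ == "__main__":
    found: set[int] = set()
    for table in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            found |= listening_ports(table)
        except FileNotFoundError:
            pass  # e.g. IPv6 disabled on this device
    unexpected = found - ALLOWED_PORTS
    for port in sorted(unexpected):
        print(f"unexpected listening port: {port}")
    raise SystemExit(1 if unexpected else 0)
```

The script itself is trivial, and that is the point: exposure is checked mechanically and continuously, not debated in a workshop.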
Risk-based assessment still has a role, but it comes after hygiene. Once the obvious doors are closed, assessment helps reason about what remains. Used in the opposite order, it tends to justify leaving those doors open.
Secure Products Must Be Maintainable
Experienced engineering teams assume that vulnerabilities will be discovered after deployment. This is not a pessimistic view; it is a realistic one. New vulnerabilities emerge continuously, dependencies evolve, and attack techniques improve. A product that cannot be updated is a product that will eventually be exposed.
For this reason, maintainability is treated as a core security requirement, not an operational afterthought. In practice, this means having a reliable and secure over-the-air (OTA) update mechanism from day one.
OTA is not about convenience. It is about control over the deployed fleet:
the ability to patch vulnerabilities,
to revoke insecure configurations,
to respond to newly discovered issues without physical access.
Without OTA, every vulnerability becomes a permanent liability. With OTA, vulnerabilities become manageable engineering tasks.
Crucially, the update mechanism itself must be secure. An OTA system that can be abused is worse than no OTA at all. Experienced teams ensure that:
update packages are cryptographically authenticated,
integrity and authenticity are verified before installation,
rollback and failure modes are controlled,
update paths are explicit and auditable.
This is not an optional hardening step. It is part of the security boundary. A compromised update channel is a direct compromise of the product.
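To illustrate just the authenticate-before-install step, here is a minimal sketch using an Ed25519 detached signature verified against a public key pinned into the firmware. The file layout and function names are assumptions made for the example, and the primitives come from the `cryptography` package; real OTA stacks wrap this step in transport, versioning, and rollback handling.

```python
"""Sketch of the verify-before-install step of a hypothetical OTA pipeline.

Assumes a payload file plus a detached Ed25519 signature, and a vendor
public key pinned into the firmware image. Requires the `cryptography`
package.
"""
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update(payload: bytes, signature: bytes, pinned_key: bytes) -> bool:
    """Return True only if the payload was signed by the pinned vendor key."""
    public_key = Ed25519PublicKey.from_public_bytes(pinned_key)
    try:
        public_key.verify(signature, payload)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False

def install_update(payload_path: str, signature_path: str,
                   pinned_key: bytes) -> None:
    with open(payload_path, "rb") as f:
        payload = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    if not verify_update(payload, signature, pinned_key):
        # Refuse loudly: an unauthenticated package must never reach the flasher.
        raise RuntimeError("update rejected: signature verification failed")
    # ... hand the verified payload to the actual installer here ...
```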
This is also where engineering-driven security diverges sharply from assessment-driven security. A risk register may document that a vulnerability exists and has been accepted. A secure update mechanism allows that decision to be reversed when reality changes, which it inevitably does.
Experienced teams design update mechanisms early, test them continuously, and treat them as safety-critical components. They know that a system that cannot be updated securely cannot be secured over time.
This topic is explored in more detail in related articles published on emb-exp.com/stories, including OTA Update Tools: Find the Perfect Fit for Your Application.
Assume Compromise: Containment, Immutability, and Blast Radius
Experienced engineering teams do not design systems under the assumption of perfect prevention. They assume that, despite hygiene, updates, and reviews, some vulnerabilities will be exploited. The question then becomes not if, but how much damage is possible.
This is where the notions of blast radius and containment become primary design goals.
The blast radius of a failure or compromise is the scope of impact it can have on the system: which components are affected, what privileges are gained, what data is exposed, and how far the effects can propagate. Reducing blast radius is a core security objective for experienced teams.
Rather than relying on a single perimeter or control point, experienced teams structure systems so that:
components have clearly limited responsibilities,
privileges are minimized by default,
failures are contained locally rather than propagating system-wide.
A compromise should not automatically imply full system control.
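As a small illustration of minimized privileges, the sketch below acquires the one resource that genuinely requires root, then permanently drops to an unprivileged service account before doing any real work. The account name is hypothetical, and the pattern is POSIX-specific.

```python
"""Sketch of privilege minimization: acquire what needs root, then drop root.

Illustrative: the service account name is an assumption, and error
handling is kept minimal. Linux/POSIX only.
"""
import grp
import os
import pwd
import socket

def bind_privileged_port(port: int) -> socket.socket:
    """Bind a port below 1024, which requires root (or CAP_NET_BIND_SERVICE)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", port))
    s.listen()
    return s

def drop_privileges(user: str, group: str) -> None:
    """Permanently switch to an unprivileged identity (group first, then user)."""
    os.setgroups([])                       # drop supplementary groups
    os.setgid(grp.getgrnam(group).gr_gid)  # must happen before setuid
    os.setuid(pwd.getpwnam(user).pw_uid)

if __name__ == "__main__":
    listener = bind_privileged_port(443)
    drop_privileges("svc-web", "svc-web")  # hypothetical service account
    # From here on, a compromise of the request handler runs as svc-web:
    # the blast radius is one service account, not the whole device.
```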
Immutability plays a key role in this approach. By making parts of the system read-only at runtime, teams reduce the attacker’s ability to persist, modify behavior, or hide their presence. An immutable root filesystem, for example, turns many classes of attacks into short-lived events rather than permanent compromises.
Immutability is not about preventing all change. It is about controlling where and how change is allowed. Updates happen through well-defined, authenticated mechanisms. Runtime modification is restricted as much as possible. This sharply reduces the attack surface and simplifies reasoning about system state.
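A useful side effect is that immutability is cheap to verify at runtime. Here is a minimal sketch of such an invariant check; which mount points must be read-only is product-specific, so the set below is only an assumption.

```python
"""Sketch of a runtime invariant check: critical mounts must be read-only.

Illustrative: the set of immutable mount points is an assumption for this
hypothetical product. Linux only (reads /proc/mounts).
"""

MUST_BE_READ_ONLY = {"/", "/usr"}

def read_only_mounts() -> set[str]:
    """Return the mount points whose options include 'ro'."""
    mounts = set()
    with open("/proc/mounts") as f:
        for line in f:
            _device, mount_point, _fstype, options = line.split()[:4]
            if "ro" in options.split(","):
                mounts.add(mount_point)
    return mounts

if __name__ == "__main__":
    writable = MUST_BE_READ_ONLY - read_only_mounts()
    for mount_point in sorted(writable):
        print(f"invariant violated: {mount_point} is writable")
    raise SystemExit(1 if writable else 0)
```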
This mindset reflects a fundamental shift: security is no longer about keeping attackers out at all costs, but about ensuring that no single failure can cause disproportionate damage.
Containment and blast radius reduction also make systems easier to operate and recover. When state is minimized and changes are controlled, restoring a known-good configuration becomes straightforward. This turns incidents into operational events rather than existential crises.
These principles are discussed in more detail in related articles published on emb-exp.com/stories, including Immutability: The Cornerstone of Embedded Defense and Beyond the Firewall: From Perimetric to In-Depth Security.
Containment limits the impact of a compromise within a component or boundary. Defense in depth extends this idea across the entire system by ensuring that no single failure or bypass is sufficient on its own.
Defense in Depth Is an Architectural Choice
Experienced engineering teams do not rely on a single “strong” security feature. They assume that any individual control can fail, be misconfigured, or be bypassed. Security therefore emerges from the combination of multiple, independent layers, each designed to slow down, constrain, or expose an attacker.
Crucially, defense in depth is not about stacking features. It is about architectural separation and independence:
network exposure is limited even if application-level authentication fails,
filesystem protections remain effective even if a service is compromised,
update mechanisms are protected independently of runtime components,
monitoring and recovery mechanisms assume upstream failures.
Each layer is designed with the expectation that the layer above or below may be compromised.
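The structure matters more than any individual check. The sketch below is a deliberately simplified illustration of that structure: three stand-in layers on one request path, each enforcing its own invariant without trusting the layer before it. The checks themselves are placeholders, not real controls.

```python
"""Deliberately simplified sketch of layered, independent checks.

Every function here is a stand-in for a real control (segmentation,
authentication, filesystem confinement); the point is the structure.
"""
from dataclasses import dataclass

@dataclass
class Request:
    source_ip: str
    token: str
    path: str

def network_layer_ok(req: Request) -> bool:
    # Stand-in for segmentation: only the local subnet is reachable at all.
    return req.source_ip.startswith("10.0.")

def auth_layer_ok(req: Request) -> bool:
    # Stand-in for real authentication; a constant token obviously is not one.
    return req.token == "valid-token"

def filesystem_layer_ok(req: Request) -> bool:
    # Even an authenticated caller stays confined to its own subtree.
    return req.path.startswith("/data/public/")

def handle(req: Request) -> str:
    # Each layer is evaluated independently; bypassing one (say, a spoofed
    # token) still leaves the others standing.
    for layer in (network_layer_ok, auth_layer_ok, filesystem_layer_ok):
        if not layer(req):
            return f"denied by {layer.__name__}"
    return "granted"
```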
This is why purely perimetric thinking fails. Firewalls, gateways, and network segmentation are useful, but they are only one layer. Once the perimeter is crossed, systems that rely on it exclusively tend to collapse quickly.
Defense in depth also improves resilience against operational failures, not just attacks. Misconfigurations, partial updates, and unexpected interactions are contained by the same layering principles.
From an engineering perspective, defense in depth is less about “adding security” and more about avoiding catastrophic coupling. When components are loosely coupled and protected by independent barriers, failures degrade gracefully instead of cascading.
These principles are explored in more detail in related articles published on emb-exp.com/stories, including Beyond the Firewall: From Perimetric to In-Depth Security.
Reviews Are a Security Tool, Not a Process Checkbox
Experienced teams do not use reviews to prove that security was considered. They use reviews to prevent bad decisions from becoming permanent. In that sense, reviews are not a quality ritual or a compliance step. They are a security mechanism applied early, when fixes are still cheap and architectural choices are still reversible.
Design reviews are particularly important. Many security failures are not caused by bugs, but by unchecked design decisions: unnecessary exposure, implicit trust boundaries, overly permissive privileges, or optimistic assumptions about how the product will be used and deployed. Reviews force teams to articulate these choices and justify them before they are locked in.
Code reviews serve the same purpose at a different scale. They routinely catch:
insecure defaults,
accidental exposure,
fragile error handling,
misuse of cryptographic or networking APIs,
and security assumptions that are not actually enforced in code.
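For instance, here is a condensed, hypothetical version of the kind of change a review flags. The "before" versions contain two of the issues listed above, an accidental exposure and a disabled cryptographic check, both invisible to a post-hoc risk register and obvious in review.

```python
"""Before/after versions of the same bootstrap code, as caught in review.

Hypothetical example: names, ports, and addresses are illustrative.
"""
import socket
import ssl

def open_listener_before_review() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", 8080))    # exposed on every interface "for debugging"
    s.listen()
    return s

def open_listener_after_review() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 8080))  # reachable only where it is actually needed
    s.listen()
    return s

def tls_context_before_review() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # "temporary" workaround that would ship
    return ctx

def tls_context_after_review() -> ssl.SSLContext:
    return ssl.create_default_context()  # certificate verification stays on
```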
This is not theoretical. We have repeatedly observed that teams who take reviews seriously eliminate entire classes of issues long before any risk assessment would even notice them. This is why we have written extensively about reviews as an engineering practice, not a process artifact, in articles such as Reviews: The Key to Right-First-Time Designs and Code Reviews: How We Do It and Why It Works.
What experienced teams explicitly avoid is turning reviews into bureaucracy. A checklist filled to satisfy an audit does not improve security. A late-stage review performed after architecture and interfaces are frozen rarely changes outcomes. Reviews are effective precisely because they are lightweight, frequent, and close to the work.
This is another recurring pattern: organizations that over-invest in documentation and post-hoc assessment often under-invest in early review. The result is predictable. Risks are documented, accepted, and traced, while the underlying engineering issues remain untouched.
Reviews influence what gets built. Risk assessments describe what already exists. Confusing the two leads to well-documented failures.
Used consistently, reviews reinforce every principle discussed in this article: hygiene, maintainability, containment, and defense in depth. They are one of the simplest and most effective ways to turn security from a compliance concern into an engineering habit.
Conclusion – Putting Things Back in the Right Order
Across these two articles, we explored two sides of the same problem.
In the first article, we showed why risk-based assessment alone does not secure products. When misused or applied too early, it can create paper security, false closure, and a comforting sense of control that attackers do not share.
In this article, we focused on what experienced engineering teams do instead. They start with hygiene, design for maintainability, assume compromise, contain failures, layer defenses, and review decisions early and often. They do all this while keeping compliance in mind, knowing that certification and assessment will eventually come, but refusing to let those processes dictate engineering choices.
This leads to a simple but critical rule:
Don’t make EBIOS, or any risk-based assessment, a bureaucratic response to an engineering problem.
Risk-based assessment has its place. It helps structure remediation, communicate priorities, and satisfy legitimate governance needs. But it must remain downstream of engineering, not a substitute for it.
Security is not achieved by documenting or accepting risks. It is achieved by reducing exposure, limiting damage, and maintaining control over systems over time. When engineering comes first, risk assessment becomes easier, faster, and more meaningful. When the order is reversed, organizations optimize for audits instead of resilience.
Security is built, not assessed.