The Vasa Didn’t Sink Because Engineering Failed
In 1628, the Swedish warship Vasa sank less than a mile into its maiden voyage. Engineering had identified the problem and tested for it. The instability had been demonstrated in plain sight, in front of those in charge. The failure came from a management system in which that knowledge no longer had any leverage.
The usual explanation for the disaster is familiar: someone knew, but could not speak. Hierarchy, pressure, fear. The cost of telling the truth was too high.
It is a simple explanation, and a convenient one: it places the failure on individuals and leaves the system unquestioned. Simple, convenient, but wrong.
A few days before departure, a stability test was performed. Thirty men ran from one side of the deck to the other. The ship rolled so violently that it came close to capsizing at the dock, and the test had to be stopped.
This was not a weak signal. This was not an opinion. This was a physical demonstration. And it was public: hard to hide, right there at the dock.
And yet, the ship sailed.
Not Silence. Something Worse.
The problem was not simply that people stayed silent.
Some did. Contemporary accounts suggest that challenging the project meant challenging decisions already validated at the highest level, including by King Gustav II Adolf himself.
But the deeper issue is this: even when the problem was demonstrated openly, in front of multiple witnesses, there was no path for that reality to alter the decision.
The test did not fail quietly. It failed publicly. And still, the ship sailed.
When Reality Stops Having Consequences
In modern engineering projects, especially in embedded systems, the same pattern appears. Not as an anecdote, but as a configuration.
You see it when:
- an architecture does not hold under load, but is kept because “we are too far in”
- a platform is known to be inadequate, but is kept because it rewards early milestones while deferring future consequences
- cybersecurity is acknowledged, documented, and postponed indefinitely
- a deep-learning model runs, but not within its constraints, and the gap is rationalized instead of addressed
If you have read 13 Clues Your Embedded Project Is in Trouble, these signals are familiar. They rarely hide. They accumulate in plain sight.
The Failure Lies in the System, Not in the People
It would be easy to stop here and conclude: people should speak up. But that is not where the failure lies.
In these projects, engineers do speak. Meetings happen, concerns are raised, tests fail, reports exist.
There is a human layer:
- hesitation to challenge earlier decisions
- reluctance to reopen architectural choices
- sensitivity to pressure
These are real. They matter. But they are not the root cause. The real failure is that the system made of these people is no longer able to correct itself.
Because at some point:
- earlier decisions become irreversible in practice
- constraints become political instead of technical
- validation happens at the wrong level
- fragmented ownership dilutes responsibility
This is not about courage or commitment. It is about whether the system still allows reality to change its trajectory.
Often, it no longer does.
The Embedded Systems Version of the Vasa
In embedded systems, this becomes very concrete. Here is a real-life example.
A platform is selected early, not for technical reasons but for familiarity: x86, Raspberry Pi, a generic SBC.
The system boots quickly. Development is easy. Early demos are convincing.
Then the system grows.
Performance becomes uneven, and worse, uncontrolled. Power consumption is difficult to attribute. Peripheral behavior becomes opaque. Timing assumptions stop holding. Technical debt accumulates quickly and is paid down slowly.
At this point, the human layer is visible again:
- engineers know the platform is no longer appropriate
- teams discuss alternatives, sometimes consulting external specialists
- issues are documented
But the platform remains. Not because no one sees the problem. Not because no one says it. But because the system cannot absorb the correction.
Changing the platform would mean reopening too many decisions at once: architecture, timelines, validation strategy, ownership.
So the system continues. This is exactly the situation where you realize you are riding a dead horse: you are no longer building a system. You are maintaining its trajectory.
Cybersecurity as an Example of Deferred Instability
I see the same pattern at work in cybersecurity management.
Again, the human layer is present:
- teams are aware of vulnerabilities
- CVEs are tracked
- risks are discussed
Selling Around the Problem
When an assessment highlights more than 1,000 vulnerabilities (a common occurrence), nothing is hidden. But nothing changes either.
Why? Because remediation would require architectural work the system is not prepared to undertake.
Observe how, at some point, the conversation shifts:
“We are not a target.”
I have addressed this at length here and here. Security becomes contextual, then optional, then deferred.
Then regulation appears. Frameworks like the Cyber Resilience Act formalize expectations.
At that point, the system is evaluated again. Not to change it, but to determine how it can still be sold, for example by retargeting it to regions where the CRA does not apply.
The vulnerabilities are still there. The environment adapts. The system does not.
In this real business case, security behaves like instability in the Vasa: it is measured, understood, documented. And then worked around as a form of deferral.
The Point of No Return
There is a moment in every failing project where the nature of the work changes.
Before that point, problems are technical. They can be analyzed, understood, and corrected. After that point, problems become systemic.
Fixing them would require revisiting decisions that are no longer considered revisitable: architecture, platform, timelines, ownership. And this is where the irrationality sets in.
Continuing is no longer the lowest-risk option. It is simply the most acceptable one. The system knows it is misaligned. The cost of correction is understood. The consequences of inaction are visible.
And yet, the warnings are ignored and the project moves forward. Not because it makes sense. But because reversing course would expose too much at once.
From there, the objective shifts. The project is no longer trying to converge toward a technically or commercially relevant system. It is trying to reach the next milestone without forcing decisions it cannot absorb or collapsing altogether.
The Vasa Is Not Just a Story
The Vasa did not sink because people stayed silent. Some did, but silence alone does not explain what happened.
The Vasa did not sink because it was poorly engineered. It could have been corrected, based on what was already known.
It sank because a known reality grounded in technical evidence had no remaining path to influence decisions.
This configuration is not rare. It is even reproducible. And once it is in place, the project no longer aims to succeed. It aims to continue.
Until it sinks.