Embedded Expertise

Code Reviews: How We Do It and Why It Works

At Embedded Expertise, we’ve already shared how we approach hardware design reviews. This time, let’s look at the software side and show how good code review practices help improve quality and teamwork.

Over the years we’ve refined a simple, pragmatic approach to code reviews that fits perfectly with our embedded‑systems projects.

This isn’t meant to be an authoritative guide, just the good practices we’ve developed and use every day.

Most of it applies to any software activity, but we make no apology for our bias: our world is embedded systems.

What a Code Review Is (and Isn’t)

A code review is a peer review: a joint effort between peers to check code quality, catch defects, and share knowledge.

It is not a validation of the architecture or design decisions. If the design itself is questionable, that conversation should have happened much earlier. A review assumes you’re implementing an agreed‑upon design.

A review is also there to fight blind spots. Even experienced developers become blind to their own mistakes. A fresh set of eyes is invaluable.

When to Run a Code Review

In modular programming, we run a review after a module is developed and unit tested.

Don’t submit a half‑baked, messy work in progress to your colleague. Before asking for a review, you should have tested enough to convince yourself that the module is ready for integration.

A practical trigger is often:

“My branch is ready to merge, but before merging I want a review.”

And when a colleague requests a review from you: do it immediately.

They probably have a backlog and will move on to something else while waiting. If you delay, they may forget key details, and late change suggestions become harder to handle with time. Not to mention the headaches of rebasing and merging older branches.

Act now.

How to Prepare for a Review

In this section we assume you’re the developer requesting a review of your work.

First rule: Use the right tool. The tool you use for reviews can make or break the process.

If GitLab is your thing, we recommend its Merge Request feature: it combines the branch, the commit history, a built‑in description area, and a threaded discussion in one place. It becomes a complete vehicle for the review process, keeping everything visible and traceable.

If your workflow is more ticket‑oriented, a tool like Jira can complement this by providing the broader context and links to the merge request.

Once you have the right tool in place, prepare your review request so the reviewer can work efficiently:

  • Purpose of the module or branch: what problem it solves and how it fits into the bigger picture.
  • Context and links: point to the relevant piece of code, documentation, or ticket in GitLab, Jira, or wherever your team keeps its truth.
  • How the module works. Some design decisions may be intricate. Smart choices are not always obvious and can be hard to understand from the outside. Include high‑level implementation details so the reviewer can grasp the structure without losing their mind.
  • How to use the module. If there’s a trick, a special build option, or built‑in help, tell the reviewer; don’t make them guess. The objective is to move forward, not to impress with obscure regular expressions.
  • Tests you have already run. Share your unit tests, local builds, or any checks you performed to convince yourself it’s ready.

This gives the reviewer a solid starting point. They might also choose to run extra tests on their own hardware or setup, especially if their environment is different or closer to production. Suggesting a few scenarios they could try is always appreciated.

Make it easy for your peer to understand what they are looking at, why it matters, and how to reproduce your work if needed.
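
Putting the points above together, a review‑request description might look like this. This is a hypothetical sketch: the module name, ticket ID, branch name, and build option are all made up for illustration, not taken from a real project.

```
Purpose
  Adds the CAN message dispatcher module (branch: feature/can-dispatcher).
  Addresses the intermittent message loss described in ticket PROJ-123.

Context
  - Design note: docs/design/can_dispatcher.md
  - Related ticket: PROJ-123

How it works
  Incoming frames are queued in a fixed-size ring buffer; a priority
  table decides dispatch order. The queue length is a build-time option.

How to use it
  Build with -DCAN_DISPATCHER_QUEUE_LEN=32 (default is 16).

Tests already run
  - Unit tests: all PASS on the host build
  - 24 h soak test on the dev board, no dropped frames

Suggested extra checks
  - Re-run the soak test on production-revision hardware
```

Whatever the exact headings, the point is that the reviewer should never have to reverse‑engineer the purpose, context, or test status from the diff alone.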

How to Review

A reviewer’s job is not to re‑implement the code, re‑run every test, or take ownership of it, but to ensure it can safely move forward. Here’s what we expect from a reviewer:

  • Fetch the branch and build it.
    This avoids the dreaded “it works on my machine” situation and proves that everything was properly pushed to the repository.

  • Run the module on its own.
    In modular programming, the module should compile independently and come with unit tests. Run those tests and check the PASS status.
    The goal is not to re‑test functionality in depth, but to make sure nothing external is missing (like a forgotten dynamic library or header).

  • Thoroughly read the requirements and the corresponding code.
    Take the time to really understand it: immerse yourself in what the code is meant to do. Every line should be clear in intent. If it’s not, that’s a sign that a comment or explanation is missing.

  • Use your general knowledge of the project to consider corner cases, but don’t overthink improbable scenarios that would never occur in the target environment.

  • Look for bugs, inconsistencies, and style issues.
    Check that the code follows project conventions, the QA manual if you have one, and is maintainable.

  • Assess whether the tests mentioned by the developer actually cover the critical parts.
    Build on the information provided, don’t repeat everything the developer has done.

  • Track your checks.
    If you run complementary tests or notice specific points, write them down in the tracking tool, the GitLab discussion thread, or whatever medium your team uses. A review must leave a clear trace of what was verified.

Avoid complacency at all costs.
A review is not just clicking “Accept” to please the developer. As a reviewer, you are responsible for your review and share responsibility for the final outcome.

Important: the reviewer must never directly modify the code. It’s not their work. They can and should suggest changes, but the implementation stays with the developer.

A reviewer should not nitpick trivialities or block progress without clear reasons. Be constructive, not destructive.

Pro tip: Remember, a code review is a peer review: a joint effort between peers. Don’t underestimate the value of gathering people for a live review.

This can be done in person or over a video call, but make sure all participants have enough background on the component under review.

The requester should kick things off with a short presentation: what the module does, why certain decisions were made, and which areas deserve special attention.

This interactive format fosters real collaboration: questions get answered immediately, doubts are cleared, and reviewers can build on each other’s insights.

Years ago, such sessions were standard practice. They’ve faded in favor of asynchronous comments and quick approvals, but losing that habit is a shame.

A well‑run, live review often produces deeper understanding and better results than any solo click on an approval button.

Who Should Review

Pick someone who:

  • Is competent in the relevant technologies,
  • Is familiar with the subject,
  • Was not directly involved in developing the module.

But who exactly can that be? Here are good candidates we rely on at Embedded Expertise:

  • A fellow developer
    Someone working alongside you on similar modules or in the same codebase. They know the conventions, they know the constraints, and they can spot inconsistencies quickly. This is the most common and often the most efficient choice.

  • Your N+1: tech lead, architect, or equivalent
    This person is typically a reference for technical decisions and should absolutely be technically competent. However, there is a catch: a code review is not a management act; it is a technical step in the project flow. Their input should be purely technical, not hierarchical.

  • A junior fellow
    Surprising? Maybe. But asking a junior engineer to review your code is an excellent way to onboard them, spread project conventions and culture, and communicate technical details. Do this young colleague a favor: request a review from them. They will learn a lot, and their questions often reveal assumptions you didn’t realize you were making.

And don’t forget: there can be several reviewers.

Depending on the technicality of the module and the sensitivity of certain areas (protocols, cybersecurity, documentation, QA, etc.), you might benefit from multiple viewpoints. In fact, multiple reviewers are a must when one of them is a junior: pair their fresh eyes with an experienced reviewer to get the best of both worlds.

Shared Objectives: Why It Matters

Both requester and reviewer work towards a single outcome:

Reach a shared agreement that the branch or module can be safely merged or released.

Keeping that goal in mind helps everyone stay focused and pragmatic.

A review process is not just a formality; it is a vital part of any serious QA flow. It catches issues before integration, spreads knowledge across the team, and builds good communication habits.

In our experience, regular reviews consistently lead to better quality code and stronger teamwork.

Why an External Reviewer Might Be a Great Move

Sometimes, the best reviewer isn’t inside your team. Bringing in an independent expert can make a huge difference: fresh perspective, no internal politics, and no blind allegiance to past decisions.

At Embedded Expertise, we often provide this service:

  • We are embedded‑systems experts. We know the constraints, the toolchains, and the typical pitfalls.
  • We adapt quickly. We’ve seen many different software flows and can jump into your project with minimal friction, no hand‑holding required.
  • We are independent. We have no position to defend, no “office games.” Our only goal is to help you ship robust, maintainable code.

If your project could use that extra layer of confidence, consider involving us in your review flow. We’ll check all the boxes and help you raise the bar.