
Code Reviews: Lessons From My Experience

After years of building web applications, I have opened many pull requests and reviewed even more. Some of those reviews led to productive discussions and stronger designs. Others dragged on without much value. What I’ve learned is that the difference is rarely luck. It comes down to how we approach the review process.

For me, code reviews have never been just about catching bugs, even though that’s always a win. They’re one of the most powerful tools we have for staying aligned as a team, sharing knowledge, and keeping our codebase in good shape. Large systems are never built by one person, so reviews are how I make sure we’re all moving in the same direction.

Why I Believe Code Reviews Matter

Over time, I realized that reviews deliver benefits well beyond fixing typos or formatting:

  • Team alignment. When I review a teammate’s PR, I keep my mental model of the system fresh. When others review my code, I get feedback that sharpens my understanding.
  • Correctness. Reviews give me confidence that my changes actually solve the intended problem.
  • Design discussions. They open space for evaluating trade-offs and making better design decisions.
  • Bug prevention. Many issues are caught here instead of surfacing in production.
  • Performance improvements. A colleague often spots inefficiencies that I miss.
  • Knowledge sharing. Every review is a chance to teach and to learn.
  • Consistency. They help us apply conventions across the codebase.
  • Team cohesion. Reviews are a conversation that keeps us connected as engineers.

What I Ask Myself Before Writing Code

I’ve found that reviews go better when I take a pause before diving into code. I ask myself:

  • Is this the right thing to work on right now? Priorities shift, and I want to make sure I’m not working ahead of alignment.
  • Does the team agree on this change? If not, a quick design discussion up front saves a lot of time.
  • Can I break this into smaller chunks? Small PRs are so much easier for others to understand.
  • How will I test this? I plan my testing approach before I even start coding.

These questions help me avoid wasted work and keep my PRs easier to reason about.

How I Write PR Descriptions

One of the most common sources of friction I’ve seen is vague or incomplete PR descriptions. Reviewers often have to work hard to get context, and that slows everything down.

To make things easier, I try to structure my descriptions around a few basics:

  • Problem: What issue I’m solving and why it matters.
  • Approach: The design I chose and any trade-offs.
  • Changes: A summary of the main updates.
  • Testing: Evidence that I validated it.
  • Risks: Any known impacts or concerns.

For example:

  • Vague: “Fix uninitialized memory bug.”
  • Clear: “Fix startup crash from uninitialized counter in Metrics [#54633].”

That second version, with a clear explanation, makes it easy for my teammates to review and merge with confidence.
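
To make this concrete, here's a hypothetical description for that same fix, following the structure above. Everything beyond the title is illustrative, not the actual PR.

```
Problem: Startup crashes intermittently because a counter in Metrics is read
before it is initialized.

Approach: Initialize the counter in the Metrics constructor instead of lazily
on first read. A null check at every read site was considered, but it scatters
the fix across callers.

Changes: Moved initialization into the constructor and removed the lazy-init path.

Testing: Added a unit test that constructs Metrics and reads the counter
immediately; ran the existing startup smoke tests locally.

Risks: Low. The counter now always starts at zero, which matches the documented
behavior.
```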

How I Give Feedback as a Reviewer

When I first started, I often left comments pointing out flaws without much explanation. That created friction instead of clarity. Now, I frame feedback as questions or observations:

  • “How does this handle negative integers?”
  • “I don’t understand why class A depends on class B here.”
  • “It looks like this breaks an interface boundary. What’s the impact on users?”

I avoid focusing on the person and stick to the code. I also make sure to leave at least one positive comment to highlight what’s working well. Explaining the “why” behind my feedback has made reviews more collaborative and less combative.

The Code Review Pyramid (From Top to Bottom)

Before diving into the pyramid itself, I like to remind myself that review effectiveness depends on depth. Too often, we stop at surface-level comments.

The iceberg metaphor is a useful way to picture it: style comments float at the surface, but the most impactful feedback lives deeper, in architecture and semantics.

Now, let’s look at the Code Review Pyramid itself:

When I first started reviewing code, I spent a lot of time commenting on style. Over the years, I’ve learned that style is the least important part of a review. The real value comes from digging deeper into architecture and semantics.

5. Code Style

Style is important for readability, but I now rely on tools to enforce it instead of wasting review time.
Example: I once spent half a review pointing out inconsistent indentation and missing semicolons. Later, we added a formatter to CI, and those discussions disappeared overnight. I also remember a PR where a teammate and I debated brace placement for almost a full day. In hindsight, it added no value.

4. Tests

Meaningful test coverage gives confidence to refactor safely.
Example: A teammate added a discount feature but forgot to test multiple coupons. That bug slipped into production and cost us hours we could have saved with one extra test.
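
To show what I mean, here's roughly the test that would have caught it. The applyCoupons helper and the coupon shape are hypothetical stand-ins for the real discount code, and I'm assuming a Vitest-style runner.

```ts
import { expect, test } from "vitest";

// Hypothetical discount helper, simplified for illustration.
type Coupon = { code: string; percentOff: number };

function applyCoupons(price: number, coupons: Coupon[]): number {
  // Apply each percentage discount in sequence, never dropping below zero.
  return coupons.reduce(
    (total, c) => Math.max(0, total * (1 - c.percentOff / 100)),
    price,
  );
}

// The missing case: more than one coupon on the same order.
test("stacks multiple coupons instead of applying only the first", () => {
  const coupons: Coupon[] = [
    { code: "SAVE10", percentOff: 10 },
    { code: "SAVE20", percentOff: 20 },
  ];
  // 100 -> 90 after 10%, then -> 72 after a further 20%.
  expect(applyCoupons(100, coupons)).toBeCloseTo(72);
});
```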

3. Documentation

Missing documentation slows everyone down.
Example: We updated a schema but forgot to update the ERD diagram. Six months later, I wasted hours reconciling conflicting information.

2. Implementation Semantics

This is where correctness and robustness are decided.
Example: I once reviewed a payment service that worked fine in simple cases but failed under concurrency, leading to double charges. Catching it during review saved us a production fire.
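
The shape of the fix was to make the charge idempotent, so a retried or concurrent request can't bill the same order twice. This is only a sketch; the store, the key scheme, and the chargeCard callback are all hypothetical.

```ts
// Minimal sketch of an idempotent charge, assuming some atomic "claim" primitive
// (e.g. a unique database constraint or a Redis SET NX) behind claimKey.
interface PaymentStore {
  // Returns true only for the first caller that claims this key.
  claimKey(key: string): Promise<boolean>;
}

async function chargeOnce(
  store: PaymentStore,
  orderId: string,
  amountCents: number,
  chargeCard: (amountCents: number) => Promise<void>,
): Promise<"charged" | "duplicate"> {
  // Claim the order's idempotency key before touching the payment provider.
  const isFirst = await store.claimKey(`charge:${orderId}`);
  if (!isFirst) {
    return "duplicate"; // another request already charged this order
  }
  await chargeCard(amountCents);
  return "charged";
}
```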

1. API Semantics (Foundation)

APIs are contracts, and breaking them later is painful.
Example: A colleague introduced /getUserData while our standard was /users/{id}. We corrected it in review, but if it had shipped, we would’ve had to rewrite clients and docs later.
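
For illustration, in an Express-style service the resource-oriented version looks something like this. The handler body is a placeholder, not the real user lookup.

```ts
import express from "express";

const app = express();

// Preferred: nouns plus path parameters (/users/{id}), consistent with the rest
// of the API surface.
app.get("/users/:id", (req, res) => {
  res.json({ id: req.params.id });
});

// Avoided: verb-style endpoints such as GET /getUserData, which would have forced
// clients and docs to special-case this one route.
```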

The key lesson for me: don’t stop at style. Drill down to semantics, architecture, and APIs—the parts that are hardest to change later.

What I Learned About PR Size

Another lesson I’ve learned is that size matters. Smaller pull requests are reviewed faster, with fewer mistakes slipping through.

From both research and experience:

  • The sweet spot is around 50 lines; PRs that size get merged about 40% faster and are less likely to be reverted.
  • Reviews in the 200–400 line range are still manageable but noticeably slower.
  • Anything beyond that tends to overwhelm reviewers, and engagement drops sharply.

In practice, I’ve found that aiming for PRs that can be reviewed in under an hour works best. It keeps the feedback sharp and avoids fatigue.

How I Use Automation

I rely heavily on tools for linting, formatting, and static analysis. They catch the small stuff so that reviews can focus on design, logic, and business value. This shift has made our reviews far more productive.
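
As one small illustration, a lint setup wired into CI might start from something like this. It assumes ESLint's flat config format and the @eslint/js recommended ruleset; the rules shown are examples, not our actual configuration.

```ts
// eslint.config.mjs — a minimal sketch of a flat ESLint config.
import js from "@eslint/js";

export default [
  js.configs.recommended,
  {
    rules: {
      // Let the tooling flag the small stuff so reviewers don't have to.
      "no-unused-vars": "warn",
      eqeqeq: "error",
    },
  },
];
```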

How I Keep Reviews Manageable

From experience, a few principles stand out:

  • Keep PRs small (ideally 50–400 lines).
  • Align on big design changes before coding starts.
  • Always review my own code before asking for feedback.
  • Keep review sessions under an hour; otherwise, quality drops.

Final Reflection

Early in my career, I thought reviews were just about catching mistakes. Now I see them as one of the best ways for a team to align, share knowledge, and improve design.

By preparing carefully, writing clear PRs, giving constructive feedback, keeping changes small, and focusing on what matters most, I’ve turned reviews into one of the most rewarding parts of my development process.