In a previous life, at an R&D engineering company, there was a comprehensive culture of code review. I saw three different approaches across three teams while I was there.

The first team was a mix of embedded systems and embedded applications. This team had a very strict, very rigorous code review process. Every task had an associated review. All reviews were pre-commit (completed before merging work into trunk). Reviews typically had two reviewers, assigned by the tech lead based on a combination of appropriateness and load. All reviews were conducted on a dedicated newsgroup (remember those?) and all comments were publicly visible and recorded. I don’t remember anyone ever looking back through old reviews, though I would skim reviews in related areas even if I wasn’t involved. I was working on an embedded application, so my experience was a little different. The systems people had roughly the following approach:

  1. Write the headers (this was C and C++ code) — submit for review.
  2. Implement the changes — submit for review.
  3. Write the tests — submit for review.

While performing the first and second rounds of review, reviewers were to regard the code as untested and look for things that tests would typically pick up.

Us apps cowboys folded the final two reviews together. Offered this unfathomable freedom, some of us even wrote the tests before the implementation.

There was no explicit tooling support apart from the newsgroup. Each review would start a separate thread. The author would list the files that had been changed in the review. Comments would be made by replying in the group, quoting the code at issue. The team used AccuRev for source code control. Its unique feature is per-developer branches for everything, while still being centralised. Thus, the changes would be committed locally, where the reviewers could get at them, but not merged. Even though the reviewers could see the code, it was inconvenient to compile, so a reviewer would occasionally ask the author to compile and run the code to see if a theory about the app panned out.

The reviews were extremely high quality, typically focused on low-level bugs and the implications of the implementation. Style was known, followed, and thus rarely commented on. Where two approaches would have worked, reviewers generally went along with the author’s choice. Occasionally there were threads discussing reviews. A lot of this behaviour was probably a function of the culture of the team. The systems people were generally older and from an electronics engineering background. Aspects of software engineering style didn’t really interest them. Using a variable amount of memory did. Our apps team did tend to have longer threads.

The second team was writing a Windows client/server application, also in C++, and using the language to its fullest potential. For a number of years this team had no code review or pairing, beyond a tech lead who tried to read every commit. After I joined this team I was asked to start a code review process. We went for tooling support and bought Code Collaborator (‘CoCo’) by SmartBear. We also looked at Crucible, but this was not long after its release and it wasn’t anywhere near as feature-rich as Code Collaborator. On this team all reviews were post-commit, again typically with two reviewers per review. We built some extra support around this process: a script would automatically upload every commit to CoCo and create a review for it. Reviews could be merged before being sent to reviewers, and if you put the right comments in your check-in message the new commit would automatically be added to the right review. Defects raised in the review could also be marked fixed in this fashion. CoCo was a web application, and all reviews happened in browser windows; a large review meant opening many, many windows, with quite a bit of inconvenience. Unfortunately, this didn’t really drive a change in behaviour (smaller, more frequent commits) because the main offenders typically didn’t feel the pain.
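
For the curious, the automation was conceptually simple. Here is a minimal sketch of what such a post-commit hook could look like (this is not our actual script: the “Review: 1234” marker syntax, the hook wiring and the ccollab invocations are my assumptions about how glue like this would drive Collaborator’s command-line client):

    #!/usr/bin/env python
    # Hypothetical post-commit hook: attach every commit to a review.
    import re
    import subprocess
    import sys

    # Assumed marker: a check-in message containing "Review: 1234"
    # means "add this commit to review 1234".
    REVIEW_MARKER = re.compile(r"Review:\s*(\d+)", re.IGNORECASE)

    def handle_commit(changelist_id, message):
        match = REVIEW_MARKER.search(message)
        if match:
            # Append the commit to the review named in the message.
            subprocess.run(
                ["ccollab", "addchangelist", match.group(1), changelist_id],
                check=True)
        else:
            # No marker: create a fresh review for this commit.
            subprocess.run(
                ["ccollab", "addchangelist", "new", changelist_id],
                check=True)

    if __name__ == "__main__":
        # The SCM supplies the changelist id; the message arrives on stdin.
        handle_commit(sys.argv[1], sys.stdin.read())

The point of keying everything off the check-in message was that developers never had to leave their normal commit workflow to keep reviews up to date.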

The quality of these reviews did not tend to be as high. The culture of this team meant that reviews focused a lot on the stylistic use of C++. Early reviews, while a task was still in progress, were more valuable than later ones, as they tended to look more into design considerations. One common pattern we noticed was that reviews would bank up. As it was all post-commit, developers would commit, create the review and move on, and the reviewers would ignore the reviews until later. All reviews were supposed to be finished by the end of the iteration (every six weeks), so every six weeks there was a lot of code to review, some of it quite old. There was an attempt to alleviate this with a ‘review spike’ half-way through the iteration. Despite all this, the reviews were seen as very useful.

Finally, another much smaller team was spun out. This team had week-long iterations, but still post-commit reviews of every story. A story wasn’t done until all reviews associated with it were closed. Because this team was three developers, for complex work everyone reviewed everyone else’s work. The process and tooling were otherwise identical to the previous team’s. Reviews did not bank up: hard to in a week. As we all shared a room of our own, long discussions usually broke out of the review tool and into face-to-face conversation. This was sometimes inconvenient: comments were captured by the reviewer after the discussion as ‘As discussed.’ Most of the time the author could remember what was discussed.

I think this was the apotheosis of reviewing in my experience. The good things about code reviews:

  • More people participate: more knowledge across the code.
  • Honestly, sometimes a little quiet and focus is actually a good environment to write code in.
  • The analysis of the implementation and its implications can happen at the reviewer’s pace, with some distance from the heady rush of first implementation. This reflection was useful.

The bad:

  • “I’ve finished with that and moved on; I’m not in the mood to go back to it.” Not a strong argument, but a common and, to some degree, understandable attitude.
  • “I’ve got to say something. I know I’ll comment on the variable names.” It was very easy for reviews to fall into the trivial, when they should have closed with no issues.
  • The biggest issue: a review would happen at some distance from the implementation. Upon reading someone else’s code it is always possible to suggest an alternative, and this can lead to a long discussion that doesn’t matter. This is code reviewing’s big disadvantage over pairing.

In the end, I’d do it again. Given a team of competent, passionate engineers who won’t or can’t (distributed; old-school) pair, I’d happily suggest code reviewing. Code reviewing was (as I understand it) the original motivation for pairing, and depending on the team, reviews can be better than pairing. For example, reviewing creates better cross-team understanding of the code base. On my first project after these three teams, when I realised that I didn’t have at least a loose understanding of every single line of code, it came as quite a shock. But code reviews on the wrong team can institutionalise pedantry and analysis paralysis.