Question

Our team has recently started conducting code reviews on each check-in.

As the team lead, I'm trying to find a balance between providing too many suggestions (annoying developers and decreasing the team's output) and letting go of code I would have written differently.

Is there any evidence, study, or guidance from well-known sources that suggests a helpful approach?

Solution

Keep in mind the overarching goal: in the end, only working software matters

Peer review and check-in code review share the goal of improving quality. But nothing is worse for quality than a demotivated developer. As a team lead, your role is not to endorse the code as something you could have written yourself, but to promote teamwork and ensure the overall result.

Set a clear scope for your review

Keep in mind: it's not your code, but the team's code. So, focus on things that could lead to wrong results.

  • Don't challenge the way your developer has chosen to meet the requirements unless you're certain it won't work (though in that case it should already have failed the tests, no?)

  • Don't challenge poor performance unless there's a measurement that shows where the issue is. Premature optimization is the root of all evil ;-)

  • If you find a design or software structure worth challenging, ask yourself why it wasn't caught up front! Code that's already written is expensive to rewrite. If this happens, it's time to review your software development and teamwork practices at least as much as the code.

  • Do address compliance with established coding standards. It's the most annoying topic to discuss for both the reviewer and the reviewed. When everyone on your team agreed to use capitalized class names and one person writes a class without, is it a matter of taste? Or of teamwork effectiveness and risk?

By the way, if you feel a coding standard isn't worth discussing, remove it from your standards and don't waste time and energy on it.

Develop leadership: the human side of the review

As a team leader, you may find here an opportunity to develop yourself and your team, beyond the formality of quality control:

  • Code reviews are much more pleasant for everyone if there's a true exchange. Give your developers the opportunity to show their skills, too (and yes, perhaps you'll learn something new).
  • Keep an open ear to criticism of design choices or existing standards. Sometimes people cope better with such frustrations simply because they were able to talk about them.
  • Coach your juniors: don't hesitate to give advice or refactoring directions for the next iteration. Don't do this with seniors: in another world, your respective roles could have been switched.

Take advantage of other practices

There are a couple of practices that can take work out of the code review itself:

  • Using a static code analyzer in your build chain can automate the finding of common bugs or non-portable constructs long before the peer review. Some tools can even check some of your coding standards.
  • If you have standards about code layout, use a pre-commit pretty-printer or similar formatter to apply the required format automatically; never spend time on something software can do for you better and without argument :-) (a minimal hook sketch follows this list).
  • Finally, quality is not only ensured by review but also by tests. If you don't use TDD yet, give it a thought, independently of code review.
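
To illustrate the first two points, here is a minimal sketch of a git pre-commit hook. It assumes a Python toolchain with black and flake8 installed; substitute whatever formatter and analyzer your team has standardized on.

```python
#!/usr/bin/env python3
# Minimal sketch of a git pre-commit hook: save as .git/hooks/pre-commit
# and make it executable. black and flake8 are stand-ins for whatever
# formatter and static analyzer your team has agreed on.
import subprocess
import sys

CHECKS = [
    ["black", "--check", "."],  # layout: fails if any file needs reformatting
    ["flake8", "."],            # static analysis: common bugs and style slips
]

failed = False
for cmd in CHECKS:
    if subprocess.run(cmd).returncode != 0:
        print(f"pre-commit check failed: {' '.join(cmd)}", file=sys.stderr)
        failed = True

if failed:
    sys.exit(1)  # a non-zero exit blocks the commit, so reviews never see it
```

This way the layout discussion happens once, when the team picks the tools, instead of on every review.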

OTHER TIPS

As developers, our mindset should always remain open and sceptical at the same time.

Open, because we never know when a developer may surprise us; and sceptical about our own ideas, because we often forget that in software engineering there's no single correct way to implement a solution. The rationale behind our solutions may make sense to us and make none to others. Behind a code smell there could be a great idea; maybe the developer just didn't find the way to express it properly.

Since we humans are terrible at communicating, don't make false assumptions; be willing to ask the code's owner about the code you are reviewing. If they failed to code the idea within the company's standards, as lead developer be willing to guide them too.

That's the subjective approach. The objective approach, IMO, is very well explained in this question.

In addition to the link above, the set of objectives to be achieved (maintainability, readability, portability, high cohesion, loose coupling, etc.) is not necessarily the Ten Commandments. You (the team) should be able to adapt these objectives to the point where the balance between quality and productivity makes the job comfortable and "habitable for developers".

I would suggest using static code analysis tools to measure progress toward these objectives. Tools like SonarQube provide Quality Gates and Quality Profiles that can be customized according to your priorities. SonarQube also provides an issue tracker, where developers can be assigned issues related to code smells, bugs, doubtful practices, etc.

These kinds of tools can be a good starting point, but as I said, stay sceptical. You may find some rules in Sonar meaningless for you, so feel free to ignore them or remove them from your quality profile.
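
As an aside, and assuming your setup behaves like most SonarQube analyzers (which honor an inline NOSONAR marker), a single finding the team has reviewed and accepted can be recorded in the code itself rather than argued about again at the next review. A small hypothetical sketch:

```python
import pickle

def load_trusted_cache(path: str):
    # Hypothetical example: the analyzer flags pickle loading as risky.
    # The team reviewed the risk (the file is machine-local and trusted),
    # so the NOSONAR marker records the agreed exception on this line only.
    # If a rule is never useful at all, remove it from the quality profile.
    with open(path, "rb") as f:
        return pickle.load(f)  # NOSONAR: trusted, locally produced file
```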

Meddling with a developer's code for cosmetic changes will demotivate the developer, but in some circumstances it has to be done. The lead has to find the balance between providing useful code review and learning to let go of minor shortcomings. https://blog.smartbear.com/sqc/for-the-new-team-lead-the-first-six-things-you-should-know/

Some things to keep in mind:

  1. It is as much about psychology as about technology, so there is no golden rule here.
  2. What concerns people is not just about knowledge but also about culture and position in the hierarchy.
  3. If this is a "long" game (a stable, established company), a well-integrated team where people trust each other usually has higher value than the code under review. Reviews should not be used to force things that are not absolutely required to proceed. The devil is in the details...
  4. If this is a "short" game (a side project, R&D, lots of freelancers in a group), the costs of code review often outweigh the advantages of doing it, and the attitude should be "just make it work".

There are only two things that matter.

  1. Are there automated tests for all requirements?
  2. Do they all pass?

Everything else is cosmetic and should be argued about over beer rather than enforced as a gate.

If you follow this view, then it limits you to a narrow area of focus.

Are the requirements good? Ideally you should know this before starting the task; performance, security, etc. should all be covered in there.

Are the tests good tests? Are there missed edge cases? Do they test the requirement well? Ultimately: can you write a test for an existing requirement that will fail? (A hypothetical example follows.)
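
To make that last question concrete, here is a hypothetical sketch (the requirement, function, and names are all invented): suppose a requirement says a discount is capped at 50%, and the reviewer asks whether any test actually pins that cap.

```python
# Hypothetical requirement: "a discount is capped at 50%".
def apply_discount(price: float, percent: float) -> float:
    capped = min(percent, 50.0)  # the cap the requirement demands
    return price * (1 - capped / 100)

def test_discount_is_capped_at_fifty_percent():
    # An edge case taken from the requirement, not from the happy path.
    # If this test is new and would have failed before the cap was added,
    # the requirement wasn't really covered by the existing suite.
    assert apply_discount(100.0, 80.0) == 50.0
```

If you can write such a test and it fails, you've found a real gap; if you can't, the suite is a trustworthy gate.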

Licensed under: CC-BY-SA with attribution