Question

I am working at a robotics startup on a path-coverage team, and after I submit a pull request, my code gets reviewed.

My teammate, who has been on the team for more than a year, has made some comments on my code suggesting I do a lot more work than I believe is necessary. No, I am not a lazy developer. I love elegant code with good comments, clear variable names, consistent indentation, and proper case handling. However, he has a different type of organization in mind that I don't agree with.

I'll provide an example:

I had spent a day writing test cases for a change I made to a transition-finding algorithm. He suggested that I handle an obscure case that is extremely unlikely to happen; in fact, I'm not sure it is even possible for it to occur. My code already works in all of our original test cases and in some new ones that I found, and it passes the 300+ simulations that run nightly. However, handling this obscure case would take me 13 hours that could be better spent trying to improve the performance of the robot. To be clear, the previous algorithm that we had been using until now also did not handle this obscure case, and not once, in the 40k reports that have been generated, has it ever occurred. We're a startup and need to develop the product.

I have never had a code review before, and I'm not sure if I'm being too argumentative; should I just be quiet and do what he says? I decided to keep my head down and make the change, even though I strongly disagree that it was a good use of time.


I respect my co-worker and acknowledge that he is an intelligent programmer. I just disagree with him on one point and don't know how to handle disagreement in a code review.

I feel that the answer I chose meets the criterion of explaining how a junior developer can handle disagreement in a code review.

Was it helpful?

Solution

an obscure case that is extremely unlikely to happen--in fact I'm not sure it is even possible to occur

Avoiding untested behaviors in your code can be very important. If a piece of code runs, say, 50 times a second, a one-in-a-million chance will occur approximately every 5.5 hours of runtime. (In your case, the odds seem lower.)
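As a quick back-of-the-envelope check (a minimal Python sketch; the 50 Hz rate and the one-in-a-million odds are just the illustrative figures from the paragraph above):

    # Expected time until a one-in-a-million event at 50 calls per second.
    calls_per_second = 50
    odds_per_call = 1_000_000  # one-in-a-million chance on each call

    expected_seconds = odds_per_call / calls_per_second  # 20,000 seconds
    expected_hours = expected_seconds / 3600             # ~5.56 hours

    print(f"Expected roughly once every {expected_hours:.2f} hours of runtime")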

You may want to discuss priorities with your manager (or whoever is the more senior person in charge of the unit you work in). You will better understand whether, e.g., code performance or bullet-proof code is the top priority, and how improbable that corner case may be. Your reviewer may also have a skewed idea of the priorities. Having talked with the person in charge, you'll have an easier time (dis)agreeing with your reviewer's suggestions, and you'll have something to refer to.

It is always a good idea to have more than one reviewer. If your code is only reviewed by one colleague, ask someone else who knows that code, or the codebase in general, to take a look. A second opinion, again, will help you to more easily (dis)agree with the reviewer's suggestions.

Recurring comments across several code reviews usually point to a bigger issue that is not being clearly communicated, which is why the same things crop up again and again. Try to find out what that bigger thing is, and discuss it directly with the reviewer. Ask plenty of "why" questions. This helped me a lot when I started the practice of code reviews.

OTHER TIPS

I can give you an example of a corner case that "could never occur" and yet caused a disaster.

When the Ariane 4 was being developed, the values from the lateral accelerometers were scaled to fit into a 16-bit signed integer. Because the maximum possible output of the accelerometers, when scaled, could never exceed 32767 and the minimum could never fall below -32768, there was "no need for the overhead of range checking". In general, all inputs are supposed to be range-checked before any conversion, but in this case that would have meant trying to catch an impossible corner case.

Several years later, the Ariane 5 was being developed, and the code for scaling the lateral accelerometers was reused with minimal testing, as it was "proven in use". Unfortunately, the larger rocket could experience larger lateral accelerations, so the accelerometers were upgraded and could produce larger 64-bit floating-point values.

These larger values "wrapped" in the conversion code (remember, no range checking), and the results on the first launch in 1996 weren't good. It cost the company millions and caused a major hiatus in the program.
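To make the failure mode concrete, here is a minimal Python sketch of the same class of bug: a C-style cast of a floating-point value into a signed 16-bit integer with no range check. The function names and the sample value are illustrative, not the actual Ariane code.

    import struct

    def to_int16_unchecked(x: float) -> int:
        # Emulate a C-style cast: truncate the float, keep the low 16 bits,
        # and reinterpret them as a signed 16-bit integer (no range check).
        return struct.unpack('<h', struct.pack('<H', int(x) & 0xFFFF))[0]

    def to_int16_checked(x: float) -> int:
        # The range check the standard called for: fail loudly out of range.
        n = int(x)
        if not -32768 <= n <= 32767:
            raise OverflowError(f"value {x} out of int16 range")
        return n

    reading = 40000.0  # a value that "could never happen" on the old hardware
    print(to_int16_unchecked(reading))  # -25536: silently wrapped garbage
    try:
        to_int16_checked(reading)
    except OverflowError as e:
        print(f"caught: {e}")  # fails gracefully instead of feeding garbage on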


The point that I am trying to make is that you should not dismiss test cases as "will never happen", "extremely unlikely", etc.: the standards the team was coding to called for range checking of all external input values. If that had been tested for and handled, the disaster might have been averted.

Note that in Ariane 4 this was not a bug (everything worked well for every possible value); it was a failure to follow standards. When the code was reused in a different scenario, it failed catastrophically, whereas if the range of values had been clipped, it would likely have failed gracefully, and the existence of a test case for this might have triggered a review of the values. It is also worth noting that, while the coders and testers came in for some criticism from the investigators following the explosion, the majority of the blame was laid on management, QA, and leadership.

Clarification

While not all software is safety-critical, nor so spectacular when it fails, my intention was to highlight that "impossible" tests can still have value. This is the most dramatic case that I know of, but robotics can also produce some disastrous outcomes.

Personally, I would say that once someone has highlighted a corner case to the test team, a test should be put in place to check it. The implementation team lead or project manager may decide not to address any failures found, but they should be aware that the shortcoming exists. Alternatively, if the testing is too complex or expensive to implement, an issue can be raised in whatever tracker is in use and/or on the risk register, to make it clear that this is an untested case that may need to be addressed before a change of use, or that may rule out an inappropriate use.

Since it wasn't handled before, it's out of scope for your effort. You or your colleague can ask your manager whether it's worth the effort to cover this case.

With complex algorithms, it's very difficult to prove you have thought of every test case that will come up in the real world. When you intentionally leave a test case broken because it won't come up in the real world, you are potentially leaving other test cases broken that you haven't even thought of yet.

The other effect that often happens is when you handle additional test cases, your algorithm necessarily becomes more general, and because of that you find ways to simplify it and make it more robust for all your test cases. This saves you time in maintenance and troubleshooting down the road.

Also, there are umpteen cases of "this should never happen" bugs happening in the wild. That's because your algorithm might not change, but its inputs might change years down the road when no one remembers about this one unhandled use case. That's why experienced developers handle these sorts of things when they are fresh in their minds. It comes back to bite you later if you don't.

This is not a technical question but a business-strategy decision. You note that the suggested fix is for a very obscure case that will almost never happen. Here it makes a big difference whether you are programming a toy or, say, medical equipment or an armed drone. The consequences of a rare malfunction will be very different.

When doing code reviews, you should apply a basic understanding of the business priorities when deciding how much to invest in handling rare cases. If you disagree with your colleague's interpretation of the business's priorities, you may want to have a discussion with someone on the business side of things.

Code reviews are not purely about code correctness. In reality, that is quite far down the list of benefits, behind knowledge sharing and the slow-but-steady creation of a team consensus on style and design.

As you are experiencing, what counts as "correct" is often debatable, and everyone has their own angle on it. In my experience, limited time and attention can also make code reviews highly inconsistent: the same issue might or might not be picked up by different developers, or by the same developer at different times. Reviewers in the mindset of "what would improve this code?" will often suggest changes that they would not naturally add to their own work.

As an experienced coder (more than 15 years as a developer), I am often reviewed by coders with fewer years of experience than me. Sometimes they ask me for changes that I mildly disagree with, or think unimportant. However, I still make those changes. I fight battles over changes only when I feel the end result would cause a weakness in the product, where the time cost is very high, or where a "correct" point of view can be made objective (e.g. the change being asked for won't work in the language we are using, or a benchmark shows a claimed performance improvement is not one).

So I suggest you pick your battles carefully. Two days coding a test case that you think is unnecessary is probably not worth the time and effort to fight over. If you are using a review tool, such as GitHub pull requests, then maybe comment there about the cost/benefit as you perceive it, to note your objection, but agree to do the work anyway. This counts as a gentle push-back, so the reviewer knows they are hitting a boundary, and more importantly it records your rationale, so cases like this can be escalated if they reach a deadlock. You want to avoid escalating written disagreements (you don't want an Internet-forum-style argument over work differences), so it may be helpful to talk the issue through first and record a fair summary of the outcome, even if you still agree to disagree about the need for the work (it is entirely possible a short, friendly discussion will settle it for you both anyway).

After the event, this is a good topic for a sprint review or a development-team planning meeting. Present it as neutrally as you can, e.g. "In code review, Developer A identified this additional test case; it took an extra two days to write. Does the team think the extra coverage was justified at this cost?" This approach works much better if you actually do the work, as it shows you in a positive light: you have done the work and just want to poll the team on risk aversion vs. feature development.

I'd advise you to at least assert against the obscure case. That way, not only will future developers see that you actively decided against handling the case, but, with good failure handling (which should already be in place), the assertion would also catch surprises.

And then, make a test case that asserts that failure. This way, the behaviour is better documented and will show up in unit tests.

This answer obviously assumes that your judgement of the event as "extremely unlikely, if even possible" is correct, and we cannot judge that. But if it is, and your colleague agrees, then an explicit assertion against the event should be a satisfactory solution for both of you.
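A minimal sketch of this pattern in Python, using pytest; find_transitions and the empty-path condition are hypothetical stand-ins for the real algorithm and corner case:

    import pytest

    def find_transitions(path):
        # The "impossible" corner case: fail loudly rather than return
        # something silently wrong, and document the decision here.
        if not path:
            raise AssertionError("empty path: believed unreachable in production")
        # Placeholder body: pair each point with its successor.
        return list(zip(path, path[1:]))

    def test_obscure_case_fails_loudly():
        # The test documents the behaviour: this input is deliberately rejected.
        with pytest.raises(AssertionError):
            find_transitions([])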

Since you seem to be new there, there is only one thing you can do: check with the team leader (or project leader). 13 hours is a business decision; for some firms or teams it's a lot, for others nothing. It's not your decision, not yet.

If the lead says "cover that case", fine; if he says "nah, screw it", fine - his decision, his responsibility.

As for code reviews in general, relax about them. Having a task returned to you once or twice is perfectly normal.

One thing I don't think I've seen addressed head-on, although it was touched on in @SteveBarnes's answer:

What are the repercussions of a failure?

In some fields, a failure is an error on a web page, or a PC blue-screening and rebooting.

In other fields, it's life or death: a self-driving car locks up, a pacemaker stops working, or, as in Steve's answer, stuff blows up, resulting in the loss of millions of dollars.

There is a world of difference in those extremes.

Whether 13 hours to cover a "failure" is worth it ultimately shouldn't be up to you. It should be up to management and the owners. They should have a feel for the bigger picture.

You should be able to make a good guess at what a failure would cost. Will your robot simply slow down or stop? Suffer degraded performance? Or will a failure of the robot cause monetary damage? Loss of life?

The answer to THAT question should drive the answer to "is it worth 13 hours of the company's time?" Notice: I said the company's time. They pay the bills and ultimately decide what's worth it. Your management should have the final say either way.

The best way to handle disagreement is the same regardless of whether you're a junior developer, a senior developer, or even the CEO.

Act like Columbo.

If you've never watched any Columbo, it was a pretty fantastic show. Columbo was a very unassuming character; most people thought that he was a little bit crazy and not worth worrying about. But by appearing humble and just asking people to explain, he was able to get his man.

I think it's also related to the Socratic method.


In general you want to ask questions of yourself and others to make sure that you're making the right choices. Not from a position of "I'm right, you're wrong," but from a position of honest discovery. Or at least as close to that as you can manage.

In your case you have two ideas here, but they have fundamentally the same goal: to make the code better.

You're under the impression that skimping on code coverage for a potentially improbable (impossible?) case in favor of development of other features is the best way to do that.

Your coworker is under the impression that being more careful about the corner cases is more valuable.

What do they see that you don't see? What do you see that they don't see? As a junior developer you're actually in a great position because you naturally should be asking questions. In another answer someone mentions how surprisingly likely a corner case is. So you can start out with, "Help me understand - I was under the impression that X, Y, and Z - what am I missing? Why will the widget framboozle? I was under the impression that it would fintuble under cromulent circumstances. Will the swizzle stick actually embiggen the ANZA brushes?"

As you question your assumptions and your findings you will dig down, uncover biases, and eventually figure out what the correct course of action is.

Start with the assumption that everyone on your team is perfectly rational and they have the best interests of the team and the product in mind, just like you. If they're doing something that doesn't make sense, then you need to figure out what you don't know that they do, or what you know that they don't.

Maybe talk to the person who is responsible for prioritizing work? In a startup, that could be the CTO or a product owner. They could help determine whether this extra work is required and why. You could also bring up your concerns during daily standups (if you have them).

If no one is clearly responsible for planning work (e.g. a product owner), try to talk to the people around you. Otherwise it could become an issue later, with everyone pushing the product in opposite directions.

13 hours is not such a big deal; I'd just do it. Remember, you're getting paid for it. Just chalk it up as "job security". It's also best to keep good karma within the team. Now, if it were something that would take you a week or more, then you could involve your manager and ask him whether that's the best use of your time, especially if you don't agree with it.

However, you sound like you need leverage in your group. Here's how you get leverage: ask for forgiveness, not permission. Add things to the program as you see fit (within scope, of course, i.e. make sure it completely solves the problem the boss wants) and tell your manager or colleagues after the fact. Don't ask them, "Is it OK if I add feature X?" Rather, just add features that you personally want in the program. If they get upset at a new feature or don't agree with it, be OK with removing it. If they like it, keep it.

In addition, whenever they ask you to do something, go the "extra mile" and add a lot of things they forgot to mention, or things that would work better than what they asked for. But don't ask them if it's "OK" to go the extra mile. Just do it and casually tell them about it after it's done. What you're doing is training them.

What will happen is that your manager will peg you as a "go-getter" and start putting his trust in you, and your colleagues will start seeing you as the lead, because you're starting to own the program. Then, when things like what you mention happen in the future, you'll have more say, because you're essentially the star of the team, and teammates will back down if you don't agree with them.

Code review serves several purposes. One you are obviously aware of is, "Is this code fit for purpose?" In other words, is it functionally correct; is it adequately tested; are the non-obvious parts appropriately commented; does it conform to the project's conventions?

Another part of code review is sharing of knowledge about the system. It's an opportunity for both the author and the reviewer to learn about the changed code and how it interacts with the rest of the system.

A third aspect is that it can surface problems that existed before any changes were made. Quite often, when reviewing someone else's changes, I'll spot something missed in a previous iteration (quite often something of my own). A statement like "Here's an opportunity to make this more robust than it was" is not a criticism, so don't take it as one!

My current team regards code review not just as a gateway or hurdle that the code must pass unscathed before commit, but primarily as an opportunity for a somewhat structured discussion of a particular area of functionality. It's one of the most productive occasions for information sharing. (And that's a good reason to share the reviewing around the team, rather than always the same reviewer).

If you perceive code reviews as a confrontational activity, that's a self-fulfilling prophecy. If instead you consider them as the most collaborative part of your work, they will stimulate continual improvements to your product and to how you work together. It helps if a review can be clear about the relative priorities of its suggestions - there's a mile of difference between "I'd like a helpful comment here" and "This breaks if x is ever negative", for example.

Having made a whole lot of general statements above, how does that apply to your specific situation? I hope it's now obvious that my advice is to respond to the review with open questions and to negotiate which approach has the most value. For your example case, where an extra test is suggested, try something like: "Yes, we can test for that; I estimate it will take <time> to implement. Do you think the benefit is worth it? And is there something else we can do to guarantee the test isn't necessary?"


One thing that does strike me when I read your question: if it takes two days' effort to write a new test case, then either your new test is a very different scenario from your existing tests (in which case it probably has a lot of value) or you have identified a need for improved code reuse in the test suite.


Finally, as a general comment on the value of code reviews (and as a pithy summary of what I've said above), I like this statement from Maintainers Don't Scale by Daniel Vetter:

At least for me, review isn’t just about ensuring good code quality, but also about diffusing knowledge and improving understanding. At first there’s maybe one person, the author (and that’s not a given), understanding the code. After good review there should be at least two people who fully understand it, including corner cases.

Code can ALWAYS be better.

If you're in a code review and you don't see anything that could be better, or a unit test that might catch a bug, it's not that the code is perfect; it's that the reviewer isn't doing their job. Whether you choose to mention an improvement is a personal choice. But almost any time your team does a code review, there should be things someone notices that could be better, or everyone has likely just wasted their time.

That said, whether you act on the comments is up to your team. If your changes fix the issue or add enough value unchanged that your team accepts them, then merge them in and log the comments in the backlog for someone to address later. If the team finds your changes add more risk or complexity than value, then you have to resolve the issues accordingly.

Just remember that any code has at least one more edge case that could be tested and could use at least one more refactoring. This is why code reviews are best done as a group, with everyone looking at the same code at the same time, so that at the end everyone can come to a consensus on whether the code under review is acceptable as it is and adds enough value to merge into the common base, or whether certain things must be done before there is enough value to merge.

Since you are asking this question, I assume you're not actually doing "code reviews", but instead creating a pull request or using some other submission mechanism for others to comment on in a non-deterministic way. So now you're into a management issue and a definition of done. I'd guess your management is indecisive, doesn't actually understand the process and purpose of code reviews, and likely doesn't have a "definition of done" (DoD), because if they did, your DoD would explicitly answer this question and you wouldn't have had to come here and ask.

How do you fix it? Well, ask your manager to give you a DoD and to tell you whether you always have to implement all comments. If the person in question is your manager, then the answer is self-evident.

This question is not about the virtues of defensive programming, the dangers of corner cases, or the catastrophic risks of bugs in physical products. In fact, it's not really about software engineering at all.

What it's really about is how a junior practitioner handles an instruction from a senior practitioner when the junior can't agree or appreciate it.

There are two things you need to appreciate about being a junior developer. First, while it's possible that you're in the right and he's in the wrong, it is, on the balance of probabilities, not likely. If your coworker is making a suggestion whose value you can't see, you need to seriously entertain the possibility that you're just not experienced enough to understand it. I don't get that sense from this post.

The second thing to appreciate is that your senior colleague is so called because he has more responsibility. If a junior breaks something important, they're not going to be in trouble if they followed instructions. If a senior allowed them to break it, however (by not raising issues in code review, for instance), they would quite rightly be in a lot of bother.

Ultimately, it's an implicit requirement of your job that you follow instructions from those the business trusts to lead its projects. Are you generally unable to defer to seniors when there's good reason to value their experience? Do you intend to follow no instruction you cannot understand? Do you think the world should stop until you are convinced? These values are incompatible with working in a team.

A final point. Think back to the projects you wrote six months ago, or the projects you wrote at university. See how bad they now seem: all the bugs and upside-down design, the blind spots and the misguided abstractions? What if I told you that in six months' time you'll spot the very same kinds of flaws in the work you're doing today? Does that help demonstrate how many blind spots are in your current approach? Because that is the difference that experience makes.

constantly makes comments on my code that suggest I do a lot more work than is necessary.

You can give recommendations, but ultimately it's not your role to decide what's necessary. It's your job to implement what management (or, in this case, your reviewer) decides is necessary. If you disagree too often or too strongly with what is deemed necessary, you will likely find yourself out of a job. Consequently, it's part of your professionalism to come to terms with this and be at peace with it.

He had suggested that I handle an obscure case that is extremely unlikely to happen

There are other great answers here that show how even non-bugs (i.e. something that can provably never fail) should still sometimes be reworked (e.g. to build in future safety of the product, to follow standards, etc.). Part of the role of a great developer is to have as much confidence as possible that your code will be robust in every conceivable situation, and future-proofed as well, not just working most of the time under tested situations in present conditions.

Suggestions to code reviewers to increase the business usefulness of your code review (you as OP should propose such a change):

  • Mark your comments by type: "Critical"/"Must-do"/"Optional"/"Suggested improvement"/"Nice to have"/"I'm musing".

    If this seems too obsessive or complicated, at least use two levels: "Must fix to pass review and be allowed to merge your change" / "Everything else".

Suggestions for handling code review suggestions that seem less critical to make:

  • Create an open ticket in your ticketing system of choice (your team uses one, hopefully?), tracking the suggestion

  • Put the ticket number in a response comment on the code review item, if your process allows responses to comments (as Fisheye or email reviews do).

  • Reach out to the reviewer and explicitly ask whether the item is of the "must fix or it won't be merged/released" type.

    • If the answer is "yes" but you disagree, let the person responsible for project management (the PM, your manager, etc.) make the decision; present the disagreement fully and honestly. This isn't about which of you is "right" but about what's better for the project, so it's the PM's or manager's job.

Now, treat that ticket as any other development request.

  1. If, after escalation, it's deemed urgent, treat it as any urgent dev request: deprioritize other work and work on this.

  2. Otherwise, work on it according to the priority it was assigned and its ROI (which can differ based on your line of business, as explained in other answers).

You should not escalate this to the management.

In most companies, the management guy will always choose not to write that extra test, not to spend time marginally improving code quality, and not to lose time refactoring.

In a lot of cases the code quality depends on the unwritten rules in the development team and on the extra effort that the programmers put in.

You are a junior developer, and this is your first code review. You must accept the advice and do the work. You can only improve the workflow and the rules in your team if you first know and respect them for a while, so that you can understand why they are there. Otherwise you'll be the new guy who doesn't respect the rules and becomes the lone wolf of the team.

You are new to the team: follow the advice you get for a while, find out why it is given, and don't question the first advice you receive in the scrum meeting. The real business priorities will become evident to you after a while without asking (and they may not be what the management guy tells you face to face).

It is very important that you write code that satisfies what your lead or management requests. Any other detail is just a "nice to have". If you are an expert in your field (read: "if you are not a junior developer"), then you are "eligible" to address minor issues you find along the way without consulting your leaders every time.

If you think something is wrong, and you are relatively expert in your field, then the chances are high that you are right.

Here are some statements that could be useful to you:

  • I was requested to do X, code reviewer suggests also doing Y, should I do it or should I move on to more important stuff?
  • You are suggesting Y to me; can you give me at least one test case that would catch that behaviour, so I can test it? I believe that code won't be called.
  • Maybe we should develop a safe fallback for uncovered cases first? That way we catch them early and can fix them as we go, letting us focus on more important stuff.
  • OK, while I am implementing Y, could you be so kind as to write some test cases for it, so we get that thing done once and for all?
  • I was requested to do X, and I think I could do Y unless there are other priorities. Next time, why don't you file a feature request instead of putting it as a review comment on my code? At least then we can hear further opinions from other team members on that feature before implementing it. (Generally, any important piece of work should be a feature and should be handled by more than one person; code review should be about reviewing the code and the solution's approach; the rest is bug-fixing and features.)

Do you seriously think that the attitude of your reviewer is damaging the project, or do you think he's right most of the time (except that maybe he sometimes makes minor errors in evaluation and exaggerates)?

To me it sounds more like he's writing things that do not belong in a code review, which is bad practice because it makes it harder for everyone to track the work. However, I don't know what other review comments he has made, so I can't say anything about other cases.

In general try to avoid both of the following:

  • You are not doing what has been requested
  • You make your reviewer feel dumb

If he's actually reviewing your code, it's because management trusts him more than you.

When there's disagreement during code review about scope:

  1. Document the scope that's actually covered by the code. Nobody likes nasty surprises.
  2. Realize that scope is a business decision. Scope should already be known by the time you start working on a feature, but if it isn't you can always ask for clarification later.

If the code reviewer is the one making the business decisions, he can change the scope at any time, even during code review, but he isn't doing so in his role as code reviewer.

If you can't prove the edge case to be impossible, you must assume it to be possible. If it is possible, then it is inevitable that it will eventually occur, and sooner rather than later. If the edge case hasn't occurred in testing, that may be a hint that test coverage is incomplete.

  1. Accept the feedback.
  2. Before making any changes to your code, try your best to build a test for the edge case and see if you can get a test failure (proof that the issue exists); see the sketch below. If it's impossible to create such a test case and get a failure, then you may be able to conclude that the edge case is actually impossible (although I'd be hesitant to draw such a conclusion).
  3. If you can get a test failure, apply the appropriate code change to pass the test.

For the good of the product, you probably want to be able to produce the edge case and induce a failure, so that you can apply the fix and have confidence that a potential problem has been averted.
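As a sketch of that workflow (the interpolate function and the out-of-range edge case are toy stand-ins, not the poster's algorithm): write the test that tries to provoke the edge case first, and only change the code once you have seen it fail.

    def interpolate(a: float, b: float, t: float) -> float:
        # Toy stand-in for the algorithm under review.
        return a + (b - a) * t

    def test_edge_case_t_slightly_out_of_range():
        # Suspected edge case: t drifts just outside [0, 1] due to float error.
        result = interpolate(0.0, 10.0, 1.0000001)
        # If this fails, the issue is proven and we have a regression test
        # ready for the fix; if we truly cannot make it fail, the edge case
        # may indeed be impossible.
        assert 0.0 <= result <= 10.0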

The whole point of code reviews is to put additional eyes on the code. None of us is immune from mistakes or oversights. It's all too common to look at a piece of code many times and not notice an obvious error, where a fresh pair of eyes can pick it up immediately.

As you said, you've implemented a new algorithm. It would be unwise to draw conclusions about the behavior of your new algorithm based on the behavior of, or observations about, its predecessor.

There are code reviewers who know what they are doing, and if they say something needs to be changed, then it needs to be changed, and when they say something needs to be tested, then it needs to be tested.

There are code reviewers who need to justify their own existence by creating useless work for others.

Which one is which is for you to decide, and how to handle the second kind is more a question for workplace.stackexchange.

If you use scrum, then the question is whether your work does what it is supposed to do (apparently it does), and you can put handling the extremely rare, maybe impossible, case on the backlog, where it will be prioritised; if it goes into a sprint, your reviewer can feel free to pick it up and do the 13 hours of work. If you do job X, and because you do job X you realise job Y also needs doing, then job Y doesn't become part of job X; it is its own independent job.

Licensed under: CC-BY-SA with attribution