Question

At the end of a 2-week sprint, a task has a code review, and in the review we discover a function that works and is readable, but it's quite long and has a few code smells. Easy refactor job.

Otherwise the task fits the definition of done.

We have two choices.

  • Fail the code review, so that the ticket doesn't close in this sprint, and we take a little hit on morale, because we cannot pass off the ticket.
  • The refactor is a small piece of work, and would get done in the next sprint (or even before it starts) as a tiny, half-point story.

My question is: are there any inherent problems or considerations with raising a ticket off the back of a review, instead of failing it?

The resources I have found and read usually describe code reviews as 100% or nothing, but I find that is not realistic.

Solution

are there any inherent problems or considerations with raising a ticket off the back of a review, instead of failing it?

Not inherently. For example, the implementation of the current change may have unearthed a problem which was already there, but wasn't known or apparent until now. Failing the ticket would be unfair, as you'd fail it for something unrelated to the task as actually described.

in the review we discover a function

However, I surmise that the function here is something that was added by the current change. In this case, the ticket should be failed as the code did not pass the smell test.

Where would you draw the line, if not where you've already drawn it? You clearly don't think this code is sufficiently clean to stay in the codebase in its current form; so why would you then consider giving the ticket a pass?

Fail the code review, so that the ticket doesn't close in this sprint, and we take a little hit on morale, because we cannot pass off the ticket.

It seems to me like you're indirectly arguing that you are trying to give this ticket a pass to benefit team morale, rather than benefit the quality of the codebase.

If that is the case, then you've got your priorities mixed up. The standard of clean code should not be lowered simply because it makes the team happier. The correctness and cleanliness of code does not hinge on the team's mood.

The refactor is a small piece of work, and would get done in the next sprint (or even before it starts) as a tiny, half-point story.

If the implementation of the original ticket caused the code smell, then it should be addressed in the original ticket. You should only be creating a new ticket if the code smell cannot be directly attributed to the original ticket (for example, a "straw that broke the camel's back" scenario).

The resources I have found and read usually describe code reviews as 100% or nothing, but I find that is not realistic.

Pass/fail is a binary state, which is by definition all or nothing.

What you're referring to here, I think, is more that you interpret code reviews as requiring perfect code or otherwise failing it, and that is not the case.

The code doesn't need to be immaculate; it simply needs to comply with the reasonable standard of cleanliness that your team/company employs. Adherence to that standard is binary: it adheres (pass) or it doesn't (fail).

Based on your description of the issue, it's clear that you don't think that this adheres to the expected code standard, and thus it should not be passed for ulterior reasons such as team morale.

Otherwise the task fits the definition of done.

If "it gets the job done" were the best benchmark for code quality, then we wouldn't have had to invent the principle of clean code and good practice to begin with - the compiler and unit testing would already be our automated review process and you wouldn't need code reviews or style arguments.

Other Tips

At the end of a 2-week sprint, a task has a code review [...] Easy refactor job.

Why does that pop up at the end of the sprint? A code review should happen as soon as you think the code is done (or even before). You should check your definition of done with each story you finish.

If you find yourself finishing stories so shortly before your demo/sprint review that you cannot fit in a "tiny" task, then you need to get better at estimating your work. Yes, that story did not get finished. Not because of a code review, but because you did not plan to incorporate changes from the code review. That's like estimating "testing" to take zero time, because "if you programmed it correctly, it will just work, right?". That's not the way it works. Testing will find errors, and code review will find things to change. If they didn't, they would be a big waste of time.

So to sum it up: yes, the DoD is binary. Pass or fail. A code review is not binary; it should be more like an ongoing task. You cannot fail it: it's a process, and in the end it's done. But if you don't plan properly, you will not get to that "done" stage in time and will be stuck in "not done" territory at sprint end. That's not good for morale, but you need to account for that in planning.

Simple: You review the change. You don't otherwise review the state of the program. If I fix a bug in a 3,000-line function, then you check that my changes fix the bug, and that's it. And if my change fixes the bug, you accept the change.

If you think the function is too long, then you put in a change request to make the function shorter, or split it up, after my change was accepted, and that change request can then be prioritised according to its importance. If the decision is made that the team has more important things to do, then it is handled later.

It would be ridiculous if you could decide development priorities during a code review, and rejecting my change for that reason would be an attempt to decide development priorities.

In summary, it is absolutely acceptable to accept a code change and immediately raise a ticket based on things you saw when reviewing the change. In some cases you will even do this if the change itself caused the issues: If it is more important to have the changes now than to fix the issues. For example if others were blocked, waiting for the change, then you want to unblock them while the code can be improved.

Fail the code review, so that the ticket doesn't close in this sprint, and we take a little hit on morale, because we cannot pass off the ticket.

This seems to be the problem. In theory you know what you should do, but it's close to the deadline, so you don't want to do what you know you should do.

The answer is simple: do whatever you would do if you got the same code for review on the first day of the sprint. If it would have been acceptable then, it should be now. If it would not, it shouldn't be now.

A huge part of the process is deciding what done means and sticking to your guns. It also means not over-committing and getting your peer reviews done in time to let testing ensure the work is also functionally complete.

When it comes to code review issues, there are a few ways you can handle them, and the right choice depends on a few factors.

  • You can just clean the code up yourself and let the person know what you did. Provides some mentoring opportunities, but this should be fairly simple stuff that can be done in a few minutes.
  • You can kick it back with comments as to what is wrong. If the error handling is done poorly, or the developer keeps repeating the same mistakes, this can be warranted.
  • You can create a ticket and incur technical debt. The ticket is there to make sure you pay it down later. It might be that you are on a time crunch and in the process of reviewing the changes you see a bigger problem not directly related to the change.

Bottom line is that when you are done with work you need to be done with it. If there are problems larger than what the developer worked on, raise the flag and move on. But you shouldn't be in a position where there are hours before the end of the sprint and you are just now getting to peer review. That smells like over-committing your resources, or procrastinating on the peer reviews (a process smell).

There are no inherent problems with deprioritizing code review issues, but it sounds like the main issues you need to agree on, as a team, are:

  1. What is the purpose of your code review?
  2. How do results of the code review relate to the Definition of Done for a work item?
  3. If code review does apply as a gating test, what issues are deemed 'blockers'?

This all comes down to what the team has agreed to as the definition of Done. If passing code review with Zero Issues is the definition of done for a work item, then you cannot close an item that has not met this requirement.

It's the same as if during unit testing a unit test failed. You would fix the bug, not ignore the unit test, if passing unit tests was a requirement for being Done.

If the team has not agreed to code reviews being a definition of Done, then your code reviews are not a gating acceptance test of the work item. They are a team activity that is part of your backlog process to look for additional work that might be needed. In that case, any issues you discover are unrelated to the requirements of the original work item and are new work items for the team to prioritize.

For example, it could be completely acceptable for a team to deprioritize fixing typos in some variable names as it does not impact the business functionality that has been delivered, even though the team really hates seeing the variable name "myObkect".

The higher-voted answers here are very good; this one addresses the refactoring angle.

In most cases, the majority of work when refactoring is understanding the existing code; changing it after that is generally the smaller part of the work for one of two reasons:

  1. If you're just making the code clearer and/or more concise, the necessary changes are obvious. Often you gained your understanding of the code by trying out changes that seemed cleaner and seeing whether they actually worked, or whether they missed some subtlety in the more complex code.

  2. You already have in mind a particular design or structure you need to make building a new feature easier. In that case, the work to develop that design was part of the story that generated the need for it; it's independent of the refactoring needed to get to that design.

Learning and understanding existing code is a fair amount of work for a non-permanent benefit (a month from now, someone is likely to have forgotten much about the code if they haven't continued to read or work with it over that time), so it doesn't make sense to do this except on areas of code that are causing you problems or that you're planning to change in the near future. In turn, since this is the main work of refactoring, you shouldn't refactor code unless it's currently causing you problems or you are planning to change it in the near future.

But there's one exception to that: if someone currently has a good understanding of the code that will leak away over time, using that understanding to make the code more clear and more quickly understood later on can be a good investment. That's the situation someone who's just finished developing a story is in.
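To make that concrete, here is a hypothetical sketch (the function, data shape, and names are invented for illustration, not taken from the question): a function that works and is readable but mixes validation, aggregation, and formatting, followed by the kind of easy extraction refactor that is cheap while the author's understanding is still fresh.

```python
# Hypothetical "before": works, is readable, but one function does everything.
def build_report(orders):
    lines = []
    total = 0.0
    for order in orders:
        # validation mixed into the main loop (a mild code smell)
        if order.get("quantity", 0) <= 0 or order.get("price", 0.0) < 0:
            continue
        subtotal = order["quantity"] * order["price"]
        total += subtotal
        lines.append(f"{order['sku']}: {subtotal:.2f}")
    lines.append(f"TOTAL: {total:.2f}")
    return "\n".join(lines)


# The "easy refactor job": same behaviour, responsibilities extracted into
# named helpers while the author still understands the code.
def is_valid(order):
    return order.get("quantity", 0) > 0 and order.get("price", 0.0) >= 0


def subtotal(order):
    return order["quantity"] * order["price"]


def build_report_refactored(orders):
    valid = [o for o in orders if is_valid(o)]
    lines = [f"{o['sku']}: {subtotal(o):.2f}" for o in valid]
    lines.append(f"TOTAL: {sum(subtotal(o) for o in valid):.2f}")
    return "\n".join(lines)
```

The refactor costs minutes now; a month from now, most of the cost would be re-deriving the understanding of what the function does.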

The refactor is a small piece of work, and would get done in the next sprint (or even before it starts) as a tiny, half-point story.

In this case, the fact that you're thinking of making a separate story for refactoring is a warning sign on several fronts:

  1. You're not thinking of refactoring as part of coding but as a separate operation, which in turn makes it likely to get dropped under pressure.

  2. You're developing code that will be more work to understand the next time someone needs to work with it, making stories take longer.

  3. You may be wasting time and energy refactoring things from which you don't get much benefit. (If a change happens much later on, someone's still going to have to re-understand the code then, anyway; that's more efficiently combined with the refactoring job. If a change doesn't happen later on, the refactoring didn't serve any purpose at all, except perhaps an aesthetic one.)

So the answer here is to fail the item, to make it clear that something in your process failed (in this case, the developer or team not allocating time to perform the review and implement the changes that come out of it), and have the developer immediately continue work on the item.

When you estimate for the next iteration, re-estimate the existing story as whatever amount of work seems to be left to make it pass review, and add it to your next iteration while preserving the estimate from the previous iteration. When the story is completed at the end of the next iteration, set the historical total amount of work to the sum of the first and second estimates, so you know how much estimated work really went into it. This will help produce more accurate estimates of similar stories in the future, at the current state of your process. (I.e., don't assume that your apparent under-estimate won't happen again; assume it will happen again until you have successfully completed similar stories while putting in less work.)
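To make the arithmetic concrete (with hypothetical numbers): a story estimated at 3 points fails review at the end of sprint 1; the remaining rework is re-estimated at 1 point and scheduled in sprint 2; when it finally completes, its historical cost is recorded as 3 + 1 = 4 points, and that 4, not the original 3, is what should inform estimates of similar stories.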

There are two ways to look at this problem in my opinion:

  1. The academic way
  2. The real world way

Academically speaking, most code review processes exist to fail the deploy of a PBI (product backlog item) when the code quality standard is not met.

However, nobody in the real world follows agile to the letter, if only because different industries have different requirements. So whether to fix the code now or take on technical debt (most likely by creating a new PBI) should be decided case by case. If it's going to compromise the sprint or a release, or introduce an unreasonable amount of risk, business stakeholders should be involved in the decision.

I'm surprised by the lack of response in the answers and comments to the notion of "failing" a code review, because that's not a concept I, personally, am familiar with. Nor would I be comfortable with that concept, or with anybody in my team using that terminology.

Your question explicitly calls on "agile practices," so let's revisit the agile manifesto:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Speak to your team. Discuss the code in question. Assess the costs and the benefits and decide - as a cohesive group of experts - whether to refactor this code now, later, or never.

Start collaborating. Stop failing code reviews.

in the review we discover a function that works and is readable, but it's quite long and has a few code smells...

Are there any inherent problems or considerations with raising a ticket off the back of a review, instead of failing it?

No problem whatsoever (in my team's opinion). I assume that the code meets the acceptance criteria stated in the ticket (i.e., it works). Create a backlog item to address the length and any code smells, and prioritize it just like any other ticket. If it really is small, then just prioritize it high for the next sprint.

One of the sayings we have is "Choose progressive improvement over postponed perfection".

We have a very fluid process, and build a fairly good number of 'proof of concept' features (1 or 2 per sprint) that make it through dev and test but never make it past internal stakeholder review (hmm, can we do this instead?), alpha, or beta... some survive, some don't.

On the current project, I've lost track of how many times we've built a certain feature, gotten it into the hands of the stakeholders, and a sprint or two later totally removed it, because product direction changed or requirements caused a complete recast of how the feature should be implemented. Any remaining 'refinement' tasks for a deleted feature, or tasks that don't fit the new requirements, get deleted as well as part of backlog grooming.

Neither. If it fails the code review then the task isn't done. But you can't fail code reviews on personal opinion. The code passes; move on to the next task.

It should be an easy call, and the fact that it isn't suggests that you don't have clear enough written down rules for code reviews.

  1. "The function is quite long". Write down: Functions must be less than X lines long (I am not suggesting that rules about length of function are a good thing).

  2. "There are some code smells". Write down: public functions must have unit tests for functionality and performance, both CPU and memory usage must be under limits x and y.

If you can't quantify the rules for passing a code review, then you will get these cases of what is basically 'code you don't like'.

Should you fail 'code you don't like'? I would say no. You will naturally start to pass/fail based on non-code aspects: Do you like the person? Do they argue strongly for their case or just do as they are told? Do they pass your code when they review you?

Also, you add an unquantifiable step to the estimation process. I estimate a task based on how I think it should be programmed, but then right at the end I have to change the coding style.

How long will that add? Will the same reviewer do the subsequent code review and agree with the first reviewer or will they find additional changes? What if I disagree with the change and put off doing it while I look for a second opinion or argue the case?

If you want tasks done quickly you need to make them as specific as possible. Adding a vague quality gate won't help your productivity.

Re: It's impossible to write down the rules!!

It's not really that hard. You really mean "I can't express what I mean by 'good' code". Once you recognise that, you can see it's obviously an HR problem if you start saying someone's work isn't up to scratch but you can't say why.

Write down the rules you can, and have discussions over beer about what makes code 'good'.

Licensed under: CC-BY-SA with attribution