Question

I'm working on an automated regression test suite for an app I maintain. While developing it, I ran across some behavior that is almost certainly a bug. So, for now, I've modified the regression test so that it doesn't register a failure--that is, it deliberately lets this bad behavior go by.

So, I am interested in the opinions of others on this site. Obviously, I'll file a bug in our defect tracker to make sure this erroneous behavior gets fixed. But are there any compelling reasons either to change the regression test so that it constantly indicates failure, or to leave the test loosened so that it doesn't report a failure until we can get around to fixing the defective behavior? I think of this as a six-of-one, half-a-dozen-of-the-other kind of question, but I'm asking here because I thought others may see it differently.


@Paul Tomblin,

Just to be clear--I've never considered removing the test; I was simply considering modifying the pass/fail condition to tolerate the failure so it isn't thrown up in my face every time I run the suite.

I'm a little concerned that repeated failures from known causes will eventually get treated the way warnings are in C++. I know developers who see warnings in their C++ code and simply ignore them because they think they're useless noise. I'm afraid leaving a known failure in the regression suite might cause people to start ignoring other, possibly more important, failures.

BTW, lest I be misunderstood, I consider warnings in C++ to be an important aid in crafting strong code, but judging from the other C++ developers I've met, I think I'm in the minority.


Solution

I would say "hell yeah!". The simple fact is, is it failing? Yes! Then it should be logged. You are pretty much compromising your testing by allowing a failed test to pass.

One thing that would concern me personally is that if I did this and then went under a bus, the "patch" might never get removed, meaning that even after a "bugfix" the bug may still remain.

Leave it in, update your project notes, perhaps even move the severity down (if possible), but certainly don't break the thing that is checking for broken things ;)

OTHER TIPS

If you stop testing it, how are you going to know when it's fixed, and more importantly, how are you going to know if it gets broken again? I'm against taking out the test, because you're likely to forget to add it back in again.

We added a 'snooze' feature to our unit tests. It allowed a test to be annotated with an attribute that basically said 'ignore failures for X weeks from this date'. Developers could use it to annotate a test they knew would not get fixed for a while, but no future intervention was required to manually re-enable it; the test would simply pop back into the test suite at the designated time.
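A rough sketch of what such a snooze annotation could look like, assuming a Python/pytest suite purely for illustration (the answer doesn't name a language or framework, and the decorator name, date, and test below are made up):

    import datetime
    import pytest

    def snooze(until, reason):
        """Treat failures as expected until the given ISO date; after that,
        the test rejoins the suite and failures count again."""
        still_snoozed = datetime.date.today() < datetime.date.fromisoformat(until)
        return pytest.mark.xfail(
            still_snoozed,
            reason=f"snoozed until {until}: {reason}",
            strict=False,  # an unexpected pass is reported as XPASS, not an error
        )

    @snooze(until="2030-06-01", reason="known defect, logged in the tracker")
    def test_widget_total():
        # Stand-in assertion for the real regression check.
        assert 2 + 2 == 5

While snoozed, the failure shows up as XFAIL in the run summary rather than as a red failure, so it stays visible without breaking the build.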

It should remain a failure if it's not doing what was expected.

Otherwise, it's too easy to ignore. Keep things simple: either it works or it doesn't. Failure or success :)

-- Kevin Fairchild

While I agree with most of what Paul said, the other side of the argument would be that regression tests, strictly speaking, are supposed to test for changes in the program's behavior, not just any old bug. They're specifically supposed to tell you when you've broken something that used to work.

I think this boils down to what other sorts of tests are run on this app. If you have some sort of unit test system, maybe that would be a more appropriate place for this test, rather than in the regression test (at least until the bug is fixed). If the regression tests are your only tests, however, I would probably leave the test in place.

While it is best to have tests fail when there are bugs, this is often an impractical choice. When first adding tests to an area, it is particularly easy to get bogged down in the bugs one discovers and therefore not finish the primary objective. Also, long-failing tests become so much noise, and get easier and easier to ignore.

Writing failing tests is part of fixing bugs, and is only really important when fixing those bugs is important. This should be determined by your current product and quality priorities. If you happen to produce tests as a side effect of some other effort, that's nice but should not be allowed to distract you from your actual priorities.

I would recommend turning any bugs discovered while improving testing into warnings and filing defects against them. It's a breadth-first approach that gets you through the current task while still making use of what you learn. The tests can then easily be turned into real failures once the bugs are scheduled for fixing.
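A minimal sketch of that warn-now, fail-later pattern, again assuming pytest for illustration; the helper, the flag, and the test are invented, not taken from the answer:

    import warnings

    # Flip this to True once the corresponding defects are scheduled for fixing;
    # the same checks then become hard failures instead of warnings.
    ENFORCE_KNOWN_BUGS = False

    def check_known_bug(condition, message):
        """Report a known defect as a warning until it is promoted to a failure."""
        if condition:
            return
        if ENFORCE_KNOWN_BUGS:
            raise AssertionError(message)
        warnings.warn(message)

    def test_report_totals():
        result = 2 + 2  # stand-in for the real behavior under test
        check_known_bug(result == 5, "known defect: report total is off by one")

pytest collects the warning in its warnings summary, so the known defect stays visible on every run without registering as a failure.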

Unlike other respondents, I think that failures are just as easy to ignore as warnings, and the cost of training your team to ignore failures is too high to let it happen. If you produce failing tests as part of this effort, you will have destroyed the utility of your regression test suite when the task was improving it.

Having a failing test is kind of grating. There's a difference between broken code and unfinished code, and whether the test should be addressed immediately depends on which circumstance this failing test exposes.

If it's broken, you should fix it sooner rather than later. If it's unfinished, deal with it when you have time.

In either case, you clearly can live with it behaving badly (for now), so as long as the issue is logged you might as well not have the test nag you until you have time to fix it.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow