Why is sacrificing good software engineering practices typically the first choice for software development projects, assuming "good enough" quality? [duplicate]

softwareengineering.stackexchange https://softwareengineering.stackexchange.com/questions/270602

Question

I have observed a correlation between a customer ordering software of "good enough" quality and that same customer being unwilling to pay for good engineering practices (unit testing, code reviews and the like) so many times that I would call it causation. Still, I am having a hard time coming up with any logical reasoning behind such thinking, so I would appreciate some insights from the community here.

As I see it, the engineering practices mentioned above affect only two quality attributes: maintainability and, to a lesser degree, reliability. Except for throw-away prototypes, I can hardly imagine a customer in their right mind who would want to get software that

  • crashes every now and then (although I realize that MTBF expectations vary depending on the cost of error), and/or
  • incurs considerable maintenance costs down the road.

On the other hand, other quality attributes such as scalability or performance are not directly affected by these engineering practices (if we don't count architecture reviews, which are in a different league). What's more interesting, relaxing one or more of those attributes could lead to far greater cost reductions, while skipping proper engineering practices may well increase project costs through the so-called "endless debugging / stabilization cycle".

I must be missing something important, since the preference to "save" by not doing software development properly is so prevalent among customers.

Please enlighten me :)


Solution

This really depends on how the customer reacts when they get software they don't like. Very few will say, "Gee, you were right. We should have had tests," but most will feel you should have been able to make those "simple" changes anyway.

If you continue to struggle with customers, stop asking them whether they want you to write tests. Provide quotes, based on your expert opinion, for doing quality work. Do you ask them whether they want you to use an IDE that is more productive, or whether you should automate the build?

Customers need to understand that they are assuming the risks when they impose restrictions on a project. Instead of saying, "I told you so," get in the habit of saying, "No."

OTHER TIPS

I've never yet seen a customer who asked for poor-quality software. However, what they do want is the cheapest software, and they want it as soon as possible.

The cheapest and quickest way to develop software is to throw something together as quickly as possible, then debug it until it does pretty much what you wanted, and doesn't crash too often.

At that point the developer can ship the software and bill the customer. The seller is happy because they get paid. The purchaser is happy because they got what they ordered very quickly at a low price.

The end-user gets buggy and unreliable software. But the end-user isn't the one who makes purchasing decisions.

Long-term maintenance costs are not a consideration at any point in this cycle.

From what I have encountered, to build what can be considered good software, one usually needs a clear set of requirements; maybe not all of them, but at least a set of core rules that will not change. From those, you then choose your architecture, design patterns, languages and frameworks.

The problem with this is that most of the time the clients themselves do not really know what they want. They usually know what they want to achieve, but when you start asking why they want certain functionality as opposed to other functionality, most of them blank out. To add insult to injury, some clients may even take offence at all your questions, deeming them excuses for not doing the work and a waste of time.

Clients usually want solutions fast and they want them cheap (in the short run at least). They do not really care what you did as long as you deliver what they asked for. In my experience, clients will always go for the quick solution even if it will require loads of maintenance. Taking months to properly design a system that is easy to upgrade is something that, in my experience at least, does not happen.

I think your reasoning is flawed because you treat "best practices" as a boolean.

However, it's not a question of whether you apply them. It's a question of how much you apply them.

For example, testing: you can do a couple of manual tests, cover the basic cases automatically, build an extensive automated suite, test all your libraries/dependencies as well, have everything double-checked by third parties...
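
To make the cheap end of that spectrum concrete, here is a minimal sketch of "cover the basic cases automatically" (the parse_price function is hypothetical, invented purely for illustration):

    # test_parse_price.py -- the cheap end of the testing spectrum:
    # one small automated check for the happy path and an easy edge case.
    # parse_price is a hypothetical helper, used only for illustration.
    import unittest

    def parse_price(text: str) -> float:
        """Parse a price string like '$19.99' into a float."""
        return float(text.strip().lstrip("$"))

    class ParsePriceBasics(unittest.TestCase):
        def test_happy_path(self):
            self.assertAlmostEqual(parse_price("$19.99"), 19.99)

        def test_surrounding_whitespace_is_tolerated(self):
            self.assertAlmostEqual(parse_price("  $5.00 "), 5.0)

    if __name__ == "__main__":
        unittest.main()

Every further step along the spectrum (an extensive suite, testing your dependencies, third-party verification) buys more confidence at more cost.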

For example, code quality: you can code as you go, take time for a clean design, refactor often, double-check everything, have third parties triple-check it, produce formal proofs...

Everything has a cost. Basically, the more you want to ensure that your software is bug-free, the more it costs. The test suites and frequent refactoring can quickly cost more than the development itself. Now, what's the cost of a bug? Every client sits somewhere on this price/bug curve.

Some people just want something basic and would rather live with a bug once in a while than pay the extra bucks. There are plenty of examples of this:

  • people preferring a free tool even if it's known to have several quirks
  • people just wanting a prototype
  • people wanting software quick/cheap because they need quick ROI to stay afloat
  • people who don't want to spend a lot on software

There are also other extremes. Software for aircraft, aerospace, financial and medical applications is tested so thoroughly that even tiny changes quickly cost millions, because a lot of things have to be thoroughly verified. And don't even think of updating an open-source library ...or of using "normal" hardware! ;)

In these fields it is also common for the same module to be implemented in parallel by independent teams. The modules are then run in parallel, and if at some point their outputs differ, some kind of decision making takes place, such as a "majority wins" approach if there are three modules available (see the sketch below). ...so, are you ready to develop the same software with three different teams just to ensure it is reliable? ;)
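
As a toy sketch of that "majority wins" idea (the three variant functions are hypothetical stand-ins for independently developed modules, not real avionics code):

    # Toy sketch of N-version "majority wins" voting.
    # The three variants stand in for modules built by independent teams;
    # they are hypothetical examples, not real redundant implementations.
    from collections import Counter

    def variant_a(x: float) -> float:
        return x * x

    def variant_b(x: float) -> float:
        return x ** 2

    def variant_c(x: float) -> float:
        return pow(x, 2)

    def vote(x: float) -> float:
        """Run all variants and return the value a strict majority agrees on.

        If no value reaches 2-of-3, a real system would fail over or sound
        an alarm; here we just raise. Real voters also compare numeric
        outputs within a tolerance rather than by exact equality.
        """
        results = [variant_a(x), variant_b(x), variant_c(x)]
        value, count = Counter(results).most_common(1)[0]
        if count < 2:
            raise RuntimeError(f"no majority among {results}")
        return value

    print(vote(3.0))  # 9.0 -- all three agree here; 2-of-3 would also pass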

There are already several good answers.
I would add that sometimes time to market is one of the requirements. For example, there are industries where a large fraction of sales are generated at trade shows; that means a hard deadline, and if you miss it, you've lost opportunity and probably market share. In such a case, it's typically necessary to have your product ready for "the big expo" -- even if it has a few bugs or you had to incur some technical debt.

Delivery quality can be expressed as a function of cost and time.

The future cost of maintaining and enhancing the software can only be expressed as a function of the current delivery quality.

"Good enough is fine" --> I can live with the occasional crashes, maintenance cost, increased future feature adding cost..what I need NOW is a good enough software.. not a bad business decision under given circumstances (of cost and time)..in such cases goes out of the window are extra (less) lines of codes, refactoring, unit tests to some degree, and so on..

"Good enough is fine" and "I am also not seeing the increased future cost for maintenance and enhancement" --> I am an idiot and you are an unfortunate soul happen to work for me.

Licensed under: CC-BY-SA with attribution