Question

Disclaimer: I don't expect zero tech debt. In this post, "technical debt problem" refers to debt severe enough to cause a negative impact, e.g. on productivity.

Recently I have been thinking about building a tool to automatically generate a tech debt report from the issue tracker: introduction rate vs. cleanup rate over time. Apart from the totals, the numbers would also be broken down by project team and by manager, so that managers could easily get insight into the current tech debt level without delving into the issue tracker and its details (such a tool might already exist; I need to do some research to avoid reinventing the wheel).
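
As a rough illustration of what the core of such a report could compute (a sketch only; the field names and sample records below are invented and not tied to any particular tracker), it is essentially bucketing debt tickets by their created and resolved dates:

    # Minimal sketch of an introduction-rate vs. cleanup-rate report.
    # Assumes debt items were already exported from the issue tracker;
    # field names and records are illustrative placeholders.
    from collections import Counter
    from datetime import date

    debt_items = [
        {"team": "payments", "created": date(2024, 1, 8),  "resolved": date(2024, 2, 2)},
        {"team": "payments", "created": date(2024, 1, 15), "resolved": None},
        {"team": "search",   "created": date(2024, 2, 5),  "resolved": None},
    ]

    introduced, cleaned_up = Counter(), Counter()
    for item in debt_items:
        introduced[(item["team"], item["created"].strftime("%Y-%m"))] += 1
        if item["resolved"]:
            cleaned_up[(item["team"], item["resolved"].strftime("%Y-%m"))] += 1

    for team, month in sorted(introduced | cleaned_up):
        key = (team, month)
        print(f"{month} {team}: +{introduced[key]} introduced, -{cleaned_up[key]} cleaned up")

A breakdown by manager or by impact level would just be another grouping key.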

Motivation-wise, tech debt has been snowballing for years. Whenever developers increase a project estimate to include tech debt cleanup, more often than not they are asked to remove those numbers from the estimate, so refactoring/cleanup work usually ends up indefinitely postponed. I hope a periodic report will help improve how tech debt is managed.

However, on second thought, I wonder whether increasing the visibility of the tech debt level will really help raise its priority. Generally speaking, is a tech debt problem an org culture issue, or just a lack of tooling/insight? I suppose there's no universal answer, but I wonder which is the more common cause. What's your experience?

--- Update 2/28

Clarification: I believe most managers are intelligent enough to realise there's an impact, especially after teammates have reported pain in terms of project productivity. My gut feeling is that they don't have a concrete picture of how serious the problem is. My idea is to help management gain a clearer picture, via two steps:

  1. Have tech debt logged, and have its impact tracked (there are challenges, but that's beyond the scope of this question).
  2. Have a report of introduction rate vs. cleanup rate (there could be a further breakdown by high/low impact).

My curiosity is about whether these efforts will help, or whether they are just a waste of time, generally speaking (not specific to my org) - hence the question about your experience. If it's an org culture issue, then most likely these efforts won't help much.


Solution

Anecdotally

I am a consultant developer. I have been hired on several occasions specifically to "fix the development issues". Some customers are aware of issues in their development process, whereas others only see it as a bunch of bugs that need to be fixed without looking at the cause of the bugs (i.e. bad coding practices).

In my experience, one company that asked for help in fixing the development process was actually interested in taking steps to improve the development process. In other companies, their interest existed up until it required action from their side (e.g. reprimanding a developer who actively rolls back refactoring or improvements, or actually giving me access to the tools needed to set up a CI/CD pipeline).

Based on my experience, bad practice starts off as a developer deficiency. Not a willful one, but rather a matter of either inexperience or a general corner-cutting attitude. Whatever the cause, these developers show quick results because they don't take the time for due diligence such as testing, reviewing, or refactoring.

Management will notice those quick results, and will over time come to expect this efficiency. They don't handle the fallout from bad practice (i.e. bugs) directly, but they do benefit from the shorter development times.

At this point, it becomes a feedback loop. Management communicates an expected (quick) deadline. Developers are forced to cut corners to meet it. The codebase degrades. The initial quick release turns into a maintenance cycle of unclear and erratic bugs, regressions, and a general lack of readability. To cope while keeping up with the continuing demand for quick results, developers are forced to cut corners in their bugfixes as well.

The cycle continues: the quality and performance of the codebase erode, and the developers' good-practice skills erode along with them, coming to be regarded as "needlessly" time-consuming. If some developers stick to good practice and others don't, management will judge them on how quickly they deliver - without observing the bugs or the causes of the bugs.
The good-practice developers are disincentivized, the bad-practice developers are incentivized. Over time, due to positive/negative feedback from management, the bad-practice developers take on a more leading role than the good-practice developers, and bad practice becomes the law of the land.

Speaking from my experience at a company whose main workforce was external consultants: the good-practice devs simply leave, or become disenfranchised bad-practice devs. The (initially) bad-practice devs stick around. This perpetuates the imbalance in which the bad-practice devs have seniority over the good-practice devs.

At this point, bad practice has become an endemic company culture. It is reinforced from all sides (including the sales department in case of dev companies), and any good practice suggestion that pops up is often drowned out by the popular support for bad practice, combined with management's intolerance for longer deadlines.

This devolution is something I've observed in at least three different companies. The same events and the same general work climate played out in all three.


The monkeys and the ladder

Whenever I talk about detrimental company culture, which often manifests as a "this is how we've always done it" attitude, I am reminded of the parable of the monkeys and the ladder.

[Image: the parable of the monkeys and the ladder, illustrated as a sequence of numbered panels.]

Suppose I had turned off the shower around the time of picture 4. The monkeys could have gone up that ladder without any repercussions, but their "company culture" prevented it from happening, based on what is now an outdated idea (since the shower is no longer active).

This parable touches exactly on the erosion of good practice that takes place. Popular but misguided support for bad practice inhibits anyone who tries to make a change for the better by introducing good practice.

The issue isn't with social checks and balances. The same principle is used in other companies to keep up the good practice and squash any bad practice suggestions.

The issue is with the blind acceptance of "things are done this way" without ever being able to re-evaluate. When it reaches that stage, the behavior is a company culture.

Answering your questions

Generally speaking, is a tech debt problem an org culture issue, or just a lack of tooling/insight?

It depends on what stage of the process you are at. In the beginning, it's a lack of insight and/or tooling. But combine that with management that looks only at results and not at ongoing issues, and thus wrongly (perhaps unknowingly) incentivizes the bad practice, and it becomes a feedback loop that over time turns into company culture.

Other tips

Usually it is neither; mainly, it is a problem of communication.

What many fail to realize is that technical debt is not in itself a big problem for a company, precisely as financial debt is not.

Interest, on the other hand, is. It would be irresponsible to make unnecessary installments on a loan with very low or zero interest.

When you talk about reducing technical debt, what you are actually asking to do is make installments on that debt. Since management is unaware of the interest (i.e. the cost) of that debt, this is not viewed as important.

What you should highlight is not the work required to reduce the technical debt, but the extra work required to deliver new features (and/or fix bugs) because of that debt.

So (to take some easy examples):

  - Do not ask permission to set up a CI/CD pipeline. Do specify that every release takes an extra half hour per environment.
  - Do not ask for automated testing. Do specify that you spend two days on manual testing of things that could have been automated.
  - Do not ask for time to refactor. Do specify the extra hours you spent searching through poorly structured code.
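
To make the interest framing concrete, here is a minimal back-of-the-envelope sketch (every figure in it is an invented placeholder, not a number from this answer) that expresses the recurring overhead as a monthly interest payment and compares it with the one-off cost of paying down the principal:

    # Hypothetical cost model: tech debt "interest" as recurring hours lost
    # per month, vs. the one-off "principal" of fixing the root cause.
    RELEASES_PER_MONTH = 4
    ENVIRONMENTS = 3

    manual_deploys = 0.5 * ENVIRONMENTS * RELEASES_PER_MONTH  # no CI/CD pipeline
    manual_testing = 16 * RELEASES_PER_MONTH                  # two days of testing per release
    code_archaeology = 10                                     # navigating poorly structured code

    interest_per_month = manual_deploys + manual_testing + code_archaeology
    payoff_cost = 120  # one-off: build the pipeline, automate the tests

    print(f"Interest: {interest_per_month:.0f} hours/month")
    print(f"Break-even after {payoff_cost / interest_per_month:.1f} months")

Expressed this way, the question of whether to fix the debt becomes a break-even calculation that management can actually perform.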

This way you raise issues that actually matter to management. And if management is smart, they can even decide whether it is actually worth fixing the problems or not. (Why fix tech debt in a product that won't be supported next year?)

We already had such a report available in our issue tracker (Jira). When the introduction vs cleanup graph started getting out of hand, management indeed devoted more resources to fixing it. I think visibility definitely makes a difference in prioritization.

The main problem is that this visibility is all about the time between a defect being recorded and it being fixed. The result was more time devoted to fixing recorded bugs, which left fewer people available to create new features. Developers feel stretched thin and rushed, so they skip things that don't show up directly in reports, like refactoring and automated tests.

We're now in this weird situation where we are paying down debt faster, but also generating it faster. That goes to your culture question. Prioritizing fixes for highly visible bugs is very different from having a culture of building quality in to begin with. The latter is much more difficult to cultivate, because it is harder to measure and feels like more up-front effort.

The canonical Scrum answer is that the team owns the tech debt. The team builds the sprint backlog, so the team controls the rate at which tech debt is reduced.

In practice, angry managers will appear if you actually do this and escort you out of the building :)

More seriously, it's important to have good communication with the PO about what tech debt is causing issues in production and which tech debt causes current feature implementation to be slow. That will help you drive resolution of critical tech debt without alienating the PO and have one strategy for your deliverables.

Whenever developers increase a project estimate to include tech debt cleanup, more often than not they are asked to remove those numbers from the estimate, so refactoring/cleanup work usually ends up indefinitely postponed.

Why does this happen?

Ask your management. Make sure they're aware of this phenomenon, and get their justifications for it.

Working on tech debt needs to be 'sold' as a feature. Your management needs to be able to weigh the effort necessary to resolve the debt against the value of resolving it.

That having been said -

The development team owns the code. If you need to refactor before you introduce X feature because the team, as a whole, understands that getting further out on that limb endangers the future of the project, then include that in your estimate, and tell them that's what needs to happen.

They don't get to tell you what needs to happen to implement a feature - they can't. It's not their job or their skill set, they don't have enough knowledge to make that judgement anyway, and they know it. If you tell them it'll take X story points, and the team has consensus around that figure, they'll accept it on faith.

Have standards, stick to those standards, and have faith in the significance of your own standards. This is part of your job, because the people feeding you features have no context to set those standards for you.

If you can't get consensus on your team about what refactoring work needs to be done for a given feature (i.e., whether or not the code is up to the aforementioned standards), that's either a cultural problem that can be addressed internally, or an indicator that this tech debt isn't actually accruing in areas as critical as they seem from your perspective. You'll need to pursue that conversation at length with your team to find out which it really is, and be willing to accept that it's probably at least a little of both.

XP solved this problem a couple of decades ago in a very simple way.

Customers, or product owners, or whatever you care to call them, cannot ever declare how long a story will take. They can only put stories in the order they'd like them to be done and, if they don't like certain estimates, negotiate with the developers to change stories to be cheaper.

The development team (which cannot change story priority, ever) is responsible for maintaining an appropriate level of technical debt, determining when it can be increased for business purposes and under what schedule it must be paid back, and so on. The time and effort necessary to handle this at any particular point is never shown to the customer, but merely built into the current estimates for stories.

In particular, you never have a story about refactoring or reducing technical debt or anything like that because the customer has full control over the scheduling of the stories, up to and including the ability to say, "I want you never to do that one." If you are telling the customer to make decisions about technical debt, you are abdicating your responsibility as a developer to ensure that the project can move forward at a predictable rate. (Do you seriously think that the customer can be better, or even as good, as you at understanding the future effects of technical debt in the codebase?)

So split the management of projects into technical management (the "developer team," which cannot set story priorities) and product management (the product owner, or "customer" in XP lingo), which cannot make technical decisions, including decisions about how much technical debt to hold or how much time and effort to spend at the moment paying back existing technical debt.

If for cultural or other reasons you do need one person making decisions about both of these, at least try to have them differentiate the two roles and keep them independent, i.e., they're wearing the "developer hat" or the "product owner hat," but never both at the same time. (This worked well for me when running my own company.)

I tend to be in favor of tracking technical debt in the issue tracking tool and tagging it appropriately. In Jira instances, that often means an issue type; in GitHub, a label. Other tools may have other methods. However, doing so requires awareness of the technical debt to begin with. I'd be concerned that any report would reflect not the actual state of technical debt in a product, but rather the state of known technical debt. Encouraging people to evaluate and track technical debt in the issue tracker would be a process change, and perhaps a cultural one, for the organization.

It's going to depend on how your organization views technical debt. Before working on a given module or component, a developer can query the issue tracker for known technical debt, and this can help guide estimates. Knowing about deficiencies in test coverage, tight coupling, or other decisions now recognized as poor design provides the insight needed to guide estimation and planning. However, product and project managers can always push back on these estimates with deadlines and committed dates. It's a balancing act.
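
For example, the pre-estimation query could look like the sketch below. This is an assumption-laden illustration: the Jira URL, project, component, credentials, and the dedicated "Tech Debt" issue type are placeholders, not a prescription.

    # Hypothetical: list known tech debt against a component before estimating.
    # Assumes a Jira instance where debt is tracked as a dedicated issue type;
    # the URL, project, component, and credentials are placeholders.
    import requests

    JIRA = "https://example.atlassian.net"
    jql = ('project = SHOP AND issuetype = "Tech Debt" '
           'AND component = "billing" AND statusCategory != Done')

    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,priority"},
        auth=("bot@example.com", "api-token"),  # placeholder credentials
    )
    resp.raise_for_status()

    for issue in resp.json()["issues"]:
        fields = issue["fields"]
        print(f'{issue["key"]}: {fields["summary"]} ({fields["priority"]["name"]})')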

It sounds like a culture issue

Whenever developers increase a project estimate to include tech debt cleanup, more often than not they are asked to remove those numbers from the estimate, so refactoring/cleanup work usually ends up indefinitely postponed.

To me this screams culture issue. Management seems aware of the problem (maybe not fully, but at least somewhat) and doesn't want to hear it: something else is more important to them than code quality right now. How is management incentivized? What counts as success for them - stable, maintainable code, or splashy new features, time to market, etc.? It sounds like the latter. In that case, telling them the code stinks isn't going to help, because quality isn't how they define success; to them, it doesn't stink. So you first need to learn what your corporate dev culture is, before you can decide whether, and if so how, to tackle this problem.

Licensed under: CC-BY-SA with attribution