Question

I support a large enterprise software project which frequently receives enhancement requests from our customer. The customer will only pay for the work up front in fixed-price contracts. We provide a SWAG estimate first, then provide a detailed estimate after they green light the SWAG. The detailed estimates are time consuming and we are only compensated for the estimation time when they sign off on the enhancement, so the SWAG estimate provides a level of protection for us.

We communicate our SWAG estimate as Small, Medium, Large, or Very Large, and we have communicated ranges associated with each of these values. For example,

Small: < 5 days
Medium: 5 - 15 days
Large: 15 - 50 days
Very Large: > 50 days

Having put this into practice for a couple of years, I have some concerns:

  1. Sometimes the SWAG estimate is expected to be at the high end of a range. This form of estimate can make it difficult to manage customer expectations: a 15-day effort is very different from a 50-day effort, and the customer can green-light a SWAG under the optimistic assumption.
  2. If the customer approves a SWAG estimate, we can feel obliged to cap our detailed estimate at the high end of the SWAG's range. If we move up a range, there are usually additional billing discussions, which are painful, slow, and offer no guarantee of compensation.

Are there any kind of standard practices, or tried and true methods for communicating SWAGs that can help us better manage customer expectations?

Is it better to use ranges on a per-case basis, rather than the fixed ranges that I have noted? Should I switch to a single number instead of a range?

Solution

There's an old approach that addresses this kind of problem that comes from PERT and CPM practice. You might get resistance if you mention that, since those are associated with waterfall and the PMI. But the technique really has nothing to do with waterfall, and I haven't seen much evidence that certified project managers know anything about it (despite it being part of certification).

You come up with 2 or 3 estimates for each item. You give a best-case scenario, i.e. a lower bound: it will not get done faster than this. Then you come up with a high-end estimate: it should never take longer than that. Optionally, you provide a 'best-guess' estimate that falls somewhere in between. This helps weight the estimate towards the high or low end, but it's not strictly needed. You do this for each high-level deliverable. Part of what people get wrong about this approach is that they try to apply it at an extremely fine-grained level. That doesn't work and isn't worth the effort.

Then you lay out the dependencies and, using a well-known algorithm, come up with an overall estimate with a confidence level for the entire bundle. One thing that's good to understand is that the more items you estimate this way, the more reliable the overall estimate becomes, as things tend to even out.
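As a rough sketch of the arithmetic behind this (assuming, for simplicity, that the deliverables run sequentially rather than along a critical path, and with invented names and numbers):

```python
import math

# Hypothetical high-level deliverables with three-point estimates
# (optimistic, most likely, pessimistic) in days; all numbers invented.
tasks = {
    "data model changes": (3, 5, 10),
    "service layer":      (5, 8, 20),
    "UI updates":         (2, 4, 9),
}

total_mean = 0.0
total_var = 0.0
for name, (o, m, p) in tasks.items():
    total_mean += (o + 4 * m + p) / 6   # PERT weighted mean
    total_var += ((p - o) / 6) ** 2     # PERT variance approximation

sigma = math.sqrt(total_var)
# Treating the total as roughly normal, mean + 1 sigma covers ~84%
# of outcomes, so quote that as a conservative overall estimate.
print(f"expected: {total_mean:.1f} days")
print(f"~84% confidence: {total_mean + sigma:.1f} days")
```

Note how the relative spread of the total shrinks as more items are added: the means add linearly, but the standard deviation grows only with the square root of the summed variances, which is why things "even out".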

You could use this approach with t-shirt sizes for the component deliverables. If the ranges are reasonably reliable, it should work fine. Some people may try to tell you that this is incompatible with agile methodologies, but that's really not the case. It's simply a way to take a bunch of estimates and turn them into a composite one using well-studied statistical approaches. Really good agile practitioners use statistical methods. It's generally not something most of the team sees, however, and much of Agile has become simply a cargo-cult process.

OTHER TIPS

I've been asked to estimate jobs in every conceivable way. I've seen estimates go wrong in many ways.

I can hit any deadline with a fuzzy enough scope of work.

So, presuming a scope of work with some meaningful acceptance tests in it I'll tell you the secret to generating the only meaningful time estimates I've ever encountered.

Imagine the work you've been asked to do is harder than anyone expected. Imagine that it's already been however much time anyone thought it would take and it's still not done. Now how much longer are you willing to waste time on it before you're sick of this and want to try doing anything else?

That process generates the most realistic time estimates I've ever seen. It has nothing at all to do with the poorly defined, not-yet-understood problem. It's all about how patient we are with pursuing the solution this way before we want to tear up the whole plan.

The only way I've ever improved on this was to prototype a solution before the job ever starts, so I already know it will work. Some people work cookie-cutter jobs that are the same thing over and over. That's pretty much the same as prototyping. Those kinds of jobs respond best to detailed specs, mostly because those jobs are about not forgetting to do something.

For SWAG it's really about how people feel. The reality around the problem. Not the problem. The job ends when it's done or people get sick of waiting and try something else. We're really estimating our patience with the problem.

As for communicating a SWAG estimate you've already taken care of one of the biggest sources of misunderstandings. By expressing time durations as jobs that are small, medium, large, and very large you prevent the perception that you said it'd be done in exactly 50 days. Such numbers are always fuzzy. Customers shouldn't be led to think that they aren't.

However you communicate it, listen to your customer. Try to get a sense of how well you're being understood. What seems perfectly obvious to you might come as a complete surprise to them. This process has enough uncertainty in it without adding needless surprises.

Sometimes the SWAG estimate is expected to be at the high end of a range. This form of estimate can make it difficult to manage customer expectations: a 15-day effort is very different from a 50-day effort, and the customer can green-light a SWAG under the optimistic assumption.

This is exactly why I explicitly tie a SWAG estimate to how people feel about the work. If the customer is obviously uncomfortable with the idea that a task could be a 50-day effort, then don't give in to the temptation to sell it as a 15-day effort without nailing down the scope of work to something that fits well within 15 days. Never let how you feel about the time get compressed without saying what will be lost by doing that.

If the customer approves a SWAG estimate, we can feel obliged to cap our detailed estimate at the high end of the SWAG's range. If we move up a range, there are usually additional billing discussions, which are painful, slow, and offer no guarantee of compensation.

Well, you are obliged. In fact, you are obliged not to wait to start the additional billing discussions. If after the first day on a 15-day job it becomes clear it's going to be a 40-day job, you report this then, not after 15 days. If the customer says no, the responsible thing to do isn't to go crazy trying to get done in 15 days. It's to be willing to walk away.

Personally, I like to give the customer the best understanding of progress that I can. When I'm working against a deadline, I give daily updates about how likely we feel it is that we'll meet it. When I'm working through a checklist of estimated work, I encourage evidence-based scheduling. That is, using velocity (how closely we're matching our estimates) to extrapolate our completion date. This can be recalculated after every milestone, and it's good to see it recalculated often.

Doing that doesn't do a thing to make the work go faster, but it gives those waiting on it the feeling that they understand what's going on. Managing those feelings is actually the most important thing if you want them to keep giving you work.
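The velocity extrapolation can be sketched in a few lines. This is a simplified single-ratio version (full evidence-based scheduling does more, e.g. Monte Carlo over historical ratios), and all milestone figures below are invented:

```python
# Sketch of velocity-based extrapolation; milestone figures invented.
completed = [  # (estimated days, actual days) for finished milestones
    (3.0, 4.5),
    (2.0, 2.5),
    (5.0, 7.0),
]
remaining_estimate = 10.0  # estimated days of work still open

est_done = sum(e for e, _ in completed)
act_done = sum(a for _, a in completed)
velocity = est_done / act_done  # estimated days delivered per actual day

projected_remaining = remaining_estimate / velocity
print(f"velocity: {velocity:.2f}")
print(f"projected remaining: {projected_remaining:.1f} days")
```

Recomputing this after every milestone is cheap, and sharing the recalculated projection is exactly the kind of progress signal the paragraph above describes.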

It's an inherent problem with high-level estimates. You're bound to make some assumptions, since you haven't done a deep-dive analysis, and sometimes you get punished for those assumptions when they turn out to be incorrect. I would recommend creating narrower ranges, e.g. S, SM, M, MH, H as <5, 5-10, 10-15, 15-20, >20 days.

As part of conveying your T-shirt-size estimates, do highlight any high-level assumptions so the sponsor is clear on the implications. For what it's worth!

Usually when I'm communicating with someone about estimates, whether they're SWAGs or more detailed analysis estimates, I'll try to express and emphasize the uncertainty in the estimate.

For example, you've talked about giving a range for the time a feature would take, and I think that's good. It shows that there is a range of times something could take. However, I'm not sure I would always respond with those small, medium, and large labels and their assigned time ranges. There is a large difference between a project that will take 14 to 18 days and one that will take 40 to 50 days, yet both would be classified as "large". I think a better estimate would give those ranges outright. If a client knew upfront that you thought a task would take 40-50 days, they shouldn't ever be so optimistic as to believe it'll get done in 15 (and if they are, well, that's on them when they don't get what they expect).

I also tend to give people estimates with confidence metrics. That means I'll come up with a range of time for the estimate and express, as a percentage, how confident I am that we could hit it, usually for different parts of the range. So for example, say a client comes to me with a new feature they want. We'll say it's a feature on a website that involves some UI work plus some new database and API calls / endpoints. I might break that down like this:

Database work:
Create new sprocs for getting data - .5 days
Testing sprocs (correct returns, validating performance, etc) - .5 days + up to .5 days

API work:
Create new stubbed endpoint - .25 days
Create hook stubbed endpoint to DB and call sproc - .25 days
Test endpoint - .25 days + up to .5 days (for bug fixes)

UI work (we'll assume the front end is a mess and doing anything there is a nightmare of brittle, interdependent code):
Adding button to existing page for feature - .25 days
Making AJAX call to endpoint - 1 day
Fixing everything that has broken so far in the UI - .5 days + up to 1 day
Testing the UI / end to end tests - .25 days
Fixing more broken things the test uncovered - .5 days

So add all that up and I get 4.25 days + up to 2 days extra. Then I add in some guesses based on how much variance I expect in my estimates (either high or low) and I might come up with a range like 3 - 8 days with an expected time of 5 days. Then I would tell someone something like this:

Estimate (days)  Confidence
3                40%
5                80%
8                90%

And that might get told to the client like this:

So we've looked briefly into doing feature X for you. We came up with an estimate of 3-8 days to do it. We expect that it will most likely take about 5 days. What all that means is that we are about 40% confident we could get it done in 3 days, 80% confident we could get it done in 5, and 90% confident we could get it done in 8 days. This is just a preliminary guess that we made based on our experience with the system and similar features. It is meant to be a guess only and should not be used as a guarantee that this feature will be done in any specific time frame.

I realize that some of that is a bit awkwardly worded, but I believe I've made my point: if you can communicate how certain or uncertain you are about something, it lets people make better choices about what you are telling them. And being explicit about your uncertainty keeps others from making assumptions about it.
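One hypothetical way to turn a breakdown like the one above into confidence percentages is a small Monte Carlo simulation. The sketch below models each "+ up to" buffer as a uniform draw; note it covers only the buffers listed in the breakdown, not the extra variance guesses that widened the quoted range to 3-8 days:

```python
import bisect
import random

random.seed(1)

# Per-task (base, extra) durations in days, from the breakdown above:
# "x days + up to y days" is modelled as base + uniform(0, extra).
tasks = [
    (0.5, 0.0), (0.5, 0.5),                  # database work
    (0.25, 0.0), (0.25, 0.0), (0.25, 0.5),   # API work
    (0.25, 0.0), (1.0, 0.0), (0.5, 1.0),     # UI work...
    (0.25, 0.0), (0.5, 0.0),                 # ...testing and fixes
]

N = 100_000
totals = sorted(
    sum(base + random.uniform(0.0, extra) for base, extra in tasks)
    for _ in range(N)
)

def confidence(days):
    """Fraction of simulated runs that finish within `days`."""
    return bisect.bisect_right(totals, days) / N

for d in (4.5, 5.0, 6.0):
    print(f"{d} days: {confidence(d):.0%} confident")
```

The output is a table of the same shape as the estimate/confidence table above, and widening any task's distribution immediately shows up as lower confidence at the short end.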

Compare the previous example to one like this:

We've done an analysis on feature Z. There are a lot of unknowns here and we would be working in a delicate part of the system that has been known to cause problems if we aren't extremely careful. We've estimated that this will take between 40 and 80 days (30% confident for 40 days, 50% confident for 70 days and 55% confident for 80 days). We think it might even take as long as 120 days. We recommend doing more thorough requirements and planning to nail this down a bit more if you are still interested.

This might be enough for a client to say "nope, not worth it". But it also prepares them for the idea that this isn't a quick fix and that more work is required to get a better estimate.


I'd recommend reading Software Estimation: Demystifying the Black Art by Steve McConnell. There are a few chapters about how to communicate estimates to different people depending on what they need them for and how to get them to understand what you are trying to say. (I'm not affiliated with this book in any way, I just like it.)

Hi,

My experience is that such high-level estimates can go terribly wrong, so some boundaries should be established.

  1. Define how much the customer is willing to spend. Every customer has a budget and wants the maximum out of it; the problem begins when he has no clue what his needs are, or his wishes are unrealistic. In your case it sounds like somebody tried something strange and now there is no trust.
  2. Define hard requirements and deliveries. In your case everything must be hard-defined, so you have to define milestones. Some caution is advised here: every change request must be negotiated separately, every milestone has to be renegotiated, and any change in a milestone comes with a premium.
  3. One way to address the time-estimation issue is to attach a certainty factor, in percent, to the estimate; this factor gives a value for how certain success is within the estimated time. Mostly this scares the hell out of management.
  4. The next problem is resource allocation (developers, architects, etc.) and the order of task execution. Most problems arise when the customer identifies some feature that he needs tomorrow. That's why a signed timetable with milestones is key. It saves you a lot of trouble.

If I come up with anything else I'll edit my post.

Licensed under: CC-BY-SA with attribution