Question

Bounty clarification

I know it's a subjective question. The ideal answer I'm looking for is one that explains why the quoted scenario here would be so surprising.

If you think the quoted scenario is in fact not surprising and to be expected, please break down the steps to show how such a little app can take over a month and several thousand dollars of development. I went quite far with the calculations (for example, looking up minimum wages), so I expect the ideal answer to do the same.

If you think the quoted scenario is indeed overestimated, please pinpoint your reasons exactly. What mistakes can you spot in his calculation that led to such a huge cost for a simple application like this? How would you have done it differently? (No need to write out the whole process, but details instead of generalized feelings would be nice.)


I know questions about FPA have been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data.

1. First, some data

This question is based on a tutorial. Its author has a "Sample Count" section where he demonstrates the process step by step. You can see some screenshots of his sample application here.

In the end, he calculated the unadjusted FP to be 99.

There is another article on InformIT with industry data on typical hours/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p).

2. Reality check!?

Now just check out the screenshots again.

Do a little math here:

99 * 2 = 198 hours
198 hours / 40 hours per week = 5 weeks

Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week (I'm not even saying a weekend) to have it completed?

Now let's try estimating the cost of the project. We'll use New York's current minimum wage (Wikipedia), which is $7.25 per hour.

198 * 7.25 = $1435.5

From what I could see from the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets).

For the record, that same Web site has another article discussing productivity. It seems they typically use 4.2 hours/FP, which gives us even more shocking stats:

99 * 4.2 = 415.8 hours = over 10 weeks = almost 3 whopping months!
415.8 hours * $7.25 = just over $3000 zomg

(That's even assuming that all our poor coders get the minimum wage!)
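
For the record, here is a minimal sketch (Python) of the arithmetic above; the 99 FPs, the 2.0 and 4.2 hours/FP rates, and the $7.25 wage are the figures already quoted, and everything else is plain arithmetic.

    # Reproduce the effort/cost figures for both productivity rates.
    UNADJUSTED_FP = 99
    WAGE_PER_HOUR = 7.25       # New York minimum wage quoted above
    HOURS_PER_WEEK = 40

    for hours_per_fp in (2.0, 4.2):
        effort_hours = UNADJUSTED_FP * hours_per_fp
        weeks = effort_hours / HOURS_PER_WEEK
        cost = effort_hours * WAGE_PER_HOUR
        print(hours_per_fp, "h/FP ->", effort_hours, "h,", weeks, "weeks,", cost, "USD")

    # 2.0 h/FP -> 198 hours,  ~5 weeks,  ~$1,435.50
    # 4.2 h/FP -> ~416 hours, ~10 weeks, ~$3,015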

3. Am I missing something here?

Right now, I can come up with several possible explanations:

  1. FPA is really only suited to bigger projects (1000+ FPs), so it becomes extremely inaccurate at smaller scales.
  2. The hours/FP metric fluctuates wildly from team to team and project to project. For a small project like this, we could have used something like 0.5 hours/FP. (Now this kind of makes the whole estimation exercise pointless, unless my firm does the same type of project for several years with the same team, which is not really common.)

From my experience with several software metrics, Function Points are really not a lightweight metric. If the hours/FP figure fluctuates so much, then what's the point? Maybe I could have gone with User Story Points, which are a lot faster to obtain and arguably almost as uncertain.

What would be the FP experts' answers to this?


Solution

About ten years ago, a drinking buddy of mine gave me a really great piece of wisdom. On any project consultation, ask three questions:

  1. What is the problem we are trying to solve?
  2. What are the deliverables?
  3. How will we know when we are done?

He added that one should never take on any project for which any of these questions was not answered before the project starts.

In the case at hand, we have yet another Software Estimating Method horror story, in which the estimate seems ridiculously high. I would answer his horror story by pointing out that he has not given answers to the second and third questions, and he hasn't really answered the first, except to say "We want to build something that works something like this."

I would expand on that by pointing out that he explicitly has not even asked what tasks the Function Points estimate is including or excluding from the estimated total. How much extra effort is the function point estimator allowing for documentation, for example? If his estimate is for the application, without any documentation, and the function point estimator's estimate was for the application with full documentation, well, I'd say there's some room for disagreement on the total amount of work (and time) required.

OTHER TIPS

Is it just my feeling that it wouldn't take any decent programmer longer than one week (I'm not even saying a weekend) to have it completed?

Developers always tend to underestimate how long it takes to actually finish something. They assume there will be no bugs, no changes in requirements, and nothing they have never done before that they would have to spend days figuring out.

From what I could see from the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets).

You're comparing the price for a completely custom piece of software to one that's selling millions of copies? Seriously?

The reality is, most methods of software estimation actually underestimate, even though at first blush that seems counter-intuitive. I once worked at a company where 300 lines of code per man-month was considered a HIGH estimate, and most months we came in at more like 200-250. But let's go with the 200. That's 10 lines of code per work day. Who can't write 10 lines of code in a work day? Come on! I could write 50 to 100 or more lines of code on a good day! And yet companies that use numbers like these repeatedly complete their projects behind schedule and over budget. Why is that? Well, scope creep, as Michael Borgwardt suggests, is a big one. But let's pull that out of the picture for a minute and assume the client got the requirements right the first time. Why would a company estimate only 10 lines of code per day?

  • Analysis of requirements
  • Software design based on requirements
  • Meetings to coordinate interfaces and architecture with team-mates.
  • Overhead costs (status meetings with management, sick time, vacations, ...)
  • Writing unit tests
  • Writing a test plan for the whole application
  • Application-level testing

That's all the day-to-day software engineering I can pull off the top of my head in three minutes; I'm sure I missed some, but does that help give a more complete picture of where those estimates come from?
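
For reference, the "lines per day" figures above are just this arithmetic, assuming roughly 20 working days per man-month (my assumption; the exact figure varies by company):

    # LOC per man-month -> LOC per working day.
    WORKING_DAYS_PER_MONTH = 20

    for loc_per_month in (200, 250, 300):
        print(loc_per_month, "LOC/month ->", loc_per_month / WORKING_DAYS_PER_MONTH, "LOC/day")

    # 200 -> 10.0, 250 -> 12.5, 300 -> 15.0 LOC/day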

I'm not an FP expert. However, we're looking at FP at the moment. In particular, we're performing FP analysis against old projects for which we have metrics on effort, cost, etc. Then we can assess its usefulness to us in estimating and sizing projects.

My view at this point is that it will be a useful top-down "order of magnitude" estimate to supplement bottom-up estimating. It's always good if more than one estimation technique can be applied to help validate that the numbers being arrived at "hold up".

A further thought: the cost/effort per function point (i.e. functional requirement) depends on the non-functional requirements of the system. Once you start taking account of security, accessibility, performance, logging (and alerting), maintainability, portability, regulatory compliance and so on, the cost/effort per FP increases significantly. These may not be a consideration for the single-user sample app quoted. But if an application is important to a company, or potentially to its customers or a wide proportion of the general public, the need to take those non-functional requirements into account will certainly increase.

Personally, I found FPA misleading... initially.

Unless you have historical FPA data from previous projects, FPA based on industry standards can definitely end up overestimating the whole thing.

I learned that the VAF (Value Adjustment Factor) is a good lever when dealing with FPA. Although it gives you a ±35% variation range on your FP count, who is stopping the analyst/project manager from turning this into a 50% variation?

A good team leader always assesses his team's ability before making estimates. The same goes for FPA: industry-standard figures were reached based on historical data, and this data varies from company to company, team to team, and developer to developer.

So I would say that if you use the best-case scenario of -35% on the unadjusted count, you reach an adjusted FP count of ~64, which gives you roughly three and a half weeks of estimate. From experience I would say an application of this sort CAN be done a lot sooner than that, but any thorough testing, debugging, documentation and other paperwork would stretch it further, and FP takes that into account. It is very much possible that your team is doing 1 FP/hr. By normal standards, coding and testing account for 25% of the FP count, so even taking your figure of 99 FPs, the coding and testing part comes down to 25 FPs, which is more understandable given the situation.
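
As a rough sketch of the VAF math referenced above, assuming the standard IFPUG formula (14 General System Characteristics, each rated 0-5, summing to the Total Degree of Influence); the all-zero ratings here are a hypothetical best case, not values from the tutorial:

    # IFPUG Value Adjustment Factor: VAF = 0.65 + 0.01 * TDI, i.e. +/-35%.
    UNADJUSTED_FP = 99
    gsc_ratings = [0] * 14            # hypothetical: every GSC rated 0

    tdi = sum(gsc_ratings)            # Total Degree of Influence, 0..70
    vaf = 0.65 + 0.01 * tdi           # 0.65..1.35
    adjusted_fp = UNADJUSTED_FP * vaf

    print("TDI:", tdi, "VAF:", vaf, "adjusted FP:", adjusted_fp)
    # TDI 0 -> VAF 0.65 -> ~64 adjusted FPs; at 2 h/FP that is ~128 hours,
    # a little over three 40-hour weeks.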

What I have also seen in practice is that some companies have devised their own complexity tables, so where 3 RETs and 10 DETs mean average complexity for one company, another would rate it as low complexity. This can greatly affect the final FP count.
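
As an illustration, here is what such a complexity lookup might look like, using the commonly published IFPUG matrix for Internal Logical Files (an in-house table would simply swap the thresholds):

    # Rate an ILF as Low/Average/High from its RET and DET counts,
    # following the standard IFPUG ILF complexity matrix.
    def ilf_complexity(rets: int, dets: int) -> str:
        if rets <= 1:
            return "Average" if dets >= 51 else "Low"
        if rets <= 5:
            if dets <= 19:
                return "Low"
            return "Average" if dets <= 50 else "High"
        return "Average" if dets <= 19 else "High"

    print(ilf_complexity(3, 10))  # "Low" under the standard matrix; a stricter
                                  # in-house table might call the same file Average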

So use the FP tool as a guide, and collect as much data on previous projects as you can before you actually start relying on FPA to set cost and time estimates.

As a side note, I think the cost estimates on a simple piece of software like that would seem ridiculous today, when outsourcing and freelancing are the way to go. Large companies that have been in this business still charge ridiculously high rates for software development. For instance, if you want a level-3 support engineer at a good hosting company to help you with your servers, they might charge $250 per hour, whereas you can get the same advice from someone based elsewhere in the world for $25 or even $2.50.

Hope my 2 cents are of some use to you.

At my previous company we would have calculated it like that - especially if someone was willing to pay for it ;)

I have practised FP on a few projects and found that it provides a fairly accurate estimate. Sometimes it may overestimate and sometimes underestimate, depending on the type of application; typically, for scientific applications, FP can underestimate. FP covers the entire project development time, not just the time spent writing code. Of course, there are non-development activities, like setting up the test environment, and these should be estimated separately. I am not a big proponent of FP, but I appreciate its usage. Even if it doesn't give an accurate estimate, if practised properly (identifying Files and Record Elements) it at least validates the completeness of your requirements.

In a way, we could say that FP is good for medium to large projects, those scaling beyond 350-400 FPs.

Time-based payment indirectly leads to lower performance. I remember time-based projects on which I did a lot of research into every aspect of the project, whereas under a fixed-price payment model I might not have done so. It's the unconscious mind, not ethics. Best practice is to go back to the definition of a "project" (work within a limited time and budget) and make decisions based on those limitations. It's not about the work itself; for example, you pay much more for an umbrella on a rainy day than when you buy one normally. Don't worry about what has been done and how much it is worth. Focus on the value of the work to the customer and his choices.

Plugging the values from the example you cited into this handy online function point calculator (http://developergeeks.com/functionpoint.aspx), which calculates the adjusted FPs and takes into account various other weighting factors, I get the following results, assuming a productivity rate of 2 hours per FP since the system in the example is so simple:

  1. Adjusted FPs: 42.9
  2. Estimated Person Months: 0.54

Assuming 160 hours in a working month, that works out to about 86.4 hours, or roughly two work weeks for one developer - not five weeks as you concluded in step 2. Given that developing systems for paying customers requires a little more care and effort than banging out some code late at night for your own amusement, I don't think that's an unreasonable estimate at all.
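
For transparency, here is the arithmetic behind those numbers; only the 42.9 adjusted FPs and 0.54 person-months come from the calculator, while the 160 hours/month and 40 hours/week conventions are the ones used in this answer:

    # Back out hours, weeks, and the implied productivity rate.
    adjusted_fp = 42.9
    person_months = 0.54

    hours = person_months * 160          # ~86.4 hours
    weeks = hours / 40                   # ~2.2 work weeks
    hours_per_fp = hours / adjusted_fp   # ~2.0 h/FP, the assumed rate

    print(hours, "h,", weeks, "weeks,", hours_per_fp, "h/FP")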

I mean, don't get me wrong, FP analysis in the wrong hands is probably a terrible idea. But if you've got a developer's background you can apply to counting FPs and gut checking the various weighting factors, it's not a bad way to get a reasonable estimate that isn't based on pure fantasy when you don't have detailed design specs, fully documented requirements or a detailed task-level project plan to work from. But you've got to use some common sense to make it work for you.

From my experience with several software metrics, Function Points are really not a lightweight metric. If the hours/FP figure fluctuates so much, then what's the point? Maybe I could have gone with User Story Points, which are a lot faster to obtain and arguably almost as uncertain.

The point of having Function Point Analysis is having rules/guidelines that are objective and standard, so that (within a certain margin) it should end up giving you the same number of function points for an application and/or project regardless of which expert counted it, provided the rules are applied consistently and correctly. The productivity per function point, as you discovered, is highly dependent on many factors like team experience, tooling, programming language, platform, and so on. Therefore industry standards are nice to know, but in most cases completely useless (in my humble opinion).

The main value in repeated counting is building up your own benchmark based on your own team's productivity history. This in turn will help you see trends and also help plan and predict the hours needed for future changes. If you're looking for speed, simply apply global counts instead of detailed counts. When doing a few example counts (like when preparing for exams), you will notice the difference between a detailed count and a global count isn't big enough to lose sleep over (in percentage terms).
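
A minimal sketch of that "own benchmark" idea, with entirely made-up project history for illustration:

    # Derive a team-specific hours/FP rate from completed projects,
    # then apply it to a new count.
    past_projects = [
        # (counted function points, actual effort in hours) - hypothetical
        (120, 300),
        (250, 700),
        (80, 180),
    ]

    total_fp = sum(fp for fp, _ in past_projects)
    total_hours = sum(h for _, h in past_projects)
    team_rate = total_hours / total_fp        # ~2.62 h/FP for this history

    new_count_fp = 99
    print("estimate:", new_count_fp * team_rate, "hours")   # ~260 hours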

This discussion is absolutely misleading, as the question already supposes FPA is an effort estimation technique. It is not.

Functional size (expressed in function points) can be one of many input factors for an estimation model (such as COCOMO). Not more - but also not less, if we agree that the "amount" of functional requirements is an effort driver for software projects.
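
For illustration, a minimal sketch of functional size feeding an estimation model; the 53 LOC/FP "backfiring" factor (a commonly cited figure for Java) and Basic COCOMO's organic-mode coefficients are assumptions, not anything established in this discussion:

    # Convert FPs to KLOC, then apply Basic COCOMO (organic mode):
    # effort in person-months = 2.4 * KLOC^1.05.
    fp = 99
    loc_per_fp = 53                  # assumed backfiring/gearing factor
    kloc = fp * loc_per_fp / 1000    # ~5.25 KLOC

    effort_pm = 2.4 * kloc ** 1.05   # ~13.7 person-months
    print(kloc, "KLOC ->", effort_pm, "person-months")

    # Note how far this lands from every other figure in this thread: the
    # output is dominated by the assumed factors, which is exactly why
    # functional size is an input to estimation, not an estimate by itself.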

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow