Problem

Let's say I want to use COCOMO to estimate the effort to produce a 100 KLoC embedded project. Not including the Effort Adjustment Factor, the effort would be 2.8 * 100^1.2 = 703. However, the project has two distinct components. I claim that on paper each component is its own project of 50 KLoC, which brings the estimate down to 2 * (2.8 * 50^1.2) = 612.
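For reference, a minimal check of those two figures, using the embedded-mode nominal formula with the Effort Adjustment Factor left out:

```python
# Nominal intermediate-COCOMO effort in person-months, embedded mode:
#   effort = 2.8 * KLOC ** 1.20   (Effort Adjustment Factor omitted, i.e. EAF = 1.0)
print(2.8 * 100 ** 1.2)       # one 100 KLoC project        -> ~703
print(2 * 2.8 * 50 ** 1.2)    # two separate 50 KLoC parts  -> ~612
```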

Which is correct? Would the second estimate be correct only if I organize the whole thing as two entirely separate projects with two separate teams, etc.? Wouldn't that increase the overhead, and thus increase the effort over the original estimate? Or is COCOMO simply too imprecise for there to be any real distinction between the two estimates (and I should then just treat either of them as simply "on the order of 500-1000 effort")?


Solution

COCOMO: the theoretical point of view

COCOMO is a model based on a statistical correlation between the size of the software in KLoC and the effort required. The intermediate model that you seem to have used also takes into account additional cost drivers in the form of an Effort Adjustment Factor (EAF), so the estimate becomes effort = a * KLoC^b * EAF (with a = 2.8 and b = 1.20 for embedded mode).

COCOMO's estimation principle is that the product being estimated is an independent piece of software forming a whole. So if you could split your code into two independent 50 KLoC parts, each able to be developed and run without the other, your second calculation would be correct.

COCOMO and reality

However, this seems an ideal and improbable case:

  • if the two parts are somewhat interdependent, the EAF increases because of the added complexity in each part (due to the interdependence with the other one) and the increased reliability requirement (since the other part depends on it). If this brought the EAF of each split product to, say, 1.3 instead of 1.0, the overall effort would be roughly 796, i.e. higher than the initial 100 KLoC estimate (see the sketch after this list);
  • if the two parts are more, or even highly, interdependent, the size of the whole will certainly no longer be 100 KLoC = 2 × 50 KLoC, because of the extra coordination, synchronisation and communication code. Moreover, the EAF would increase as well, inter alia because of additional synchronisation/performance constraints on top of the complexity.
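To make the comparison concrete, here is a minimal sketch of the three cases; the EAF of 1.3 is only an assumed, illustrative value for the interdependence penalty, as in the first bullet above:

```python
# Intermediate-COCOMO effort in person-months for embedded mode,
# with an explicit Effort Adjustment Factor (EAF).
A, B = 2.8, 1.20

def effort(kloc: float, eaf: float = 1.0) -> float:
    """effort = A * kloc**B * eaf"""
    return A * kloc ** B * eaf

print(effort(100))               # one 100 KLoC product            -> ~703
print(2 * effort(50))            # two fully independent halves    -> ~612
print(2 * effort(50, eaf=1.3))   # two interdependent halves       -> ~796
```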

In conclusion, your ideal case is very hypothetical, and certainly far from reality.

Intuitively, this less ideal figure feels much more realistic: splitting a big product in two requires more effort because of the additional interactions that have to be handled. Managing code reuse between the two parts also brings extra challenges, since shared code can no longer be modified in isolation.

COCOMO accuracy

You must be aware that:

  • COCOMO figures are based on a relatively small number of big projects (older sources state 70-80 such projects, more recent work mentions figures above 300);
  • COCOMO assumes that all the requirements are known in advance, so there is no discovery overhead;
  • COCOMO is not really suited to incremental development methods;
  • finally, KLoC is no longer a meaningful and consistent measure from which to derive effort: anyone can write long code full of repetitive parts, whereas a better design could result in fewer lines of code yet require more effort.
License: CC-BY-SA with attribution
Not affiliated with softwareengineering.stackexchange