Sunday, November 21, 2004

Software Sizing, Estimation and Function Points - Part I

posted by ShyK at 11:53

In an article in the July 2001 issue of ACM Software Engineering Notes, J. P. Lewis concludes that there is no rigorous method for calculating the development time of a software project before actually doing the project. Lewis's argument rests on the notion of algorithmic complexity, which is the length of the shortest program that will produce a given string. He then invokes a standard result from complexity theory: there can be no program that tells us the length of the shortest program that solves an arbitrary software development problem. From this, Lewis asserts that there is no rigorous way of estimating software development time.
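For reference, the algorithmic (Kolmogorov) complexity that Lewis appeals to is usually written as follows; this is the standard textbook definition, not a formula quoted from his article:

    K_U(s) = \min \{\, |p| \;:\; U(p) = s \,\}
    % |p| is the length of program p and U is a fixed universal machine.
    % A classical result is that K_U is not a computable function of s,
    % and that kind of uncomputability is the limit Lewis leans on.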

Despair not. Lewis is talking about having a method in place that would be 100% right 100% of the time, while the real-world problem of software estimation is to get reasonably close a reasonable percentage of the time. But the article does help us understand why expert estimation is the dominant strategy (over model-based estimation) when estimating software development effort.

M. Jorgensen, in one of his studies, concludes that:

  • Expert estimates are more accurate than model estimates when the experts possess important domain knowledge not included in the estimation models.
  • Expert estimates are more accurate than model estimates when the uncertainty is low; model estimates are more accurate when the uncertainty is high.
  • Experts use simple estimation strategies (heuristics) and perform just as well as or better than estimation models when those heuristics are valid.
  • Experts can be strongly biased and misled by irrelevant information, e.g., towards overoptimism. Estimation models are less biased.

Jorgensen observes that expert estimates of software development effort are not systematically worse than model-based estimates, unlike expert estimates in most other studied professions, perhaps because the importance of specific domain knowledge (case-specific data) is higher in software development projects than in most other studied human judgment domains.
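To make "model estimates" concrete, here is a minimal sketch of a parametric effort model of the Basic COCOMO kind. The organic-mode coefficients (2.4 and 1.05) are the published COCOMO-81 values, but the sketch is illustrative, not a calibrated estimator for any real project:

    # Minimal parametric effort model of the Basic COCOMO kind.
    # Effort (person-months) = a * KLOC^b; a = 2.4, b = 1.05 are the
    # published COCOMO-81 organic-mode coefficients. Illustrative only.

    def model_estimate_person_months(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
        return a * kloc ** b

    # A hypothetical 32 KLOC project:
    print(f"{model_estimate_person_months(32.0):.1f} person-months")

The point of such a model is exactly what Jorgensen contrasts with expert judgment: it ignores case-specific knowledge and leans entirely on size and calibrated coefficients.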

As he goes on to state, expert estimation of effort is frequently a “constructive” process. The estimators try to imagine how to build the software, which pieces are necessary to develop, and the effort needed to implement and integrate those pieces. Empirical results from human judgment studies suggest that this type of process easily leads the estimator into a mode of confirming theories about how to complete the project, rather than rejecting incorrect hypotheses and assumptions. This leads to the famous pitfalls, or sins, of estimation.
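Here is a minimal sketch of that constructive, bottom-up style, using the standard three-point (PERT) roll-up; the task names and numbers are invented for illustration:

    # Bottom-up ("constructive") estimation sketch: imagine the pieces,
    # give a three-point guess per piece, and roll them up with the
    # classic PERT weighting (O + 4M + P) / 6. Tasks and numbers are made up.

    def pert(optimistic: float, likely: float, pessimistic: float) -> float:
        return (optimistic + 4 * likely + pessimistic) / 6

    tasks = {                      # (optimistic, likely, pessimistic) in person-days
        "data model":       (3, 5, 10),
        "service layer":    (5, 8, 15),
        "user interface":   (4, 6, 12),
        "integration/test": (5, 9, 20),
    }

    total = sum(pert(o, m, p) for o, m, p in tasks.values())
    print(f"Bottom-up estimate: {total:.1f} person-days")

Note that any piece the estimator fails to imagine never enters the sum at all, which is one concrete way the confirmation-style reasoning above turns into overoptimism.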



Additional Reading

--to be continued --


2 Comments:

Actually I think this is incorrect.

The article is talking about a 100% objective estimate, but not necessarily a 100% accurate estimate.

For example, a statistical estimate can be objective, even if it is not 100% accurate.

By Anonymous Anonymous, at 9:46 PM  

Actually I think this is incorrect.

The article is talking about a 100% objective estimate, but not necessarily a 100% accurate estimate.

For example, a statistical estimate can be objective, even if it is not 100% accurate.

By Anonymous Anonymous, at 9:49 PM