What is to be measured, and What for

Software engineering metrics can be set along three core rationales: predictive, preventive, or corrective.

  • Predictive: estimators are used to define schedules and plan resources according to task scale and complexity.
  • Preventive: statistics of past problems are used to anticipate risks, take precautionary measures, and allocate resources accordingly.
  • Corrective: assessment of what works and what doesn’t, and what could be done to improve processes, methods, and the metrics themselves.

    What and What for

Hence, as far as software engineering and its business value are concerned, measurements should address three different topics:

  1. Size and complexity of the problem at hand. Function Points (whatever the variant) are the metric of choice for system requirements. Metrics targeting products, such as instruction counts or lines of code, are of limited interest due to their dependency on platforms and technologies.
  2. Assessment of project achievements and resources used.
  3. Assessment of process maturity: resources, schedule, reliability, etc.

On those accounts, the main benefit of architecture driven system modelling is to provide:

  • A sound and unbiased basis for function points computation, free of qualitative or expert-based inputs. More specifically, metrics can be directly associated to functional requirements like persistency, entry points, coupling, etc.
  • A straightforward approach to project planning: instead of top-down, one-fits-all task definitions, work units can be set bottom-up depending on the nature of development flows.
  • With tasks directly mapped to development outcomes, processes can be designed along development patterns, their capabilities precisely assessed, and potential improvements duly identified.
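As an illustration of the first bullet, a model-driven function-point tally can be reduced to a weighted count over functional elements. The sketch below is a hypothetical mapping: it uses IFPUG-style average weights and associates them with the features mentioned above (entry points, outputs, queries, persistency, coupling); the model counts are made up for illustration.

```python
# Hypothetical sketch: unadjusted function points tallied directly from
# model elements, using IFPUG-style average weights.
WEIGHTS = {
    "external_input": 4,    # entry points (EI)
    "external_output": 5,   # outputs (EO)
    "external_inquiry": 4,  # queries (EQ)
    "internal_file": 10,    # persistency units (ILF)
    "external_file": 7,     # coupling to other systems (EIF)
}

def unadjusted_fp(counts: dict) -> int:
    """Sum each element count times its weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Made-up counts extracted from a system model
model = {"external_input": 6, "external_output": 3,
         "external_inquiry": 4, "internal_file": 2, "external_file": 1}
print(unadjusted_fp(model))  # 6*4 + 3*5 + 4*4 + 2*10 + 1*7 = 82
```

Since every term comes from a count of model elements, the tally is reproducible and free of expert judgment, which is the point being made above.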

About statistics

There is no such thing as “statistical facts”. Statistics are designed artifacts, made on purpose to support conjectural arguments or counter questionable ones. Hence, considering statistics per se is like counting fingers when the hand points at the moon.

That consideration is especially relevant where quality metrics are concerned. Applying statistical estimation can help to reduce risks and increase confidence levels by optimizing the use of limited resources. For that purpose estimators will have to be unbiased, sufficient, efficient, and consistent (I’m taking heed of a contribution by Sriram Mahalingam):

  1. Unbiased: the sample used as a basis must correctly represent the targeted population with regard to organizational and technical context, requirements patterns, application life cycle, etc.
  2. Sufficient: the size of the sample and the scope of the data must be large enough to rub out the margins of error associated with data collection.
  3. Efficient: data must be collected with accuracy (closeness of measurements to the actual value) and precision (repeated measurements under unchanged conditions must yield the same results).
  4. Consistent: outcomes must be fully predictable from the state of endogenous (i.e., selected) factors, whatever the status of exogenous (i.e., not taken into account) ones.

Those are technical provisions, necessary but not sufficient, because estimators are useless unless their scope and objectives are properly defined. Yet regression analysis can provide sound estimators when combined with patterns of development complexity.
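As a minimal sketch of such regression-based estimation, the snippet below fits a least-squares line relating function points to effort across past projects. The data points are hypothetical, and a real estimator would first segment projects by development pattern before fitting, so that the endogenous factors mentioned above actually drive the outcome.

```python
# Ordinary least-squares fit of effort against function points.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Past projects: function points vs person-days (made-up sample)
fps    = [120, 250, 400, 610, 800]
effort = [ 90, 180, 310, 470, 600]

a, b = fit_line(fps, effort)
print(round(a * 500 + b))  # predicted effort for a 500-FP project
```

The estimator is only as sound as the sample behind it: the unbiasedness and sufficiency conditions listed above apply to the choice of past projects, not to the arithmetic.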

Function Points Revisited

From requirements to ROI, software metrics are based upon function points, whether directly or indirectly. As comprehensively explained by Alain Abran (Software Metrics & Software Metrology), they are usually obtained from a confusing mix of inputs and measurements:

  • Different kinds of inputs: business contents, system functionalities, general system characteristics.
  • Different kinds of measurements: objective tallies, subjective guesses, and statistical regressions.

Yet things could be different were those elements reorganized:

  • The complexity of a business domain should be measured independently of the systems that may support its business processes.
  • For a given business process, one should be able to assess and compare different levels of system functionalities.
  • Finally, it should be possible to adjust a given level of functionalities depending on regulatory or operational constraints (aka non-functional requirements, aka general system characteristics).
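That last adjustment is what the IFPUG value adjustment factor does: fourteen general system characteristics, each rated 0 to 5, scale the unadjusted count within a ±35% band. A minimal sketch, with hypothetical ratings:

```python
# IFPUG-style value adjustment: 14 general system characteristics
# (GSCs), each rated 0-5, scale the unadjusted count within +/-35%.
def adjusted_fp(unadjusted: float, gsc_ratings: list) -> float:
    assert len(gsc_ratings) == 14
    assert all(0 <= r <= 5 for r in gsc_ratings)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor
    return unadjusted * vaf

ratings = [3, 2, 4, 3, 0, 1, 2, 5, 3, 2, 1, 0, 2, 3]  # sums to 31
print(round(adjusted_fp(100, ratings), 2))  # 100 * (0.65 + 0.31) = 96.0
```

Keeping this adjustment as a separate, last step is precisely what the reorganization above argues for: domain complexity, functional level, and operational constraints each measured on their own account.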

Seen from a broader perspective, functional size measurements can be put on a par with MDA model layers:

  • Computation Independent Models (CIMs) come with the intrinsic complexity stemming from objects and transactions specific to business domains. Whereas that complexity may be compounded by the way systems have to support the processes, it should nonetheless be measured on its own account, if only to manage requirements portfolios accordingly.
  • Platform Independent Models (PIMs) describe how systems support business processes and their complexity should be measured accordingly. That may be critical when different options are to be considered.
  • The same reasoning should be applied for Platform Specific Models (PSMs) as different candidates are often to be considered for the same functional architecture.


5 thoughts on “Measurements”

  1. I like the three core rationales – predictive, preventive and corrective. I think when we combine this with the other way of classification (productivity metrics, quality metrics, people metrics, infrastructure metrics, etc.,) we will ensure the right combination of metrics. Thanks!

  2. Clearly, killing the patient will get rid of the pathology. Yet, since process assessment and software economics cannot be set aside, metrics flaws must be dealt with.

  3. COCOMO is a Mulligan’s stew of measures.

    Modern software development, of which 99% are projects of 10 man/years or less, do not require software metrics because the overhead of estimation and measurement outweighs the benefit.

    Furthermore there is a whole family of pathologies that arise out of the institutional use of metrics, of which a small sample are covered in Douglas Hoffman’s excellent paper: ‘The Darker Side of Metrics’.

    For the small minority of large-scale software development efforts, software metrics may possibly be useful, but I would want to see case studies that (within statistically significant bounds) showed the benefit outweighed the costs.

    Having had 5 years experience on the largest software project in the world, which made extensive use of software metrics, they helped not one iota in staving off its inevitable failure, or even giving useful early warning. YMMV.

  4. Given the stunning variance of estimates between function points and LOCs, I’m not sure COCOMO qualifies as a professional approach to software metrics.

  5. I heartily agree, in fact the key to managing a software project is to understand and take steps to reduce the variance of the estimate. To that end my graduate student is studying a Bayesian approach to calculate the variance of the twenty or more random variables needed to understand the productivity of a large project (25 or more developers). If you would like a copy when it is done send me your email.

    It is very arrogant to think that anybody, without proper training, can make an estimate. I teach the use of function points and COCOMO and then urge my students to hire a professional software estimator, similar to the legal requirements of Civil Engineers. Too bad the law ignores us software types. The benefit of a professional estimator is not only knowing the process but in addition having the experience of similar types of projects, looking horizontally across the project to find any duplicate code, any dead code, blatant violations of local coding standards, and gaining a realistic understanding of the status of the project. Professionally trained and certified software estimators are a vital addition to the process.
