Thread: Functional Measurements

(Links at bottom)

Measurements

As physicists will tell you, measurements are not facts but observations, obtained through conceptual and physical apparatus built for a purpose. With regard to software engineering, four main purposes are to be addressed:

  1. Business value of new applications.
  2. Functionalities of supporting systems.
  3. Size and complexity of software products.
  4. Development costs.

While the first set of metrics clearly depends on the idiosyncrasies of business models and enterprise organization, the others are supposed to relate to commonly understood software engineering concerns (a sketch of the corresponding tallies follows the list):

Metrics

  • Business domains can be measured by the number of entities weighted by complexity (taxonomies and dependencies). Business processes can be measured by the number of transactions weighted by the complexity of execution paths (extension points and dependencies).
  • Metrics of system functionalities are set along two axes: the first one is defined by their business footprint, i.e. the entities and transactions supported by the system (as defined by use cases); the second one deals with the nature of system boundaries for users, events, or devices.
  • Functional measurements must be adjusted for transverse (aka non-functional) requirements, i.e. operational or regulatory constraints that cannot be allocated to any specific business requirement.
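
As an illustration, here is a minimal Python sketch of such weighted counts; the Entity and Transaction records and the additive weighting scheme are assumptions made for the example, not part of any standard:

    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        taxonomies: int = 0     # classifications the entity belongs to
        dependencies: int = 0   # references to other entities

    @dataclass
    class Transaction:
        name: str
        extension_points: int = 0
        dependencies: int = 0

    def domain_size(entities):
        # each entity counts for 1, weighted by its structural complexity
        return sum(1 + e.taxonomies + e.dependencies for e in entities)

    def process_size(transactions):
        # each transaction weighted by the complexity of its execution paths
        return sum(1 + t.extension_points + t.dependencies for t in transactions)

    print(domain_size([Entity("Customer", 2, 3), Entity("Account", 1, 1)]))  # 9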

Estimates for development costs will take into account technical and human resources as well as organizational contexts and constraints.

Finally, engineering measurements and business valuations will be combined into enterprise-level estimators (a sketch of the combinations follows the list):

  • Return on investment (ROI), based on business value and costs.
  • Business processes assessment, based on business value and systems functionalities.
  • Engineering processes assessment, based on systems functionalities and development costs.
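
A minimal sketch of how those estimators might be combined; the formulas are illustrative placeholders rather than prescribed computations:

    def roi(business_value, costs):
        # return on investment: net business value relative to costs
        return (business_value - costs) / costs

    def business_process_ratio(business_value, functional_size):
        # business value delivered per unit of system functionality
        return business_value / functional_size

    def engineering_process_ratio(functional_size, costs):
        # functionality delivered per unit of development cost
        return functional_size / costs

    print(roi(1_200_000, 800_000))  # 0.5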

Business Requirements

Since projects begin with requirements, decisions about targeted functionalities and resource commitments are necessarily based upon estimates made at inception. Yet at such an early stage little may be known about the size and complexity of the components to be developed, hence the importance of the distinction between business requirements (domains and processes) and the supporting system functionalities (represented by use cases).

(Figure: RekWeaverLayers)

The first step is to assess the intrinsic size and complexity of business domains and processes independently of system functionalities; that can be done by estimating the size and complexity of the symbolic representations that will have to be managed by the supporting system (see the sketch after the list):

  1. A skeleton footprint is built from directly identified (aka primary) objects and activities, and the associated partitions (object classifications or activity variants).
  2. Objects and activities identified through primary objects are then added to the skeleton.
  3. The symbolic representations of primary and secondary objects and activities are fleshed out with features (attributes or operations) defined within semantic domains.
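
A minimal sketch of that three-step build, assuming a simple artifact record with primary/secondary status, partitions, and features:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Artifact:
        name: str
        primary: bool                                   # step 1 vs step 2
        partitions: list = field(default_factory=list)  # classifications or variants
        features: list = field(default_factory=list)    # step 3: attributes, operations
        identified_by: Optional[str] = None             # secondary artifacts point to a primary one

    skeleton = [
        Artifact("Customer", primary=True, partitions=["retail", "corporate"]),  # step 1
        Artifact("Address", primary=False, identified_by="Customer"),            # step 2
    ]
    skeleton[0].features += ["name", "rating", "open_account"]                   # step 3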

Size and structure can then be estimated by the following ratios (sketched after the list):

  • Average number of artifacts and partitions by domain.
  • Total number of secondary objects and activities relative to primary ones.
  • Average and maximum depth of secondary identification.
  • Total number of primary activities relative to primary objects.
  • Total number of features (attributes and operations) relative to number of artifacts.
  • Ratio of local features (defined at artifact level) relative to shared (defined at domain level) ones.
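
Continuing the previous sketch (the skeleton records are assumed to be in scope), a few of those ratios could be tallied as follows:

    def structure_ratios(artifacts, domains):
        primary = [a for a in artifacts if a.primary]
        secondary = [a for a in artifacts if not a.primary]
        features = sum(len(a.features) for a in artifacts)
        return {
            "artifacts_per_domain": len(artifacts) / len(domains),
            "secondary_to_primary": len(secondary) / len(primary),
            "features_per_artifact": features / len(artifacts),
        }

    print(structure_ratios(skeleton, domains=["Sales"]))
    # {'artifacts_per_domain': 2.0, 'secondary_to_primary': 1.0, 'features_per_artifact': 1.5}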

Intrinsic complexity can be objectively assessed using partitions (see the companion sketch after the list):

  • Total number of activity variants relative to object classifications.
  • Total number of exclusive partitions relative to primary artifacts, respectively for objects and activities.
  • Percentage of activity variants combined with object classifications.
  • Average and maximum depth of cross partitions.
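
A companion sketch for the partition-based ratios; the bookkeeping of values and exclusiveness is an illustrative assumption:

    def complexity_ratios(object_partitions, activity_partitions):
        classifications = sum(len(p["values"]) for p in object_partitions)
        variants = sum(len(p["values"]) for p in activity_partitions)
        exclusive = [p for p in object_partitions + activity_partitions if p["exclusive"]]
        return {
            "variants_to_classifications": variants / classifications,
            "exclusive_partitions": len(exclusive),
        }

    print(complexity_ratios(
        [{"values": ["retail", "corporate"], "exclusive": True}],               # object classification
        [{"values": ["standard", "premium", "suspended"], "exclusive": False}], # activity variants
    ))
    # {'variants_to_classifications': 1.5, 'exclusive_partitions': 1}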

It must be noted that whereas those ratios do not depend on any modeling method, they can nonetheless be used to assess requirements or refactor them according to specific methods, patterns, or practices.

Functional Requirements

Functional size measurement is the cornerstone of software economics, from portfolio management to project planning, benchmarking, and ROI assessment. And since software has no physical dimension, relevant metrics can only be based on the expected functionalities of supporting systems.

Since the seminal work of Allan Albrecht more than 30 years ago, most approaches have been based upon his definition of function points. Yet, as thoroughly explained by Alain Abran (“Software Metrics & Software Metrology”), Albrecht’s original design was undermined by some discrepancies, in particular:

  • Mixed inputs: business contents, system functionalities, general systems characteristics.
  • Mixed procedures: objective tally, subjective guesses, and statistical regression.

Those flaws can be addressed by introducing a clear distinction between the dimensions of problems and solutions.

Assuming the part played by supporting systems is described by use cases, functional metrics may include (a tally sketch follows the list):

  • Interactions with users, identified by primary roles and weighted by activities and flows (a).
  • Access to business (aka persistent) objects, weighted by complexity and features (b).
  • Control of execution, weighted by variants and couplings (c).
  • Processing of objects, weighted by variants and features (d).
  • Processing of actual (analog) events, weighted by features (e).
  • Processing of physical objects, weighted by features (f).
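
By way of illustration, an unadjusted tally over the six categories might look as follows; the weights are placeholders, not calibrated values:

    # placeholder weights for categories (a)-(f); actual values would be calibrated
    WEIGHTS = {"users": 4, "objects": 7, "control": 5,
               "processing": 4, "events": 3, "devices": 5}

    def use_case_fp(counts):
        # counts: weighted occurrences per category, as measured on a use case
        return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

    print(use_case_fp({"users": 2, "objects": 3, "control": 1}))  # 2*4 + 3*7 + 1*5 = 34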

(Figure: Metrics_FReks)

Assessments based on intrinsic metrics (business logic and supporting functionalities) are then to be adjusted for usage, operational, and technical constraints.

Adjustments

Function points are supposed to be weighted by a value adjustment factor (VAF) computed from 14 general system characteristics (GSCs). But that procedure is seldom used, the main reason being that it compounds the shortcomings of the primary computation (the standard formula is recalled after the list):

  • The fuzziness of initial guesses is increased by guesses about general characteristics.
  • Those “general” characteristics lump together different kinds of concerns: business (e.g. complex processing), functional (e.g. end-user efficiency), operational (e.g. distributed locations, maintainability), or development effort (e.g. reusability).
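
For reference, the standard IFPUG computation under criticism is simple enough: each of the 14 GSCs is rated from 0 to 5, and the resulting factor scales unadjusted points by at most 35% either way:

    def adjusted_fp(unadjusted_fp, gsc_ratings):
        # each of the 14 GSCs is rated 0-5 (degree of influence)
        assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
        vaf = 0.65 + 0.01 * sum(gsc_ratings)  # hence 0.65 <= VAF <= 1.35
        return unadjusted_fp * vaf

    print(adjusted_fp(100, [3] * 14))  # ≈ 107.0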

The remedy to those drawbacks is to chart system characteristics according to a requirements taxonomy.

Assuming that primary function points target functional requirements, the so-called “non-functional” requirements cover a wide range of constraints that may be grouped into two categories, depending on their nature:

  • Operational and quality of service requirements relate to the way supporting systems are operated and used, independently of specific business contents.
  • Technical requirements relate to the way supporting systems are implemented, assuming other requirements are satisfied.

Assuming moreover that technical requirements are set independently of operational and quality of service requirements, they should not be taken into account for functional adjustment. That’s not always the case for constraints on quality of service and operations, as they may entail trade-offs against functionalities, as illustrated by ease of use, security, response time, compliance, etc. In order to sort out adjustment factors, those constraints will have to be examined depending on their target.

Function points are estimated at artifact level and adjusted at architecture level.

Functional requirements targeting identified artifacts in business domains, activities, and users’ transactions can be directly measured using function points (respectively DFP, AFP, and UCFP).

Constraints (aka non-functional requirements) targeting specific domains (e.g. privacy) or activities (e.g. transaction rate) should be either incorporated into primary function points (when translated into adjusted functionalities) or ignored (when taken care of at operational level).

Operational requirements are set at enterprise level but are meant to be supported by technical architectures. As a consequence the corresponding adjustment factors should not be introduced until development effort is considered.
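
A hypothetical aggregation reflecting those rules: domain- and activity-level adjustments are folded into primary function points, while operational factors wait until effort is estimated (all factor values are made up for the example):

    def functional_size(dfp, afp, ucfp, local_adjustment=1.0):
        # local_adjustment: constraints folded into primary points (domains, activities)
        return (dfp + afp + ucfp) * local_adjustment

    def development_effort(function_points, operational_factor, productivity):
        # operational_factor is only applied once development effort is considered
        return function_points * operational_factor / productivity  # person-days

    size = functional_size(dfp=120, afp=80, ucfp=60, local_adjustment=1.05)
    print(development_effort(size, operational_factor=1.1, productivity=10))  # ≈ 30 person-days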

Agile Measurements

Due to their progressive approach, Agile development models have no use for up-front requirements metrics; moreover, since they are not positioned along use cases, there is no clear distinction between business and functional requirements. All that puts stringent limits on the practicality of function points. Instead, estimates of development effort are progressively refined (as sketched after the list) based on:

  • Story points, which mix business and functional requirements as understood from user stories.
  • Actual development outcome, updated dynamically according to metrics computed on iterations and backlog, e.g burndown and velocity.
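
A minimal sketch of such dynamic updates, assuming velocity as a plain average of completed points and a naive backlog forecast:

    from math import ceil

    def velocity(points_per_iteration):
        # average story points completed per iteration so far
        return sum(points_per_iteration) / len(points_per_iteration)

    def iterations_remaining(backlog_points, points_per_iteration):
        # naive burndown forecast from observed velocity
        return ceil(backlog_points / velocity(points_per_iteration))

    print(velocity([21, 18, 24]))                   # 21.0
    print(iterations_remaining(100, [21, 18, 24]))  # 5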

Given that such metrics essentially depend on heuristics and experience, agile development processes will have to be assessed within a broader framework.

Processes Assessment

At the end of the day, metrics for products and projects are to be used to assess and improve engineering processes. For that purpose, SEI’s CMMI (Capability Maturity Model Integration) defines five levels:

  1. Initial: No process. Each project is managed on an ad hoc basis.
  2. Managed: Processes are specific to projects.
  3. Defined: Processes are set for the whole organization and shared across projects.
  4. Measured: Processes are measured and controlled.
  5. Optimized: Processes are assessed and improved.

Whatever the volume of data and the statistical tools employed, the relevance of process assessment fully depends on (1) objective and unbiased indicators measuring project performance, and (2) a transparent mapping between organizational alternatives and process outcomes. That can only be achieved if assessments are clearly defined for products, projects, and processes:

  • Traceability: from requirements to deliverables (products), work units (projects), and tasks (processes).
  • Measurements: functional (products), effort (projects), performance (processes).
  • Quality: verification & validation (products), tests plan and risks management (projects), quality assurance (processes).
  • Reuse: artifacts and patterns (products), profiles (projects), organization (processes).
  • Management: model driven engineering (products), planning and monitoring (projects), maturity assessment (processes).

(Figure: CMMI_3p)

Whatever the volume of data and the statistical tools employed, the whole process assessment pyramid ultimately rests on the measurement of process outcomes and their traceability to organizational alternatives.

Further Reading

External Links
