Functional Size Measurements

Objectives

Functional size measurement is the cornerstone of software economics, from portfolio management to project planning, benchmarking, or assessing ROI. Given that software has no physical features, relevant metrics can only be rooted in the functional value of the software under consideration, i.e. functional requirements.

How to Weight Size (E. Erwitt)

Since the seminal work of Allan Albrecht more than 30 years ago, most approaches have been based upon his design of function points. Yet, as thoroughly explained by Alain Abran (“Software Metrics & Software Metrology”), Albrecht’s original design was undermined by mixed inputs and procedures:

  • Different inputs: business contents, system functionalities, general systems characteristics.
  • Different procedures: objective tally, subjective guesses, and statistical regression.

Since then, and despite the radical changes in communication and information technologies, not much has been done to correct those drawbacks. And yet, new approaches focusing on architectures should open new perspectives:

  • Service oriented architectures should support objective and unbiased description of systems functionalities.
  • Model driven development should provide for a clear mapping between functionalities and development flows.

Taking advantage of both approaches, the aim is to anchor functional size metrics to stereotyped requirements and redesign function points computation along architectural layers.

Problems and Solutions

Requirements metrics are usually impacted by external factors set by different stakeholders along different time-frames. If projects are to be planned and managed accordingly, estimators should be designed along the different layers concerned.

As already noted, measurements are tools designed for a purpose. Regarding systems engineering, the aim is to estimate the size and complexity of problems and solutions.

Functional size metrics deal with the problem perspective, namely how to assess the functionalities supported by a system independently of the technology and tools used to implement it. Those functionalities can only be set in their business and operational contexts, and must be assessed accordingly.

Business domains can be measured by the number of entities together with associated taxonomies and dependencies. Business processes can be measured by the number of transactions weighted by the complexity of execution paths (extension points and dependencies).
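By way of illustration, the domain and process measures above can be sketched as simple tallies; the function names, weighting scheme, and sample data below are assumptions for illustration, not part of any published standard:

```python
# Illustrative sketch (names, weights, and data are assumptions):
# domain size counts entities plus their dependencies; process size counts
# transactions weighted by execution-path complexity.

def domain_size(entities, dependencies_per_entity):
    """Size of a business domain: one point per entity plus its dependencies."""
    return sum(1 + dependencies_per_entity.get(e, 0) for e in entities)

def process_size(transactions):
    """Size of a business process: transactions weighted by the extension
    points and dependencies along their execution paths."""
    return sum(1 + t["extension_points"] + t["dependencies"] for t in transactions)

entities = ["Customer", "Order", "Product"]
deps = {"Order": 2, "Product": 1}
txs = [{"extension_points": 2, "dependencies": 1},
       {"extension_points": 0, "dependencies": 0}]

print(domain_size(entities, deps))   # 3 entities + 3 dependencies = 6
print(process_size(txs))             # (1+2+1) + (1+0+0) = 5
```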

Metrics of system functionalities are set along two axes: the first one is defined by their business footprint, i.e. entities and transactions supported by the system (as defined by use cases); the second one deals with the nature of system boundaries for users, events, or devices.

Finally functional measurements must be adjusted for transverse (aka non functional) requirements, i.e operational or regulatory constraints that cannot be allocated to any specific business unit.

Problem Layers and Metrics

Function Points Principles and Limitations

The standard computation of function points distinguishes between data and transaction functions. Data function points include:

  • Internal Logical Files (ILFs): symbolic objects whose life-cycle is bound to the applications using them.
  • External Interface Files (EIFs): symbolic objects used or referenced by the application but whose life-cycle is managed independently.

Transaction function points take into account:

  • External Inputs (EIs): activities processing information from outside application boundaries.
  • External Outputs (EOs): activities providing derived information outside application boundaries.
  • External Inquiries (EQs): activities providing non-derived information outside application boundaries.
Stereotypes for Standard Function Points Components

While based on sound principles, standard computations still reflect traditional system architectures. As a consequence, the focus is on file size and structure, while critical aspects are lumped together or relegated to adjustment factors.
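For reference, the standard unadjusted computation can be sketched as a weighted tally of the five component types above. The weights used here are the published IFPUG averages; a full count would first grade each component as low, average, or high complexity rather than apply a single weight per type:

```python
# Sketch of the standard unadjusted function point computation.
# Weights are the IFPUG average (medium-complexity) values per component type.

IFPUG_AVG_WEIGHTS = {
    "ILF": 10,  # Internal Logical Files
    "EIF": 7,   # External Interface Files
    "EI": 4,    # External Inputs
    "EO": 5,    # External Outputs
    "EQ": 4,    # External Inquiries
}

def unadjusted_fp(counts):
    """Unadjusted function points: component counts times their weights."""
    return sum(IFPUG_AVG_WEIGHTS[kind] * n for kind, n in counts.items())

print(unadjusted_fp({"ILF": 3, "EIF": 1, "EI": 5, "EO": 2, "EQ": 4}))
# 3*10 + 1*7 + 5*4 + 2*5 + 4*4 = 83
```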

Functional Requirements Metrics

Function points computation can be redefined so as to distinguish between domain and use case metrics:

  • The footprint of supporting functionalities is marked out by roles to be supported, active physical objects to be interfaced, events to be managed, and processes to be executed. As for symbolic representations, corresponding artifacts are to be qualified as primary or secondary depending on their identification, with accuracy and reliability of metrics weighted by the completeness of qualifications.
  • Functional artifacts (objects, processes, events, and roles) are associated with anchors and features (attributes or operations) defined by business requirements.
From Business to Functional Requirements metrics

Functional metrics can then be computed for use cases:

  • Interactions with users, identified by roles and weighted by activities and flows (a).
  • Access to business (aka persistent) objects, weighted by complexity and features (b).
  • Control of execution, weighted by variants and couplings (c).
  • Processing of objects, weighted by variants and features (d).
  • Processing of actual (analog) events, weighted by features (e).
  • Processing of physical objects, weighted by features (f).

Additional adjustments are to be considered for distributed locations and synchronous execution.
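The six factors (a)-(f) and the two adjustments above can be sketched as a weighted sum. The factor weights and adjustment multipliers below are placeholders to be calibrated against project data, not values prescribed by the text:

```python
# Hedged sketch: combining the use case factors (a)-(f) into one metric.
# All weights and multipliers are illustrative placeholders.

FACTOR_WEIGHTS = {"a": 1.0,   # interactions with users
                  "b": 1.0,   # access to business objects
                  "c": 1.5,   # control of execution
                  "d": 1.0,   # processing of objects
                  "e": 2.0,   # processing of actual (analog) events
                  "f": 2.0}   # processing of physical objects

def use_case_points(factors, distributed=False, synchronous=False):
    """Weighted sum of factors (a)-(f), adjusted for distributed
    locations and synchronous execution."""
    base = sum(FACTOR_WEIGHTS[k] * v for k, v in factors.items())
    if distributed:
        base *= 1.1   # placeholder adjustment for distributed locations
    if synchronous:
        base *= 1.1   # placeholder adjustment for synchronous execution
    return base

ucp = use_case_points({"a": 4, "b": 6, "c": 2, "d": 3, "e": 0, "f": 1},
                      distributed=True)
```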

Functional Complexity

Whereas complexity is clearly another primary factor when development strategies are considered, its measurement is generally based upon educated guesses at best, arbitrary assessments otherwise. That may prove a serious obstacle to the use of function points when subjective estimates bring statistical variance to disproportionate levels.

From a functional point of view, complexity depends on interconnections, their degree of coupling, and their distribution.

Those criteria can be organized within a complexity matrix and adjustment factors computed for each capability based on cross constraints (intrinsic complexity is already measured by primary function points): 

  • Locations: coupling (channels), locations with active objects, distributed storage, distributed access, real-time connections, distributed processing, distributed control.
  • Active objects: real-time updates on status, authentication, real-time synchronization, complexity of data processing, complexity of control.
  • Symbolic objects: coupling (references), authorizations, triggers (cascading updates), complexity of algorithms, knowledge based control.
  • Actors: real-time constraints on interactions, complexity of interactions contents, complexity of interactions control.
  • Events: complexity of events processing, triggers (cascading events).
  • Business logic:  coupling (data and control flows), synchronization of operations (ACID).
  • Process execution: coupling (state transitions).
Cross constraints on architecture capabilities
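The complexity matrix above can be sketched as a mapping from capabilities to their cross constraints, each graded on a small scale, from which a per-capability adjustment factor is derived. The grading scale and the scaling formula are assumptions for illustration:

```python
# Sketch of the complexity matrix: capability -> cross constraints,
# each graded 0-3 (absent to severe). Grades and formula are assumptions.

COMPLEXITY_MATRIX = {
    "locations": {"coupling (channels)": 2,
                  "distributed storage": 1,
                  "real-time connections": 0},
    "symbolic objects": {"coupling (references)": 3,
                         "triggers (cascading updates)": 1,
                         "knowledge based control": 0},
    "events": {"complexity of events processing": 2,
               "triggers (cascading events)": 1},
}

def adjustment_factor(constraints, step=0.05):
    """Adjustment factor for one capability: 1 plus a step per grade point.
    Intrinsic complexity is already measured by primary function points."""
    return 1 + step * sum(constraints.values())

factors = {cap: adjustment_factor(c) for cap, c in COMPLEXITY_MATRIX.items()}
# e.g. locations: grades sum to 3, so the factor is 1 + 0.05 * 3 = 1.15
```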

Functional Size Metrics & Architecture Layer

That analysis of functional complexity can be refined in order to distinguish between model and local complexity. Model complexity is set along architecture layers:

  • Coupling constraints (channel and synchronization) with processes environment: agents, devices, or other systems.
  • Boundaries: interfaces with agents, devices, and other systems (views).
  • Business logic: processing of business objects within execution units (stateless control).
  • Process control: synchronization between execution units (stateful control).
  • Business domains: shared business objects (model).
Complexity set along architecture layers

Artifact complexity is specific to individual artifacts identified at architecture level, independently of their use in models. Type complexity is the part associated with inheritance; instance complexity is the part associated with structures.

Artifacts and Models: combining types, instances, and architecture complexity.

In addition to improved transparency and accuracy, this approach to functional complexity provides some major benefits, in particular:

  • The mapping of users’ value to functional complexity helps reasoned decision-making about requirements management, architectures, and project planning.
  • Modular assessment of complexity is a prerequisite for iterative development.
  • Modular assessment of complexity is also a prerequisite for projects combining new developments with the reuse of shared assets.

Functional Size Metrics & Model Driven Development

On that basis, computations of problem and solution metrics can be sequenced in accordance with development contexts and life-cycles.

  1. Application domain function points (ADFPs) are directly computed from models.
  2. Use case function points (UCFPs) are derived from use case specifications.
  3. Non adjusted function points (NAFPs) combine use case and application domain function points.
  4. Gross adjusted function points (GAFPs) are estimated by weighting non adjusted function points with relevant transverse (aka non functional) requirements.
  5. Net adjusted function points are estimated by taking into account the current functional architecture.
  6. Design points are estimated by taking into account targeted technical architecture.
  7. Implementation points are estimated by taking into account development environment.
Functional Metrics Computation (dashed lines are for statistical regression)
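The computation sequence above can be sketched as a pipeline; the weighting functions below are illustrative placeholders standing in for the statistical regressions of the later steps:

```python
# Sketch of the computation sequence (steps 3-5); all weighting
# values are illustrative placeholders, not calibrated regressions.

def non_adjusted_fp(adfp, ucfp):
    """Step 3: combine application domain (ADFP) and use case (UCFP) points."""
    return adfp + ucfp

def gross_adjusted_fp(nafp, transverse_factor):
    """Step 4: weight with relevant transverse (non functional) requirements."""
    return nafp * transverse_factor

def net_adjusted_fp(gafp, reuse_ratio):
    """Step 5: discount functionality already supported by the current
    functional architecture."""
    return gafp * (1 - reuse_ratio)

nafp = non_adjusted_fp(adfp=120, ucfp=80)       # 120 + 80 = 200
gafp = gross_adjusted_fp(nafp, 1.15)            # 200 * 1.15
net = net_adjusted_fp(gafp, reuse_ratio=0.25)   # 25% already supported
```

Design and implementation points (steps 6 and 7) would follow the same pattern, each taking the previous result and a factor estimated from the targeted technical architecture or development environment.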

Functional Size Metrics & Project Outsourcing

As demonstrated by the Victorian State government in Australia, functional metrics can be directly used to price outsourced projects. Combined with Agile approaches to project management, that may provide a sound non-zero sum game framework.

Comments

Capers Jones: The logic of function points could be applied to other business areas which lack effective size metrics. One of these would be a “data point” metric for measuring the size of databases and repositories. Companies own more data than software, and the data has more bugs, but there are no effective metrics for either data volume or data quality.