Specifications QA


Lack of quality is the probability that something will go amiss, which may stem from unsound specifications, flawed realization, or unexpected circumstances.

Better Prevent than Amend (René Magritte)

While unexpected circumstances are prone to unmanageable hazards, that’s not the case for specifications and their realization; comprehensive and formalized documentation of tests should therefore be an intrinsic component of quality assurance.

QA & Documentation

Whatever their nature (textual requirements, formal specifications, models, source code, etc.), documented specifications should satisfy four basic criteria:

  • Understanding: expressed in the languages and standards of the parties involved.
  • Completeness: listing a finite set of clearly defined targets.
  • Consistency: supporting checks for ambiguous or contradictory assertions, and traceability between specifications (problems) and realizations (solutions).
  • Correctness: supporting checks for feasibility, reliability, usability, maintainability, etc.
Quality criteria for specifications (external) & realizations (internal)
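The consistency criterion, in particular, lends itself to automation. A minimal sketch of a traceability check between specifications and realizations could look like the following (all names and identifiers are illustrative, not an actual tool):

```python
# Minimal traceability check between specifications (problems)
# and realizations (solutions); names are illustrative only.

def check_traceability(specs, realizations):
    """Return specs with no realization, and realizations
    referencing unknown specs."""
    spec_ids = set(specs)
    covered = {s for links in realizations.values() for s in links}
    orphan_specs = spec_ids - covered              # problems without solutions
    orphan_impls = {r for r, links in realizations.items()
                    if not set(links) <= spec_ids}  # solutions pointing nowhere
    return orphan_specs, orphan_impls

specs = {"REQ-1", "REQ-2", "REQ-3"}
realizations = {"mod-a": ["REQ-1"], "mod-b": ["REQ-2", "REQ-9"]}
missing, dangling = check_traceability(specs, realizations)
# missing == {"REQ-3"}, dangling == {"mod-b"}
```

Such a check says nothing about correctness; it only flags gaps in the problems-to-solutions mapping.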

Understanding and completeness can be assessed independently of contents; consistency and correctness are to be qualified with regard to scope and concerns.

Enterprise Architecture: Scope & Concerns

Labels may differ, but there is a broad consensus about three core architectural tiers (or levels, or layers):
a. Objects and activities, according to domains of concern.
b. Specification of symbolic representations independently of domains or implementations (e.g. relational, logic, etc.)
c. Specification of symbolic representations depending on their implementations (e.g. databases, programming languages, etc.)
The MDA framework offers a good example with its Computation independent, Platform independent, and Platform specific models. A comprehensive and structured catalog of deliverables can then be obtained by crossing these layers with systems capabilities, e.g. using Zachman’s architecture framework:
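The crossing itself is a simple Cartesian product; a sketch, using Zachman-style interrogatives as stand-ins for systems capabilities (the labels are illustrative):

```python
# Hypothetical sketch: crossing architecture layers with systems
# capabilities to enumerate a structured catalog of deliverables.
from itertools import product

layers = ["computation independent", "platform independent", "platform specific"]
capabilities = ["what", "how", "where", "who", "when", "why"]

catalog = [f"{layer} / {capability}"
           for layer, capability in product(layers, capabilities)]
# 3 layers x 6 capabilities = 18 catalog entries
```

Each entry then becomes a slot where a deliverable, and the tests qualifying it, can be registered.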


That can then be used to define and manage the scope and concerns of quality assurance at enterprise level:

  • Business objects and processes are correctly and consistently defined (a)
  • Functional architecture is aligned with enterprise organization and business objectives (b)
  • Systems capabilities can support operational requirements (c)
  • User stories are properly developed (d)
  • Artifacts are properly developed (e)
  • System functions are properly implemented (f)
Overview of validation (blue for models, brown for code)

Validation schemes will then depend on target (code or model) and development process (iterative or phased).

Assessing Deliverables

Tests on deliverables are usually defined with regard to purpose:

Unit tests deal with technical aspects of developed components independently of functional integration (d).

Unit tests can be performed in isolation

Integration tests deal with the design of components with regard to system architecture independently of their business use (e).

Integration tests check that the whole works properly, whatever its use.

Function tests deal with the design of systems functionalities with regard to supported business processes (b).

Function tests check uses without users

Performance tests deal with system capabilities with regard to non-functional requirements (c).

Performance tests check for resources

Acceptance tests deal with system capabilities with regard to functional and non-functional requirements set in test environment.

Acceptance tests check with users within protected environments

Installation tests deal with the specific resources and procedures associated with setting the product in its operational context.

Installation tests check how to set products in their operational context.

Operational tests deal with system functionalities in actual environment.

Operational tests check functionalities in actual environments

Finally, usability tests deal with ergonomics and maintenance.

Maintenance, ergonomics, and evolution.
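The progression of test levels described above, from component-local checks to checks in the actual environment, can be captured as an ordered taxonomy. A sketch (the enum and helper are illustrative, not a standard):

```python
# Illustrative ordering of test levels, from component-local
# checks to checks in the actual operational environment.
from enum import IntEnum

class TestLevel(IntEnum):
    UNIT = 1          # technical aspects of components in isolation (d)
    INTEGRATION = 2   # components against system architecture (e)
    FUNCTION = 3      # functionalities against business processes (b)
    PERFORMANCE = 4   # capabilities against non-functional requirements (c)
    ACCEPTANCE = 5    # functional + non-functional, in test environment
    INSTALLATION = 6  # setting the product in its operational context
    OPERATIONAL = 7   # functionalities in actual environment
    USABILITY = 8     # ergonomics and maintenance

def next_level(level):
    """Next gate in the pipeline, or None after the last one."""
    return TestLevel(level + 1) if level < TestLevel.USABILITY else None
```

An ordered taxonomy makes it straightforward to gate a pipeline so that no deliverable reaches a level before passing the previous one.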

Depending on development process, quality management will go for the continuous and combined assessment of components (agile projects) or the staged and specific assessment of differentiated deliverables (phased projects).

Continuous Tests Policy

Iterative approaches rely on shared responsibility and continuous delivery, which means that quality is part and parcel of development: instead of being an afterthought it is a built-in property of deliverables. Yet, as continuity of intent doesn’t by itself guarantee continuity of outcome, some quality “attitude” is needed if regression is to be avoided.

That will begin with requirements capture: depending on the specificity and formal properties of the domain language, quality could be achieved through some normalization of user stories (non-specific languages) or automated checks on inputs (formal languages).

Tests could then be carried out iteratively on increments, provided the prioritization (e.g. through backlogs) takes into account the nature of requirements:

  1. Consistency and correctness of business requirements (business objects and logic).
  2. Alignment and feasibility of functional requirements (bound to users’ interactions with systems), to be further refined for non-shared (local), shared but transient (transactions), and shared and persistent (domains).
  3. Quality of service with regard to systems capabilities.

Weaving these differentiated tests with development threads would ensure non-regression.
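The ranking above translates directly into a backlog ordering. A sketch, with hypothetical item structure:

```python
# Sketch: ordering backlog items by the nature of the requirement
# they test, following the ranking above. Data is illustrative.
PRIORITY = {"business": 1, "functional": 2, "quality_of_service": 3}

backlog = [
    {"story": "response under 2s",       "nature": "quality_of_service"},
    {"story": "invoice lifecycle rules", "nature": "business"},
    {"story": "shared transaction scope","nature": "functional"},
]

ordered = sorted(backlog, key=lambda item: PRIORITY[item["nature"]])
# business items come first, then functional, then quality of service
```

Keeping the ordering explicit in the backlog is what lets the differentiated tests be woven into development threads rather than bolted on afterwards.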

Phased Tests Policy

Set at enterprise level, quality assurance is by nature phased, because projects set across architecture layers are usually carried out along different time-scales, which entails intermediate outcomes.

For phased projects without external dependencies (see “d” above), intermediate outcomes can still be software, to be tested recursively for unit, integration, and acceptance:

Tests should be aligned on architecture layers
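The recursion above can be sketched as one test cycle per layer, each layer validated before the next (layer and test names are illustrative):

```python
# Sketch: recursive application of unit / integration / acceptance
# tests across architecture layers, innermost layer first.
PHASE_TESTS = ["unit", "integration", "acceptance"]
LAYERS = ["platform specific", "platform independent", "computation independent"]

def phased_plan(layers=LAYERS, tests=PHASE_TESTS):
    """One full test cycle per layer, in layer order."""
    return [(layer, test) for layer in layers for test in tests]

plan = phased_plan()
# first step: ("platform specific", "unit")
# last step:  ("computation independent", "acceptance")
```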

That will not be possible if cross-dependencies are to be managed between phases; in that case models will be necessary for intermediate outcomes, calling for specific validation schemes.

Models Validation

Assuming formal syntax, the validation of models depends on purpose and semantics:

  • Purpose: the critical distinction is between extensional and intensional models, the former describing a subset of actual objects and activities, the latter specifying a subset of shared structures, functions, and mechanisms.
  • Semantics: the critical distinction is between domain specific and formal languages.

Taking the MDA framework for example:

Computation independent models (CIM) are extensional and are meant to describe specific domains using specific semantics.

Platform independent models (PIM) are intensional and are meant to specify systems architectures using formal semantics.

Platform specific models (PSM) are intensional and are meant to specify software components using formal semantics.

Overview of continuous (applications) and phased (MDA) validation

Since computation independent models are extensional, their validation is to focus on:

  • Comprehensive and exclusive identification of individuals (objects or activities).
  • Consistency of the semantics used to define the aspects to be taken into account.

Their correctness cannot be proven true; it can only be checked for falsification.
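The two identification checks lend themselves to falsification by counter-example. A sketch, with illustrative data:

```python
# Sketch: falsifying the identification of individuals in an
# extensional model. Comprehensive: every observed individual has
# an identifier. Exclusive: no two identifiers cover the same one.

def check_identification(observed, identified):
    """observed: set of individuals; identified: id -> individual."""
    missing = observed - set(identified.values())   # not comprehensive
    seen, duplicated = {}, set()
    for key, individual in identified.items():
        if individual in seen:
            duplicated.add(individual)              # not exclusive
        seen[individual] = key
    return missing, duplicated

observed = {"order#1", "order#2", "order#3"}
identified = {"a": "order#1", "b": "order#2", "c": "order#2"}
missing, dup = check_identification(observed, identified)
# missing == {"order#3"}, dup == {"order#2"}
```

A clean run does not prove the model correct; it only means no counter-example has been found in the observed sample.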

Being intensional, platform independent (PIM) and specific (PSM) models can be specified with formal semantics and consequently be subjected to more formal validation.

Further Readings