Preamble
Quality can be defined in terms of the probability that something will go amiss, whether due to unsound specifications, flawed realization, or unexpected circumstances.

While unexpected circumstances may entail unmanageable hazards, that is not the case for specifications and their realization; comprehensive and formalized documentation of tests should therefore be an intrinsic component of quality assurance.
QA & Documentation
Whatever their nature (textual requirements, formal specifications, models, source code, etc.), documented specifications should satisfy four basic criteria:
- Understanding: expressed with the languages and standards of the parties involved.
- Completeness: listing a finite set of clearly defined targets.
- Consistency: support checks for ambiguous or contradictory assertions, and traceability between specifications (problems) and realizations (solutions).
- Correctness: support checks for feasibility, reliability, usability, maintainability, etc.

Understanding and completeness can be assessed independently of contents; consistency and correctness are to be qualified with regard to scope and concerns.
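As an illustration, the consistency and traceability checks lend themselves to simple automation. The sketch below is a hypothetical outline: the Spec and Realization structures and the check functions are assumptions made for the example, not a reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """A documented specification item (requirement, model element, etc.)."""
    sid: str
    statement: str
    asserts: set[str] = field(default_factory=set)  # positive assertions
    denies: set[str] = field(default_factory=set)   # negated assertions

@dataclass
class Realization:
    """A solution artifact claiming to satisfy one or more specifications."""
    rid: str
    satisfies: set[str] = field(default_factory=set)

def contradictions(specs: list[Spec]) -> set[str]:
    """Consistency check: assertions both made and denied across the corpus."""
    asserted = set().union(*(s.asserts for s in specs))
    denied = set().union(*(s.denies for s in specs))
    return asserted & denied

def untraced(specs: list[Spec], reals: list[Realization]) -> set[str]:
    """Traceability check: specifications without any claiming realization."""
    covered = set().union(*(r.satisfies for r in reals))
    return {s.sid for s in specs} - covered

specs = [Spec("S1", "orders are unique", asserts={"unique(order)"}),
         Spec("S2", "orders may repeat", denies={"unique(order)"})]
print(contradictions(specs))  # {'unique(order)'}: contradictory assertions
print(untraced(specs, []))    # {'S1', 'S2'}: nothing realizes them yet
```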
Enterprise Architecture: Scope & Concerns
Labels may differ but there is a broad consensus about three core architectural tiers (or levels, or layers):
a. Objects and activities according to domains of concerns.
b. Specification of symbolic representations independently of domains or implementations (e.g., Relational, Logic, etc.).
c. Specification of symbolic representations depending on their implementations (e.g., databases, programming languages, etc.).
The MDA framework offers a good example with its Computation independent, Platform independent, and Platform specific models. A comprehensive and structured catalog of deliverables can then be obtained by crossing these layers with systems capabilities, e.g., using Zachman’s architecture framework.
That catalog can then be used to define and manage the scope and concerns of quality assurance at enterprise level (a sketch follows the list below):
- Business objects and processes are correctly and consistently defined (a)
- Functional architecture is aligned with enterprise organization and business objectives (b)
- Systems capabilities can support operational requirements (c)
- User stories are properly developed (d)
- Artifacts are properly developed (e)
- System functions are properly implemented (f)
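As an illustration of that crossing, the sketch below builds a minimal catalog as a layers-by-capabilities grid. It is only a hypothetical outline: the layer and capability labels and the sample deliverables are placeholder assumptions, not Zachman’s actual cells.

```python
from itertools import product

# Architecture tiers (a/b/c above) crossed with a sample of system
# capabilities, in the spirit of Zachman's framework; the labels are
# illustrative placeholders.
layers = ["business (a)", "functional (b)", "platform (c)"]
capabilities = ["who", "what", "how", "where", "when"]

# Each (layer, capability) pair indexes one class of deliverables,
# giving QA a finite, structured catalog of targets to cover.
catalog = {(layer, cap): [] for layer, cap in product(layers, capabilities)}

catalog[("business (a)", "what")].append("business objects glossary")
catalog[("functional (b)", "how")].append("use case specifications")

for cell, deliverables in sorted(catalog.items()):
    if deliverables:
        print(cell, "->", deliverables)
```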

Validation schemes will then depend on target (code or model) and development process (iterative or phased).
Assessing Deliverables
Tests on deliverables are usually defined with regard to purpose:
Unit tests deal with technical aspects of developed components independently of functional integration (d).

Integration tests deal with the design of components with regard to system architecture independently of their business use (e).

Function tests deal with the design of systems functionalities with regard to supported business processes (b).

Performance tests deal with system capabilities with regard to non-functional requirements (c).

Acceptance tests deal with system capabilities with regard to functional and non-functional requirements, as set in a test environment.

Installation tests deal with the specific resources and procedures associated with setting the product in its operational context.

Operational tests deal with system functionalities in their actual environment.

Finally, usability tests deal with ergonomics and maintenance.
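One way to put this classification to work is to tag tests by purpose and select them per campaign. The sketch below is a minimal illustration using pytest markers; the marker names and sample tests are assumptions, not a prescribed convention.

```python
import pytest

# Hypothetical markers, one per test purpose; they would be declared in
# pytest.ini or pyproject.toml to avoid "unknown marker" warnings.
@pytest.mark.unit
def test_discount_rounding():
    # Unit: technical behavior of a component, no functional integration.
    assert round(19.999, 2) == 20.0

@pytest.mark.integration
def test_order_repository_contract():
    # Integration: component design against system architecture.
    repo = {"orders": []}  # stand-in for an actual repository
    repo["orders"].append("O1")
    assert "O1" in repo["orders"]

@pytest.mark.functional
def test_order_to_invoice_process():
    # Function: system functionality against the supported business process.
    order = {"id": "O1", "total": 20.0}
    invoice = {"order": order["id"], "amount": order["total"]}
    assert invoice["amount"] == order["total"]

# A campaign then selects by purpose, e.g.:
#   pytest -m unit                          (continuous, per increment)
#   pytest -m "integration or functional"   (per phase)
```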

Depending on the development process, quality management will opt either for the continuous and combined assessment of components (agile projects), or for the staged and specific assessment of differentiated deliverables (phased projects).
Continuous Tests Policy
Iterative approaches rely on shared responsibility and continuous delivery, which means that quality is part and parcel of development: instead of being an afterthought it is a built-in property of deliverables. Yet, as continuity of intent doesn’t by itself guarantee continuity of outcome, some quality “attitude” is needed if regression is to be avoided.
That will begin with requirements capture: depending on the specificity and formal properties of the domain’s language, quality could be achieved through some normalization of users’ stories (non-specific languages) or through automated checks on inputs (formal languages).
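As an illustration of normalization for non-specific languages, the sketch below enforces a canonical story template. The "As a …, I want …, so that …" pattern and the function name are assumptions made for the example; teams may normalize differently.

```python
import re

# Canonical "As a <role>, I want <goal>, so that <benefit>" template;
# the pattern is an assumption, not a mandated format.
STORY = re.compile(
    r"As an? (?P<role>.+?), I want (?P<goal>.+?), so that (?P<benefit>.+)",
    re.IGNORECASE,
)

def normalize(story: str) -> dict:
    """Reject stories that don't parse; return the named parts otherwise."""
    m = STORY.fullmatch(story.strip())
    if not m:
        raise ValueError(f"non-normalized story: {story!r}")
    return m.groupdict()

print(normalize("As a clerk, I want to flag overdue orders, "
                "so that follow-ups are triggered."))
```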
Tests could then be carried out iteratively on increments, provided that prioritization (e.g., through backlogs) takes into account the nature of requirements:
- Consistency and correctness of business requirements (business objects and logic).
- Alignment and feasibility of functional requirements (bound to users’ interactions with systems), to be further refined for non-shared (local), shared but transient (transactions), and shared and persistent (domains).
- Quality of service with regard to systems capabilities.
Weaving these differentiated tests into development threads would ensure non-regression.
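A minimal sketch of that weaving, assuming backlog items are tagged with the nature of their requirements and each increment must pass the corresponding checks; the natures, check names, and backlog items are placeholders.

```python
from enum import Enum

class Nature(Enum):
    BUSINESS = "business objects and logic"
    FUNCTIONAL = "users' interactions with systems"
    QOS = "quality of service"

# Hypothetical mapping from requirement nature to the checks an
# increment must pass before being merged.
CHECKS = {
    Nature.BUSINESS: ["consistency", "correctness"],
    Nature.FUNCTIONAL: ["alignment", "feasibility"],
    Nature.QOS: ["capacity", "latency"],
}

backlog = [("pay invoices", Nature.BUSINESS),
           ("checkout screen", Nature.FUNCTIONAL),
           ("peak-load target", Nature.QOS)]

for item, nature in backlog:
    # Weaving: every increment carries its nature-specific test set,
    # so earlier increments cannot silently regress.
    print(f"{item}: run {CHECKS[nature]}")
```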
Phased Tests Policy
Set at enterprise level, quality assurance is by nature phased because projects set across architecture layers are usually carried out along different time-scales, which entails intermediate outcomes.
For phased projects without external dependencies (see “d” above), intermediate outcomes can still be software, to be tested recursively for units, integration, and acceptance.
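A minimal sketch of such recursive testing, assuming each phase’s software outcome must pass unit, integration, and acceptance gates before the next phase starts; the phase names and gate logic are placeholders.

```python
GATES = ("units", "integration", "acceptance")

def run_gate(phase: str, gate: str) -> bool:
    """Placeholder: would invoke the actual test campaign for the gate."""
    print(f"phase {phase}: {gate} tests")
    return True

def phased_delivery(phases: list[str]) -> None:
    # Each intermediate software outcome is tested recursively for
    # units, integration, and acceptance before the next phase begins.
    for phase in phases:
        if not all(run_gate(phase, g) for g in GATES):
            raise RuntimeError(f"phase {phase} blocked")

phased_delivery(["inception", "construction", "transition"])
```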

That will not be possible if cross-dependencies between phases are to be managed; in that case, models will be necessary as intermediate outcomes, calling for specific validation schemes.
Models Validation
Assuming formal syntax, model validation depends on purpose and semantics:
- Purpose: the critical distinction is between extensional and intensional models, the former describing a subset of actual objects and activities, the latter specifying a subset of shared structures, functions, and mechanisms.
- Semantics: the critical distinction is between domain specific and formal languages.
Taking the MDA framework as an example:
- Computation independent models (CIM) are extensional: they describe specific domains using domain-specific semantics.
- Platform independent models (PIM) are intensional: they specify systems architectures using formal semantics.
- Platform specific models (PSM) are intensional: they specify software components using formal semantics.
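The extensional/intensional distinction can be illustrated with two toy model types: one enumerates actual individuals, the other specifies a defining condition. This is a hypothetical sketch, not part of the MDA specification.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ExtensionalModel:
    """Describes a subset of actual objects or activities (e.g., a CIM)."""
    individuals: set[str]

@dataclass
class IntensionalModel:
    """Specifies shared structures by a defining condition (e.g., PIM/PSM)."""
    membership: Callable[[str], bool]

# Extensional: the model *is* the enumeration of observed instances.
cim = ExtensionalModel({"order-17", "order-42"})

# Intensional: the model is a rule any conforming instance must satisfy.
pim = IntensionalModel(lambda x: x.startswith("order-"))

print("order-17" in cim.individuals)  # True: it is listed
print(pim.membership("order-99"))     # True: it satisfies the rule
```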

Being extensional, computation independent models are to be validated with regard to:
- Comprehensive and exclusive identification of individuals (objects or activities).
- Consistency of the semantics used to define the aspects to be taken into account.
Their correctness cannot be proven; it can only be submitted to falsification.
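A minimal sketch of such falsification checks, assuming individuals are recorded with possibly missing or duplicated identifiers; the record structure and function name are illustrative assumptions.

```python
from collections import Counter

def identification_faults(individuals: list[dict]) -> dict:
    """Comprehensive and exclusive identification: every object or activity
    has exactly one identifier, and no identifier denotes two individuals."""
    ids = [i.get("id") for i in individuals]
    return {
        "unidentified": [i for i in individuals if not i.get("id")],
        "duplicated": [k for k, n in Counter(ids).items() if k and n > 1],
    }

# Falsification, not proof: a clean report does not prove the model
# correct, it only fails to falsify it on these criteria.
sample = [{"id": "O1"}, {"id": "O1"}, {"name": "stray activity"}]
print(identification_faults(sample))
```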
Being intensional, platform independent (PIM) and platform specific (PSM) models can be specified with formal semantics and consequently submitted to more formal validation.