Knowledge Driven Test Cases

Preamble

The way tests are designed and executed is being doubly affected by development methods and AI technologies. On one hand well-founded approaches (e.g test-driven development) are often confined to faith-based niches; on the other hand automated schemes and agile methods push many testers out of their comfort zone.

Reality check (Kader Attia)

Resetting the issue within a knowledge-based enterprise architecture would pave the way for sound methods and could open new doors for their users.

Outline

Tests can be understood in terms of preventive and predictive purposes, the former with regard to actual products, the latter with regard to their forthcoming use. On that account, policies are to distinguish between:

  • Test plans, to be derived from requirements.
  • Test cases, to be collected from environments.
  • Test execution, to be run and monitored in simulated environments.

The objective is to cross these pursuits with knowledge architecture layers.

Test plans are derived from business requirements, either on their own at process level (e.g as users’ stories or activities), or combined with functional requirements at application level (e.g as use cases). Both plans describe sequences of actions meant to be performed by organizational entities identified at enterprise level. Circumstances are then specified with regard to quality of service and technical requirements.

Mapping tests to knowledge architecture layers.

Test cases’ backbones are built from business scenarii fleshed out with instances mimicking identified entities from the business environment, and hypothetical decisions taken by entitled users. The generation of actual instances and decisions could be automated depending on the thoroughness and consistency of business requirements.

To be actually tested, business scenarii have to be embedded into functional ones, yet the distinction must be maintained between what pertains to business logic and what pertains to the part played by supporting systems.

By contrast, despite being built from functional scenarii, integration and acceptance ones are meant to be blind to business or functional contents, and cases can therefore be generated independently.

Unit and component tests are the building blocks of all test cases, the former rooted in business requirements, the latter in functional ones. As such they can be used for a built-in integration of tests and development.

TDD in the loop

Whatever its merits for phased projects, the development V-model suffers from a structural bias because flaws rooted in requirements, arguably the most damaging, tend to be diagnosed after designs are encoded. Test driven development (TDD) takes a somewhat opposite approach as code specifications are governed by testability. But reversing priorities may also introduce reverse issues: where the V-model’s verification and validation come too late and too wide, TDD’s may come hasty and blinkered, with local issues masking global ones. Applying the OODA (Observation, Orientation, Decision, Action) loop to test cases offers a way out of the dilemma:

  • Observation (West): test and assessment for component, integration, and acceptance test cases.
  • Orientation (North): assessment in the broader context of requirements space (business, functional, Quality of Service), or in the local context of application (East).
  • Decision (East): confirm or adjust the development paths with regard to functional scenarii, development backlog, or integration constraints.
  • Action (South): develop code at unit, component, or process levels.

From V to O: How to iterate across layers

As each station is meant to deal with business, functional, and operational test cases, the challenge is to ensure a seamless integration and reuse across iterations and layers.
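As a minimal illustration of that loop, the sketch below cycles a test case through the four stations; it is only a sketch, and all names, fields, and stub logic are assumptions introduced for the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    name: str
    layer: str                      # "business", "functional", or "operational"
    observations: List[str] = field(default_factory=list)

def observe(case: TestCase) -> List[str]:
    # Observation (West): run component, integration, or acceptance checks
    # and collect raw outcomes; here a stub returning a hypothetical result.
    return [f"{case.name}: outcome collected"]

def orient(case: TestCase, outcomes: List[str]) -> str:
    # Orientation (North): assess outcomes against the requirements space
    # (business, functional, QoS) or the local application context.
    return "adjust" if any("failed" in o for o in outcomes) else "confirm"

def decide(case: TestCase, assessment: str) -> str:
    # Decision (East): confirm or adjust the development path
    # (functional scenarios, backlog, integration constraints).
    return "rework backlog item" if assessment == "adjust" else "keep path"

def act(case: TestCase, decision: str) -> None:
    # Action (South): develop or amend code at unit, component, or process level.
    case.observations.append(decision)

def ooda_iteration(case: TestCase) -> TestCase:
    outcomes = observe(case)
    assessment = orient(case, outcomes)
    decision = decide(case, assessment)
    act(case, decision)
    return case

print(ooda_iteration(TestCase("rental pricing", "business")))
```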

Managing test cases

Whatever the method, test plans are meant to mirror requirements scope and structure. For architecture oriented projects, tests should be directly aligned with the targeted capabilities of architecture layers:

For architecture oriented projects test plans should be set with regard to capabilities.

For business driven projects, test plans should be set along business scenarii, with development units and associated test cases defined with regard to activities. When use cases, which cover the subset of activities supported by systems, are introduced upfront for both business and functional requirements, test plans should keep the distinction between business and functional requirements.

Stories (bottom left), activities (right), use cases (top left).

All things considered, test cases are to be comprehensively and consistently run against requirements distinct in goals (business vs architecture), layers (business, functions, platforms), or formalism (text, stories, use cases, …).

In contrast, test cases are by nature homogeneous as made of instances of objects, events, and behaviors; ontologies can therefore be used to define and manage these instances directly from models. The example below makes use of instances for types (propulsion, body), car model (Renault Clio), and car (58642).
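The referenced figure is not reproduced here, but a minimal sketch of the instances it stands for could read as follows; the Python representation and attribute names are assumptions, only the example values (propulsion, body, Renault Clio, car 58642) come from the text:

```python
# Categories (types) and their instances, mirroring the example:
# types (propulsion, body), car model (Renault Clio), car (58642).
propulsion = {"category": "Propulsion", "value": "combustion"}
body_style = {"category": "Body style", "value": "sedan"}

car_model = {
    "category": "Car model",
    "name": "Renault Clio",
    "propulsion": propulsion["value"],
    "body": body_style["value"],
}

rental_car = {
    "category": "Rental car",
    "id": "58642",
    "model": car_model["name"],
}

print(rental_car)
```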

Ontologies can then be used to manage test cases as instances of activities (development tests) or processes (integration and acceptance tests):

Test cases as instances of processes (integration) and activities (development)

The primary and direct benefit of representing test cases as instances in ontologies is to ensure a seamless integration and reuse of development, integration, and acceptance test cases independently of requirements context.
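Continuing the same toy representation, a development test case could be held as an instance of an activity and an acceptance one as an instance of a process; the structure below is an assumption meant only to show how both kinds can be managed, and reused, uniformly:

```python
development_test = {
    "kind": "activity instance",          # development test
    "activity": "set rental rate",
    "inputs": {"car": "58642", "model": "Renault Clio"},
    "expected": {"group": "economy"},
}

acceptance_test = {
    "kind": "process instance",           # integration / acceptance test
    "process": "rent a car",
    "steps": ["book", "pick up", "return", "invoice"],
    "uses": [development_test],           # reuse across layers and iterations
}

print(acceptance_test)
```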

But the ontological approach has broader and deeper consequences: by defining test cases as instances in line with environment data, it opens the door to their enrichment through deep learning.

Knowledgeable test cases

Names may vary but tests are meant to serve a two-facet objective: on one hand to verify the intrinsic qualities of artefacts as they are, independently of context and usage; on the other hand to validate their features with regard to extrinsic circumstances, present or in a foreseeable future.

That duality has logical implications for test cases:

  • The verification of intrinsic properties can be circumscribed and therefore be carried out based on available information, e.g: design, programming language syntax and semantics, systems configurations, etc.
  • The validation of functional features and behaviors is by nature open-ended with regard to scope and time-frame; it ensues that test cases have to rely on incomplete or uncertain information.

Without practical applications that distinction has been of little consequence, until now: while the digital transformation removes the barriers between test cases and environment data, the spreading of machine learning technologies multiplies the possibilities of exchanges.

Along the traditional approach, test cases rely on three basic sources of information:

  • Syntax and semantics of programming languages are used to check software components (a).
  • Logical and functional models (including patterns) are used to check applications designs (b).
  • Requirements are used to check applications compliance (c).

Traditional (a,b,c) and knowledge driven (d,e,f) test cases

With barriers removed, test cases as instances can be directly aligned with environment data, opening doors to their enrichment, e.g:

  • Random data samples can be mined from environments and used to deal with human instinctive or erratic behaviors. By nature, knee-jerk or latent behaviors cannot be checked with reasoned test cases, yet they don’t occur in a void but within known operational or functional circumstances; data analytics can be used to identify these quirks (d), as illustrated by the sketch after this list.
  • Systems being designed artifacts, components are meant to tally with models for structures as well as behaviors. Crossing operational data with design models will help to refine and hone integration and acceptance test cases (e).
  • Whereas integration tests put the focus on models and code, acceptance tests also involve the mapping of models to business and organizational concepts. As a corollary, test cases are to rely on a broader range of knowledge: external regulations, mined from environments, or embedded in organization through individual and collective skills (f).
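A minimal sketch of point (d), assuming interaction logs are available as event sequences; the sample data and the rarity heuristic are illustrative assumptions, not a prescribed technique:

```python
import random
from collections import Counter
from itertools import pairwise   # Python 3.10+

# Hypothetical interaction logs mined from the business environment.
logs = [
    ["open form", "enter dates", "submit"],
    ["open form", "submit", "enter dates", "submit"],   # erratic sequence
    ["open form", "enter dates", "cancel", "submit"],
]

# Count transitions between successive user actions.
transitions = Counter(t for session in logs for t in pairwise(session))

# Rare transitions hint at instinctive or erratic behaviors worth testing.
rare = [t for t, n in transitions.items() if n == 1]

# Turn a random sample of them into candidate test cases.
candidate_cases = [{"scenario": "erratic input", "transition": t}
                   for t in random.sample(rare, k=min(2, len(rare)))]
print(candidate_cases)
```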

Given the immersion of enterprises in digital environments, and assuming test cases are represented as ontological instances, these are already practical opportunities. But the real benefits of knowledge based test cases are to come from leveraging machine learning technologies across enterprise and knowledge architectures.

Further Reading

Focus: Individuals in Models

Preamble

Models are meant to characterize categories of given (descriptive models), designed (prescriptive models), or hypothetical (predictive models) individuals.

Identity Matter (Terracotta Soldiers of Emperor Qin Shi Huang)

Insofar as purposes can be kept apart, the discrepancies in targeted individuals can be ironed out, notably by using power-types. Otherwise, e.g if modeling concerns mix business analysis with software engineering, meta-models are introduced as jack-of-all-trades to deal with mixed semantics.

But meta-models generate exponential complexity when used across domains, not to mention the open and fuzzy ones of business intelligence. What is at stake can be better understood through the way individuals are identified and represented.

Partitions & Abstraction

Since models are meant to classify instances with regard to concerns, mixing concerns is to entail mixed classifications, horizontally across domains (e.g business and accounting), or vertically along engineering cycles (e.g business and engineering).

That can be achieved with power-types, meta-classes, or ontologies.

Delegation & Power-types

Given that categories (or classes or types) represent sets of instances (given, designed, or simulated), they may by themselves be regarded as symbolic instances used to manage features commonly valued by their own members.

Power-types are simultaneously categories and instances.

This approach is consistent as well as effective provided the semantics and identities of instances are shared by all agents concerned, e.g business and technical aspects of car rentals. Yet it falls short on both accounts when abstraction levels are set across domains, inducing connectors with different semantics and increased complexity.

Abstraction & Sub-classes

Sub-classes often appear as a way to overcome the difficulty, as illustrated by Hepp Research’s Vehicle Sales Ontology: despite being set at different abstraction levels, instances for cars and models are defined and identified uniformly.


Even if, in that case, the model fulfils the substitution principle for external consistency, sub-types would fall short if engineering concerns were to be taken into account, because the set of individual car models could then differ depending on the perspective.

Meta-classes & Stereotypes

The overuse of meta-classes and stereotypes epitomizes the escapism school of modeling. That may be understandable, if not helpful, for the former which is by nature a free pass to abstraction; less so for the latter which is supposed to go the other way toward the specialization of meta-classes according to specific profiles. It ensues that stereotypes should never be used on their own or be extended by another stereotype.

As it happens, such consistency concerns appear to be easily diluted when stereotypes are jumbled with meta constructs; e.g:

  • Abstract stereotype (a).
  • Strong (aka class) inheritance of abstract stereotype from concrete meta-class (b).
  • Weak inheritance (aka aspect) between stereotypes (c, f).
  • Meta-constraint used to map Vehicle to methodology stereotype (d).
  • Domain specific connector to stereotype (e).


Without comprehensive and consistent semantics for instances and abstractions, individuals cannot be solidly mapped to models:

  • Individual concepts (car, horse, boat, …) have no clear mooring.
  • Actual vehicles cannot be tied to the meta-class or the abstract stereotype.
  • There is no strong inheritance tying individuals (models and rental cars) to an identification mechanism.

Assuming that detachment is not an option, the basis of models must be reset.

Ontologies & open concepts

Put in simple words, ontologies are meant to examine the nature and categories of existence, in general (metaphysics) or in specific contexts. As for the latter, they can be applied to individuals according to the nature of their existence (aka epistemic identity): concepts, documents, categories and aspects.


As it happens, sorting things out makes the whole paraphernalia of meta-classes and stereotypes no longer needed because the semantics of inheritance and associations can be set according to the nature of individuals.

That can be illustrated with OWL 2 and the Caminao Ontological Kernel (CaKe_WIP); a minimal code sketch follows the list:

  • Categories associated with business entities managed through symbolic counterparts (_#): rental cars and vehicle models.
  • Partitions (_2): structural (Body style) or functional (Rental group).
  • Instances of actual objects (Rental car: Clio58642) and categories (Model: Clio, Body style: Sedan, Propulsion: combustion).
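A minimal sketch of such instances with the rdflib library, assuming an ad hoc namespace; the class and property names mirror the bullets above but are simplified stand-ins for the actual CaKe definitions:

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

CAKE = Namespace("http://example.org/cake#")   # hypothetical namespace
g = Graph()

# Categories managed through symbolic counterparts (_#).
for cls in ("RentalCar", "CarModel", "BodyStyle", "Propulsion"):
    g.add((CAKE[cls], RDF.type, RDFS.Class))

# Instances of categories, and of an actual object.
g.add((CAKE.Sedan, RDF.type, CAKE.BodyStyle))
g.add((CAKE.Combustion, RDF.type, CAKE.Propulsion))
g.add((CAKE.Clio, RDF.type, CAKE.CarModel))
g.add((CAKE.Clio, CAKE.bodyStyle, CAKE.Sedan))
g.add((CAKE.Clio, CAKE.propulsion, CAKE.Combustion))
g.add((CAKE.Clio58642, RDF.type, CAKE.RentalCar))
g.add((CAKE.Clio58642, CAKE.model, CAKE.Clio))
g.add((CAKE.Clio58642, RDFS.label, Literal("rental car #58642")))

print(g.serialize(format="turtle"))
```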

With individuals solidly rooted in targeted domains, models can then fully serve their purposes.

What’s the Point

It’s worth recalling that models have to be built on purpose, which cannot be achieved without a clear understanding of context and targets. Here are some examples:

Quality arguably comes first:

  • External consistency: to be checked by mapping individuals in analysis categories to big data.
  • Internal consistency: to be checked by mapping individuals in design classes to observed or simulated run-time components.
  • Acceptance: automated tests generation.

Then, individuals could be used to map models to past and future, the former for refactoring, the latter for business intelligence.

Refactoring looks to the past but is frustrated by undocumented legacy code. Combined with machine learning, individuals could help to bridge the gap between code and models.

Business Intelligence looks the other way as it is meant to map hypothetical business objects and behaviors to the structures and semantics already managed by information systems. In a reversal of the legacy case, individuals could provide cues to new business meanings.

Finally, maturity assessment and optimization of enterprise processes fully depend on the reliability of their basis.

Further Reading

Deep Blind Testing

Preamble

Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.

Unexpected Outcome (Ariel Schlesinger)

That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where Deep Learning may help.

What to Look For

As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need of setting things apart depending on what can be known and how, and of building the scripts accordingly:

  • Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
  • Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
  • Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users behaviors and expectations which, by nature, cannot be fully anticipated.
  • Technical requirements: assuming business and functional requirements, as well as users’ expectations for service, are satisfied, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.

Automated testing has to take into account these differences in scope and nature, from bounded and defined specifications to boundless, fuzzy and changing circumstances.

Automated Software Testing

Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by dependency on explicit knowledge, and consequently by the need of some “manual” teaching. That hurdle may be overcome by the deep learning ability to get direct (aka automated) access to implicit knowledge.

Reconnaissance: Known Knowns

Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for unit tests, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
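As an illustration of that mining step, the sketch below scans hypothetical log records for the events that most often precede failures; the log format and the frequency heuristic are assumptions standing in for an actual learning pipeline:

```python
from collections import Counter

# Hypothetical log records: (component, event, status).
log = [
    ("booking", "create", "ok"),
    ("payment", "authorize", "timeout"),
    ("booking", "create", "ok"),
    ("payment", "authorize", "timeout"),
    ("invoice", "issue", "ok"),
]

# Events whose successor failed are candidates for critical test cases.
critical = Counter(
    (prev[0], prev[1])
    for prev, curr in zip(log, log[1:])
    if curr[2] != "ok"
)

# The most frequent predecessors of failures seed integration test cases.
for (component, event), count in critical.most_common(3):
    print(f"test case candidate: {component}.{event} (seen before {count} failure(s))")
```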

Exploration: Known Unknowns

Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:

  • Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
  • In return, the relevancy of observations can be assessed with regard to business objectives, improved, and fed to the policy module in charge of defining test cases.

Blind Errands: Unknown Unknowns

Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best designed handrails. On one hand, and due to their very nature, such hazards are not to be easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.

Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning combining random simulation with automated reinforcement is what makes the specificity of deep learning.

From Non-regression to Self-improvement

As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), users interfaces (for non shared interactions). Then, since test cases are also run across swim-lanes, it opens the door to feedback, e.g unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.

Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.

Further Reading

 

Quality Circles

Generally speaking, quality may refer to intrinsic properties, functional characteristics, or some external yardstick. With regard to software engineering it would mean code, users experience, and operations, each with its own specific stakeholders and criteria.

Circling Quality Metrics (Martine Franck)

On one side, traditional phased approaches to QA are meant to deal with those different aspects, yet they fall short when those facets are woven together across enterprise architectures and business environments. On the other side agile quality solutions may also fail to cope with transverse business functions shared across architectures. Hence the need of a bird’s-eye view putting quality into a broader enterprise perspective.

Who Cares for Quality

Whatever the attributes considered, quality should clearly encompass actual products as well as their uses. For that purpose quality has to be assessed with regard to the requirements as expressed by business stakeholders, users, or systems engineers and administrators. Given the constraints and specificity of changing environments, objective yardsticks are of limited use and quality is often to be assessed for the lack thereof:

  • Business requirements: the product doesn’t meet expectations with regard to business contents (objects and logic).
  • Functional requirements: while the product meets business requirements, the part played by supporting systems doesn’t meet users’ expectations.
  • Quality of service: while the product meets business and functional requirements, users’ experience doesn’t meet expectations.
  • Technical requirements: while the product meets users’ expectations (business, functional, and ease of use), there are problems with deployment, maintenance, or operations.

Quality is best defined with regard to requirements and checked with regard to architectures

Crossing those concerns, quality assessment has to deal with two primary challenges:

  • Since assessment at each level can be conditioned by lower levels, outcomes must be described and traced accordingly. That is to be the role of quality management.
  • Since assessment has to cover both products and their use during their shelf life, uncertainty will have to be taken into account. That is to be the role of quality assurance.

A third aspect can be added for externalities, i.e factors whose impact cannot be clearly or uniquely attributed: external risks are not under control, ergonomy cannot be accurately measured, and the assessment of ROI for processes improvement remains a matter of insight.

Quality Management & Documentation

The primary objective of quality management is to identify, define, and track the targeted outcomes and the factors deemed to affect their characteristics: contracts, products traceability, models reuse, tests, etc.

Depending on target and development model, management footprint can be defined at three levels of detail:

  • With regard to the use of products in their operational context, the focus is to be on deployed systems compared to textual specifications (a).
  • With regard to the intrinsic properties of deliverables, the focus is to be extended to software components (b).
  • When products are to be deployed in different environments, or to be maintained or modified along time, additional documentation will be necessary to trace changes to functional (c) and enterprise (d) architectures.

Assessment at each level may be conditioned by lower levels

In any case (i.e with or without intermediate documentation), traceability is to be a cornerstone of quality management:

  • Business processes with regard to business objectives, e.g how to assess insurance premiums or compute missile trajectory.
  • Code with regard to textual requirements.
  • System functionalities with regard to business processes. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
  • System components as technical implementations of functionalities targeted to different users, locations, and configurations.

And another dimension of traceability is required when quality assurance has to deal with uncertainty, risks, and decision-making.

From Management to Assurance

The objective of quality assurance is to define, carry on, and monitor operations in order to improve the characteristics concerned and reduce the probability that something will go amiss during the planned shelf life of products.

For that purpose assurance footprint and granularity must be aligned with the layers defined by quality management:

  • Integration and acceptance tests are carried out in reference to requirements on the assumption that software components have been validated.
  • Code checking and unit tests are carried out in reference to business and functional requirements on the assumption that their consistency has been checked.
  • External consistency is checked with regard to business requirements independently of functional or technical ones.
  • Internal consistency is checked with regard to functional requirements on the assumption that the business requirements (external) consistency has been checked.

Footprint & granularity of management and assurance must be congruent

Those operations, meant to deal with the quality of each layer, have to be combined with schemes of secure transformations between layers, e.g reuse, patterns or code generation. That would put quality on a sound basis were it not for externalities.

Quality Assurance & Risk Management

As already noted, QA has to take into account uncertainties and risks both external (business or technical environments) and internal (development processes). Assuming quality assurance has to include risk assessment, policies should be driven by risk acceptance levels:

  • No risk: quality assurance can be designed so as to eliminate some uncertainties (e.g reuse and code generation).
  • No risk taken: whereas business and technology options are not sure bets, some must be carried out regardless of what happens in the environment (e.g unexpected regulatory change or delay in critical technology). In that case QA must provide fallback solutions.
  • Managed risks: some defects or delays can be priced and weighted by likelihood. In that case QA should monitor the risks and balance their cost (e.g resources consumption, late delivery) against the cost of preventive (e.g more systematic checks on consistency, additional staff) or corrective (e.g tests or maintenance) measures, as illustrated by the toy calculation after this list.
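As a toy illustration of the managed-risks arithmetic, with made-up figures: the expected cost of a delay (likelihood times impact) is compared with the cost of the preventive measure:

```python
# Hypothetical figures for a managed risk: late delivery of a component.
probability_of_delay = 0.2          # likelihood weighted from past projects
cost_if_delay = 50_000              # e.g. penalties and extra resources
cost_of_prevention = 8_000          # e.g. additional staff and checks

expected_loss = probability_of_delay * cost_if_delay   # 10,000
if cost_of_prevention < expected_loss:
    print("prevention is worth its cost")
else:
    print("accept and monitor the risk")
```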

Quality management should be set at the nexus between risks management and quality assurance.

That will put quality management at the nexus between regulatory compliance, risks management, and quality assurance.

Further Readings

Models Truth and Disproof

“If you cannot find the truth right where you are, where else do you expect to find it?”

Dōgen Zenji

Summary

Software engineering models can be regrouped in two categories depending on their target: analysis models represent business context and concerns, design ones represent systems components. Whatever the terminologies, all models are to be verified with regard to their intrinsic qualities, and validated with regard to their domain of discourse, respectively business objects and activities (analysis models), or software artifacts (design models).

Internal & External Consistency (Chris Engman)

Checking for internal consistency is arguably straightforward as proofs can be built with regard to the syntax and semantics of modeling (or programming) languages. Things are more complicated for external consistency because hypothetical proofs would have to rely on what is known of the business domains, whose knowledge is by nature partial and specific, if not hypothetical. Nonetheless, even if general proofs are out of reach, the truth of models can still be disproved by counter examples found among the instances under consideration.

Domains of Discourse: Business vs Engineering

With regard to systems engineering, domains of discourse cover artifacts which are, by “construct”, fully defined. Conversely, with regard to business context and objectives, domains of discourse have to deal with instances whose definitions are a “work in progress”.

Domains of Discourse: Business vs Engineering

That can be illustrated by analysis models, which are meant to translate requirements into system functionalities, as opposed to design ones, which specify the corresponding software artifacts. Since software artifacts are supposed to be built from designs, checking the consistency of the mapping is arguably a straightforward undertaking. That’s not the case when the consistency of analysis models has to be checked against objects and activities identified by business’ domains of discourse, possibly with partial, ambiguous, or conflicting descriptions. For that purpose some logic may help.

Flat Models & Logic

Business requirements describe objects, events, and activities, and the purpose of modeling is to identify those instances and regroup them into subsets built according to their features and relationships.

How to organize instances of business objects & activities into subsets

As far as models make no use of abstractions (“flat” models), instances can be organized using basic set operators and epistemic (i.e relating to the degree of validation) constraints with regard to existence (m/d), uniqueness (x/o), and change (f/m):

Notation for epistemic constraints

Using the EU-Rent Car example:

  • Rental cars are exclusively and definitively partitioned according to models (mxf).
  • Models are exclusively partitioned according to rental group (mxm), and exclusively and definitively according to body style (mxf).
  • Rental cars are partitioned by derivation (/) according to group and body style.

Flat model using basic set operators for exclusive (cross) and final (grey) partitions (2)

Such models are deemed to be consistent if all instances are consistently taken into account.

Flat Models External Consistency

Assuming that the models’ backbone can be expressed logically, their consistency can be formally verified using a logical language, e.g Prolog.

To begin with, candidate subsets are obtained by combing requirements for core modeling artifacts expressed as predicates (21 for descriptions of actual objects, 121 for descriptions of actual locations, 20 for descriptions of symbolic ones, 22 for descriptions of symbolic partitions), e.g:

  • type(20, manufacturer).
  • type(21, rentalCar).
  • type(22,  carModel).
  • type(22, rentalGroup).
  • type(22,  bodyStyle).
  • type(121, depot).

Partitions and functional connectors (220 for symbolic reference, 222 for partitions, 221 for actual connection), e.g:

  • connect(222, rentalCar,carModel, mxf).
  • connect(222, carModel, group, mxm).
  • connect(222, carModel, bodyStyle,mxf).
  • connect(220, manufacturer_, carModel, manufacturer, mof).
  • connect(121, location, rentalCar, depot, mxt).

Finally, features and structures (320 for properties, 340 for operations), e.g:

  • feature(340, move_to, depot).
  • feature(320, address).
  • feature(320, location).
  • member(manufacturer,address,mom).
  • member(rentalCar,location,mxm).
  • member(rentalCar,move_to,mxm).

Those candidate descriptions are to be assessed and improved by applying them to sets of identified occurrences taken from requirements, the objective being to map each instance to a description: instance(name, term()), e.g:

  • instance(sedan,carModel(f1(F1),f2(F2))).
  • instance(coupe,carModel(f1(F1),f2(F2))).
  • instance(ford, manufacturer(f6(F6),f7(F7))).
  • instance(focus, rentalCar(f6(F6),f7(F7))).
  • instance_(manufacturer_,focus,ford).

Using a logical interpreter, validation can then be carried out iteratively by looking for counter examples that could disprove the truth of the representations, as sketched after the list:

  • All instances are taken into account: there is no instance N without instance(N,Structure).
  • Logical consistency: there is no instance N with conflicting partitioning (native and derived).
  • Completeness: there is no instance type(X,N,T(f1,f2,..)) with undefined feature fi.
  • Functional consistency: there is no instance of relation R (native and derived) without a consistent type relation(X, R, Origin, Destination, Epistemic).
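A minimal re-expression of these checks in Python, assuming the facts above are held in plain dictionaries; it is only meant to show how counterexamples can be looked for mechanically, not to replace a logical interpreter:

```python
# Facts, mirroring a few of the Prolog-like predicates above.
types = {"rentalCar": ["f5"], "carModel": ["f1", "f2"], "manufacturer": ["f6", "f7"]}
instances = {
    "sedan": ("carModel", {"f1": "...", "f2": "..."}),
    "ford": ("manufacturer", {"f6": "...", "f7": "..."}),
    "focus": ("rentalCar", {"f5": "..."}),
}
identified = ["sedan", "ford", "focus", "clio"]   # occurrences taken from requirements

# (1) All instances are taken into account.
unmapped = [n for n in identified if n not in instances]

# (2) Completeness: no instance with an undefined feature.
incomplete = [
    n for n, (t, features) in instances.items()
    if set(types[t]) - set(features)
]

print("counterexamples:", unmapped or incomplete or "none")
```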

It must be noted that the approach is not limited to objects and is meant to encompass the whole scope of requirements: actual objects, symbolic representations, business logic, and processes execution.

Multilevel Models: From Partitions to Sub-types

Flat models fall short when specific features are to be added to the elements of partition subsets, and in that case sub-types must be introduced. Yet, and contrary to partitions, sub-types come with abstractions: set within a flat model (i.e without sub-types), Car model fully describes all instances, but when sub-types sedan, coupe, and convertible are introduced, the Car model base type is nothing more than a partial (hence abstract) description.

From partition to sub-types: subsets descriptions are supplemented with specific features.

Whereas that difference may seem academic, it has direct and practical consequences when validation is considered because consistency must then be checked for every level, concrete or abstract.

LSP & External Consistency

As it happens, the corresponding problem has been tackled by Barbara Liskov for software design: the so-called Liskov substitution principle (LSP) states that if S is a sub-type of T, then instances of T may be replaced with instances of S without altering any of the desirable properties of the program.

Translated to analysis models, the principle would state that, given a set of instances, a model must keep its consistency independently of the level of abstraction considered. As a corollary, and assuming a model abides by the substitution principle, it would be possible to generalize the external consistency of a detailed level to the whole model whatever the level of abstraction. Hence the importance of compliance with the substitution principle when introducing sub-types in analysis models.

All instances must be accounted for whatever the level of abstraction

Applying the Substitution Principle to Analysis Models

Abstraction is arguably the essence of requirements modeling as its purpose is to bring specific and changing concerns under a common, consistent, and lasting conceptual roof. Yet, the two associated operations of specialization and generalization often receive very little scrutiny despite the fact that most of the related pitfalls can be avoided if the introduction of sub-types (i.e levels of abstraction) is explicitly justified by partitions. And that can be achieved by the substitution principle.

First of all, and as far as requirements analysis is concerned, sub-types should only be introduced for specific features, properties or operations. Then, epistemic constraints can be used to tally the number of specialized instances with the number of generalized ones, and check for the possibility of functional discrepancies:

  • Discretionary (or conditional or non exhaustive) partitions (d__) may bring about more instances for the base type (nb >= ∑nbi).
  • Overlapping (or duplicate or non isolated) partitions (_o_) may bring about less instances for the base type (nb <= ∑nbi).
  • Assuming specific features, mutable (or reversible) partitions (__m) mean that features may differ between levels; otherwise (same features) sub-types are not necessary.

Epistemic constraints on partitions can be used to enforce the LSP

Using a Prolog-like language, the only modification will concern the syntax of predicates, with structures replaced by lists of features:

  • type(20, manufacturer,[f6,f7]).
  • type(21, rentalCar,[f5]).
  • type(22,  carModel,[f1,f2]).
  • type(22, rentalGroup,[f9]).
  • type(22,  bodyStyle,[f8]).
    • type(20, bodyStyle:sedan, [f11,f12]).
    • type(20, bodyStyle:coupe, [f13]).
    • type(20, bodyStyle:convertible, [f14]).
  • type(121, depot,[f10]).

The logical interpreter could then be used to map the sub-types to partitions and check for substitution.
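A minimal sketch of the counting checks implied by the substitution principle, assuming instance counts are available per sub-type; the epistemic flags follow the m/d, x/o, f/m notation introduced above, and the figures are made up:

```python
def check_partition(base_count: int, subtype_counts: dict, constraint: str) -> list:
    """Compare base-type and sub-type populations under an (existence, uniqueness,
    change) constraint such as 'mxf', and return possible counterexamples."""
    issues = []
    total = sum(subtype_counts.values())
    existence, uniqueness, _ = constraint
    if existence == "m" and uniqueness == "x" and base_count != total:
        # Mandatory and exclusive partition: populations must tally exactly.
        issues.append(f"base {base_count} != sum of sub-types {total}")
    if existence == "d" and base_count < total:
        # Discretionary partition: the base type may only have *more* instances.
        issues.append("discretionary partition with fewer base instances than sub-types")
    if uniqueness == "o" and base_count > total:
        # Overlapping partition: the base type may only have *fewer* instances.
        issues.append("overlapping partition with more base instances than sub-types")
    return issues

# Body style partition of car models, constrained as mandatory/exclusive/final (mxf).
print(check_partition(12, {"sedan": 5, "coupe": 4, "convertible": 3}, "mxf"))   # consistent
print(check_partition(13, {"sedan": 5, "coupe": 4, "convertible": 3}, "mxf"))   # counterexample
```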

Further Reading

EA Documentation: Taking Words for Systems

In so many words

Given the clear-cut and unambiguous nature of software, how to explain the plethora of “standard” definitions pertaining to systems, not to mention enterprises and architectures?

Documents and Systems: which ones nurture the others (Gilles Barbier).

Tentative answers can be found with reference to the core functions documents are meant to support: instrument of governance, medium of exchange, and content storage.

Instrument of Governance: the letter of the law

The primary role of documents is to support the continuity of corporate identity and activities with regard to their regulatory and business environments. Along that perspective documents are to receive legal tender for the definitions of parties (collective or individuals), roles, and contracts. Such documents are meant to support the letter of the law, whether set at government, industry, or corporate level. When set at corporate level that letter may be used to assess the capability and maturity of architectures, organizations, and processes. Whatever the level, and given their role for legal tender or assessment, those documents have to rely on formal textual definitions, possibly supplemented with models.

Medium of Exchange: the spirit of the law

Independently of their formal role, documents are used as medium of exchange, across corporate entities as well as internally between their organizational units. When freed from legal or governance duties, such documents don’t have to carry authorized or frozen interpretations and assorted meanings can be discussed and consolidated in line with the spirit of the law. That makes room for model-based documents standing on their own, with textual definitions possibly set in the background. Given the importance of direct discussions in the interpretation of their contents, documents used as medium of (immediate) exchange should not be confused with those used as means of storage (exchange along time).

Means of Storage: letter only

Whatever their customary functions, documents can be used to store contents to be reinstated at a later stage. In that case, and contrary to direct (aka immediate) exchange, interpretations cannot be consolidated through discussion but have to stand on the letter of the documents themselves. When set by regulatory or organizational processes, canonical interpretations can be retrieved from primary contexts, concerns, or pragmatics. But things can be more problematic when storage is performed for its own purpose, without formal reference context. That can be illustrated by legacy applications whose binary code may be accompanied by self-documented source code, source with documentation, source with requirements, generated source with models, etc.

Documentation and Enterprise Architecture

Assuming that the governance of structured social organizations must be supported by comprehensive documentation, documents must be seen as a necessary and intrinsic component of enterprise architectures and their design should be aligned on concerns and capabilities.

As noted above, each of the basic functionalities comes with specific constraints; as a consequence a sound documentation policy should not mix functionalities. On that basis, documents should be defined by mapping purposes with users across enterprise architecture layers:

  • With regard to corporate environment, documentation requirements are set by legal constraints, directly (regulations and contracts) or indirectly (customary framework for transactions, traceability and audit).
  • With regard to organization, documents have to meet two different objectives. As a medium of exchange they are meant to support the collaboration between organizational units, both at business level (processes) and across architecture levels. As an instrument of governance they are used to assess architecture capabilities and processes performances. Documents supporting those objectives are best kept separate if negative side effects are to be avoided.
  • With regard to systems functionalities, documents can be introduced for procurements (governance), development (exchange), and change (storage).
  • Within systems, the objective is to support operational deployment and maintenance of software components.

Documents’ purposes and users

The next step will be to integrate documents pertaining to actual environments and organization (brown background) with those targeting symbolic artifacts (blue background).

Models are used to describe actual or symbolic objects and behaviors

That could be achieved with MBE/MDA approaches.

Further Readings

 

Ergonomy, Fingertips Errors & Automated Testing

Objective

When interacting with systems, users do things they aren’t supposed to do and walk along irrelevant, even unthinkable, paths that can put tests designers at a loss. This apparent chink between users’ conscious self and their fingertips can be explained by the way humans assess situations and make decisions. Curtailing it is the aim of ergonomics.

Anatomy of Errors: from brain to fingers (Rembrandt)

Taking a leaf from A. Tversky and D. Kahneman (who received the 2002 Nobel Prize in Economics), decision-making relies on two cognitive mechanisms:

  1. The first one “operates automatically and quickly, with little or no effort and no sense of voluntary control”. It’s put in use when actual situations must be assessed and decisions taken rapidly if not instantly.
  2. The second one “allocates attention to the effortful mental activities that demand it, including complex computations”. It’s put in use when situations can be assessed with regard to past experience in order to support informed decision-making.

That distinction can be directly applied to users’ behaviors interacting with systems:

  1. Intuitive behavior: decisions are taken on the basis of the visual context and options as presented by users interfaces before taking into account underlying business contents and logic.
  2. Rational behavior: decisions are taken on the basis of business contents and logic disregarding supporting systems interfaces.

Set in context, that distinction can be put in parallel (but not confused) with the one between domain and functional requirements, the former dealing rationally with business objects and logic, the latter putting the former to use through interactions with supporting systems.

Functional requirements describe the part played by supporting systems

Assuming that business logic should not be contingent on supporting systems interfaces, the best option would be to test its implementation independently of users interactions; moreover, tests targeting intuitive behaviors (i.e not directly based on domain specific contents), could then be generated automatically.

Looking for Errors

Given that testing is meant to find flaws in deliverables, tests are certainly more effective when designers know what they are looking for.

For that purpose phased approaches rely on sequences of differentiated tests dealing successively with programming (unit tests), functional requirements (integration tests), and business requirements (acceptance tests).  The unfortunate downside of those policies is that the most wide-ranging flaws are the last to be looked for, with the risk of being found after cascading and costly consequences for functionalities and programs.

Phased and Iterative approaches to tests

Conversely, agile approaches follow iterative policies, with each development cycle combining the definition, programming, and tests of software products. When properly implemented those policies significantly improve the early detection and correction of errors whatever their origin. Yet, since there is no explicit management of intermediate outcomes, it’s difficult to differentiate the tests according to the kind of errors to look for, e.g faulty business rules implementation or flawed user interface.

Architecture driven approaches may provide an answer, with requirements unambiguously sorted out depending on their architectural footprint: business contents or system functionalities. As a corollary, tests could also be designed along the same lines, targeting business rationale or human behavior.

Errors in Mirrors

Acceptance tests being performed with regard to requirements, they should be designed along requirements taxonomy, respectively for business logic, users’ interactions, quality of services, and components implementation. Being aligned on requirements, those tests can be neatly defined with regard to closed sets of specifications, functional or otherwise.

Functional tests have to expect the unexpected

But that’s not the case for users’ interactions because people behaviors are not fully predictable; hence, while tests can be systematically designed with regard to the set of users’ actions framed by business and functional requirements, there is no way to comprehensively and unambiguously check for all and every possible behavioral contingencies. That will make for three levels of functional tests:

  1. Implementation of business logic: tests should be designed directly from business requirements, independently of interactions with users.
  2. Implementation of scenarii: while interactions are defined in reference to business logic, their validation should focus on the presentation of contents and dialog control.
  3. Users exceptions: in addition to inputs validity, already checked with business logic, and users’ actions, supposedly secured by interaction scenarii, it is necessary to check that unexpected behaviors have been properly considered.

How to check that unexpected behaviors have been properly considered?

In other words, functional tests will have to look simultaneously for errors in software (defined with regard to a finite set of requirements), and for users’ mistakes (set in an open range of behaviors). As if tests designers were to mirror users errors in order to look for software ones. So, assuming that errors in business logic and interactions have been considered, what should still be checked, and how ?

Fingertips Errors

When faced with choices, users bank on mental maps combining graphical and business layers, with the implicit assumption that maps’ contexts and concerns are kept up to date. Those maps combine three communication mechanisms:

  • Languages, natural or specific, use syntax and semantics to define business contents, logic, and operations.
  • Icons use similarity for the visual representation of business operations or functional primitives (e.g create, delete, etc).
  • Signals use proximity to draw users’ attention to predefined events (e.g sounds for operations completion or incoming emails).

While language-based interactions are supposedly fully covered by business and functional tests, icons and signals make room for “fingertips” reactions which cannot be directly framed within business logic or functional scenarii, and therefore cannot be comprehensively checked for erroneous behaviors.

Icons and signal based communication can trigger unexpected behaviors.

Yet, if instinctive reactions preclude rational considerations, decisions may be swayed by analogies and associations before being informed by the relevant business contents. To prevent that risk, test scenarii built on business logic and functional interactions should be extended in order to take into account the intuitive aspects of users’ behaviors.

Mental Maps & Automated Tests

As noted above, mental maps are built on three layers, one deep (language semantics) and two shallow (icons and signals). While the shallow layers are supposed to reference the deep one, icons and signals may induce instinctive behaviors independently of the referenced business logic. Those behaviors can be triggered by two kinds of mechanisms:

  • Analogy: users will look for similarities and familiar configurations.
  • Proximity: users will look for continuity with regard to scope and operations.

Clearly, lapses in such behaviors will normally escape tests designed for business and functional requirements; yet, by being driven by self-contained mechanisms, intuitive behaviors can be checked independently of references to business contents. And that may open the door to automated tests generation.

With regard to similarities, tests should look for possible confusion between:

  • Objects with common representation but specific features (inheritance).
  • Operations with shared semantics but different scope (polymorphism).
  • Sequences with shared operations but different timing.

With regard to proximity, tests should look for possible confusion between:

  • Objects and their parts, or between their parts (structural proximity).
  • Operations usually associated into the same activity (functional proximity).
  • Operations usually executed successively (chronological proximity).

Scripts for such tests could be generated through pattern-matching and run by wizard applications.
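A minimal sketch of such generation, assuming the application’s operations are catalogued with a name, a scope, and a typical predecessor; the matching rules follow the similarity and proximity criteria listed above, and the catalogue itself is hypothetical:

```python
from itertools import combinations

# Hypothetical catalogue of user-facing operations.
operations = [
    {"name": "delete", "scope": "booking", "follows": "confirm"},
    {"name": "delete", "scope": "customer", "follows": "search"},
    {"name": "confirm", "scope": "booking", "follows": "review"},
]

confusion_tests = []

# Similarity: same operation name (shared semantics) but different scope.
for a, b in combinations(operations, 2):
    if a["name"] == b["name"] and a["scope"] != b["scope"]:
        confusion_tests.append(
            {"check": "polymorphism",
             "pair": (f'{a["name"]}/{a["scope"]}', f'{b["name"]}/{b["scope"]}')}
        )

# Proximity: operations usually executed successively (chronological proximity).
for a in operations:
    for b in operations:
        if b["follows"] == a["name"]:
            confusion_tests.append({"check": "chronological proximity",
                                    "pair": (a["name"], b["name"])})

for test in confusion_tests:
    print(test)
```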

Further Reading

External Links

Thinking about Practices

A few preliminary words

A theory (aka model) is a symbolic description of contexts and concerns. A practice is a set of activities performed in actual contexts. While the latter may be governed by the former and the former developed from the latter, each should stand on its own merits whatever its debt to the other.

Good practices hold sway without showing off theoretical subtext (Demetre Chiparus)

With regard to Software Engineering, theory and practice are often lumped together to be marketed as snake oil, with the unfortunate consequence of ruining their respective sways.

Software Engineering: from Requirements heads to Programs tails

While computer science deals with the automated processing of symbolic representations, software engineering uses it to develop applications that will support actual business processes; that may explain why software engineering is long on methods but rather short on theory.

Yet, since there is a requirements head (for business processes) to the programming tail (for automated processing), it would help to think about some rationale in between. Schools of thought can be summarily characterized as formal or procedural.

How to make program tails from requirements heads

Formal approaches try to extend the scope of computing theories to functional specifications; while they should be the option of choice, their scope is curtailed by the lack of structure and formalism when requirements are expressed in natural languages.

Procedural approaches deal with the difficulty of capturing users requirements by replacing theoretical assumptions about software artifacts with guidelines and best practices for modus operandi. The fault here is that the absence of standardized artifacts makes the outcomes unyielding and difficult to reuse.

Procedural (p), formal (f), and agile (a) approaches to software development.

Pros and cons of those approaches point to what should be looked for in software engineering:

  • As illustrated by Relational theory and State machines, formal specifications can support development practice providing requirements can be directly aligned with computing.
  • As illustrated by the ill-famed Waterfall, development practices should not be coerced into one-fits-all procedures if they are to accommodate contexts and tasks diversity.

Agile answers to that conundrum have been to focus on development practices without making theoretical assumptions about specifications. That left those development models halfway, making room for theoretical complements. That situation can be clarified using Scott Ambler’s 14 best practices of AMDD:

  1. Active Stakeholder Participation / How to define a stakeholder ?
  2. Architecture Envisioning / What concepts should be used to describe architectures and how to differentiate architecture levels ?
  3. Document Continuously / What kind of documents should be produced and how should they relate to life-cycle ?
  4. Document Late / How to time the production of documents with regard to life-cycle ?
  5. Executable Specifications / What kind of requirements taxonomy should be used ?
  6. Iteration Modeling / What kind of modeling paradigm should be used ?
  7. Just Barely Good Enough (JBGE) artifacts /  How to assess the granularity of specifications ?
  8. Look Ahead Modeling / How to assess requirements complexity.
  9. Model Storming / How to decide the depth of granularity to be explored and how to take architectural constraints into account ?
  10. Multiple Models / Even within a single modeling paradigm, how to assess model effectiveness ?
  11. Prioritized Requirements / How to translate users’ value into functional complexity when there is no one-to-one mapping ?
  12. Requirements Envisioning / How to reformulate a lump of requirements into structured ones ?
  13. Single Source Information / How to deal with features shared by multiple users’ stories ?
  14. Test-Driven Design (TDD) / How to differentiate between business-facing and technology-facing tests ?

That would bring the best of two worlds, with practices inducing questions about the definition of development artifacts and activities, and theoretical answers being used to refine, assess and improve the practices.

Takes Two To Tango

Debates about the respective benefits of theory and practice are meaningless because theory and practice are the two faces of engineering: on one hand the effectiveness of practices depends on development models (aka theories), on the other hand development models are pointless if not validated by actual practices. Hence the benefits of thinking about agile practices.

Along that reasoning, some theoretical considerations appear to be of particular importance for good practice:

  • Enterprise architecture: how to define stakes and circumscribe organizational responsibilities.
  • Systems architecture: how to factor out shared architecture functionalities.
  • Products: how to distinguish between models and code.
  • Metrics: how to compare users’ value with development charge.
  • Release: how to arbitrage between quality and timing.

Such questionings have received some scrutiny from different horizons that may eventually point to a comprehensive and consistent understanding of software engineering artifacts.

Further Reading

External Links

Tests in Driving Seats

Objective

Contrary to its manufacturing cousin, a long-time devotee of preventive policies, software engineering is still ambivalent regarding the benefits of integrating quality management with development itself. That certainly should raise some questions, as one would expect the quality of symbolic artifacts to be much easier to manage than that of their physical counterparts, if for no other reason than the former has to check symbolic outcomes against symbolic specifications while the latter must also overcome the contingencies of non symbolic artifacts.

Walking Quality Hall (E. Erwitt)

Thanks to agile approaches, lessons from manufacturing are progressively learned, with lean and just-in-time principles making tentative inroads into software engineering. Taking advantage of the homogeneity of symbolic development flows, agile methods have forsaken phased processes in favor of iterative ones, making a priority of continuous and value driven deliveries to business users. Instead of predefined sequences of dedicated tasks, products are developed through iterations regrouping definition, building, and acceptance into the same cycles. That puts differentiated documentation and models in back seats, and may also introduce a new paradigm by putting tests in driving ones.

From Phased to Iterative Tests Management

Traditional (aka phased) processes follow a corrective strategy: tests are performed according to a Last In First Out (LIFO) framework, for components (unit tests), system (integration), and business (acceptance). As a consequence, faults in functional architecture risk being identified only after components' completion, and flaws in organization and business processes may not emerge before the integration of system functionalities. In other words, the faults with the most wide-ranging consequences may be the last to be detected.

Phased and Iterative approaches to tests.

Iterative approaches follow a preemptive strategy: the sooner artifacts are tested, the better. The downside is that, without differentiated and phased objectives, there is a question mark over the kind of specifications against which software products are to be tested; likewise, it is unclear how results are to be managed across iteration cycles, especially if changing requirements are to be taken into account.

Looking for answers, one should first consider how requirements taxonomy can support tests management.

Requirements Taxonomy and Tests Management

Whatever the methods or forms (users’ stories, use cases, functional specifications, etc), requirements are meant to describe what is expected from systems, and as such they have two main purposes: (1) to serve as a reference for architects and engineers in software design, and (2) to serve as a reference for tests and acceptance.

With regard to those purposes, phased development models have been providing clearly defined steps (e.g requirements, analysis, design, implementation) and corresponding responsibilities. But when iterative cycles are applied to progressively refined requirements, those “facilities” are no longer available. Nonetheless, since tests and acceptance are still to be performed, a requirements taxonomy may replace phased steps as a testing framework.

Taxonomies being built on purpose, one supporting iterative tests should consider two criteria, one driven by targeted contents, the other by modus operandi:

With regard to contents, requirements must be classified depending on who’s to decide: business and functional requirements are driven by users’ value and directly contribute to business experience; non functional requirements are driven by technical considerations. Overlapping concerns are usually regrouped as quality of service.

Requirements with regard to Acceptance.

That distinction between business and architecture driven requirements is at the root of portfolio management: projects with specific business stakeholders are best developed with agile development models, whereas architecture driven projects set across business domains may call for phased schemes.

That requirements taxonomy can be directly used to build its testing counterpart. As developed by D. Leffingwell (see selected readings), tests should also be classified with regard to their modus operandi, the distinction being between those that can be performed continuously along development iterations and those that are only relevant once products are set within their technical or business contexts. As it happens, those requirements and tests classifications are congruent:

  • Unit and component tests (Q1) cover technical requirements and can be performed on development artifacts independently of their functionalities.
  • Functional tests (Q2) deal with system functionalities as expressed by users (e.g with stories or use cases), independently of operational or technical considerations.
  • System acceptance tests (Q3) verify that those functionalities, when performed at enterprise level, effectively support business processes.
  • System qualities tests (Q4) verify that those functionalities, when performed at enterprise level, are supported by architecture capabilities.

Tests Matrix for target and MO (adapted from D. Leffingwell).

Besides the specific use of each criterion in deciding who’s to handle tests, and when, combining criteria brings additional answers regarding automation: product acceptance should be performed manually at business level, and preferably by tools at system level; tests performed along development iterations can be fully automated for units and components (white-box), but only partially for functionalities (black-box).
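As an illustration of how the two criteria combine, the matrix could be sketched as a simple classification. The sketch below is only indicative: quadrant labels follow the figure above, but the target, modus operandi, and automation policies are encoded as assumptions rather than taken from any actual tool or framework.

```python
# Indicative sketch only: Leffingwell-style quadrants crossed with an assumed
# automation policy. Names and values are illustrative, not prescriptive.
from dataclasses import dataclass
from enum import Enum

class Target(Enum):
    BUSINESS = "business"        # driven by users' value
    TECHNOLOGY = "technology"    # driven by architecture capabilities

class ModusOperandi(Enum):
    ITERATION = "iteration"      # performed along development cycles
    RELEASE = "release"          # performed on released products

@dataclass(frozen=True)
class Quadrant:
    name: str
    target: Target
    mo: ModusOperandi
    automation: str              # assumed policy, for illustration only

QUADRANTS = [
    Quadrant("Q1: unit & component tests", Target.TECHNOLOGY, ModusOperandi.ITERATION, "full"),
    Quadrant("Q2: functional tests", Target.BUSINESS, ModusOperandi.ITERATION, "partial"),
    Quadrant("Q3: system acceptance tests", Target.BUSINESS, ModusOperandi.RELEASE, "manual"),
    Quadrant("Q4: system qualities tests", Target.TECHNOLOGY, ModusOperandi.RELEASE, "tool-supported"),
]

def quadrant_for(target: Target, mo: ModusOperandi) -> Quadrant:
    """Return the single quadrant matching a target / modus operandi pair."""
    return next(q for q in QUADRANTS if q.target is target and q.mo is mo)
```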

That tests classification can be used to distinguish between phased and iterative tests: the organization of tests targeting products and systems from business (Q3) or technology (Q4) perspectives is clearly not supposed to be affected by development models, phased or iterative, even if resources used during development may be reused. That’s not the case for the organization of the tests targeting functionalities (Q2) or components (Q1).

Iterative Tests

Contrary to tests aiming at products and systems (Q3 and Q4), those performed on development artifacts cannot be set on fixed and well-defined specifications: being managed within iteration cycles they must deal with moving targets.

Unit and component tests (Q1) are white-box operations meant to verify the implementation of functionalities; as a consequence:

  • They can be performed iteratively on software increments.
  • They must take into account technical requirements.
  • They must be aligned on the implementation of tested functionalities.

Iterative (aka development) tests for technical (Q1) and functional (Q2) requirements.

Hence, if unit and component tests are to be performed iteratively, (1) they must be set against features, and (2) functional tests must be properly documented and available for reuse.

Functional tests (Q2) are black-box operations meant to validate system behavior with regard to users’ expectations; as a consequence:

  • They can be performed iteratively on software increments.
  • They don’t have to take into account technical requirements.
  • They must be aligned on business requirements (e.g users’ stories or use cases).

Assuming (see previous post) a set of stories (a, b, c, d) identified by alternative paths built from features (f1…f5), functional tests (Q2) are to be defined and performed for each story, and then reused to test the implementation of associated features (Q1).

Functional tests are set along stories, units and components tests are set along features.
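To make that association concrete, here is a minimal sketch using the hypothetical stories (a, b, c, d) and features (f1…f5) above; inverting the story-to-features mapping shows how functional tests attached to stories could be regrouped per feature and reused for unit and component tests. The paths themselves are invented for the example.

```python
# Minimal sketch, assuming stories are identified by the ordered features
# their solution paths traverse (identifiers are purely illustrative).
STORY_PATHS = {
    "a": ["f1", "f2", "f4"],
    "b": ["f1", "f3"],
    "c": ["f2", "f3", "f5"],
    "d": ["f4", "f5"],
}

def stories_per_feature(story_paths: dict[str, list[str]]) -> dict[str, set[str]]:
    """Invert the mapping: for each feature, the stories whose functional
    tests (Q2) exercise it and can be reused for its unit tests (Q1)."""
    by_feature: dict[str, set[str]] = {}
    for story, features in story_paths.items():
        for feature in features:
            by_feature.setdefault(feature, set()).add(story)
    return by_feature

print(stories_per_feature(STORY_PATHS))
# e.g. {'f1': {'a', 'b'}, 'f2': {'a', 'c'}, 'f4': {'a', 'd'}, ...}
```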

At that point two questions must be answered:

  • Given that stories can be changed, expanded or refined along development iterations, how to manage the association between requirements and functional tests.
  • Given that backlogs can be rearranged along development cycles according to changing priorities, how to update tests, manage traceability, and prevent regression.

With model-driven approaches no longer available, one should consider a mirror alternative, namely test-driven development.

Test Driven Development

Test driven development can be seen as a mirror image of model driven development, a somewhat logical consequence considering the limited role of models in agile approaches.

The core of agile principles is to put the definition, building and acceptance of software products under shared ownership, direct collaboration, and collective responsibility:

  • Shared ownership: a project team groups users and developers and its first objective is to consolidate their respective concerns.
  • Direct collaboration: decisions are taken by team members, without any organizational mediation or external interference.
  • Collective responsibility: decisions about stories, priorities and refinements are negotiated between team members from both sides of the business/system (or users/developers) divide.

Assuming those principles are effectively put to work, there seems to be little room for organized and persistent documentation, as users’ stories are meant to be developed, and products released, in continuity, and changes introduced as new stories.

With such lean and just-in-time processes, documentation, if any, is by nature transient, falling short as a support for test plans and results, even when problems and corrections are formulated as stories and managed through backlogs. In such circumstances, with neither specifications nor models available as development handrails, could tests themselves provide the support ?

Given the ephemeral nature of users’ stories, functional tests should take the lead.

To begin with, users’ stories have to be reconsidered. The distinction between functional tests on one hand, unit and component tests on the other hand, reflects the divide between business and technical concerns. While those concerns may be mixed in users’ stories, they are progressively set apart along iteration cycles. It means that users’ stories are, by nature, transitory, and as a consequence cannot be used to support tests management.

The case for features is different. While they cannot be fully defined up-front, features are not transient: being shared by different stories and bound to system functionalities, they are supposed to provide some continuity. Likewise, notwithstanding their changing contents, users’ stories should be soundly identified by solution paths across the problem space.

Paths and Features can be identified consistently along iteration cycles.

That can provide a stable framework supporting the management of development tests:

  • Unit tests are specified from crosses between solution paths (described by stories or scenarii) and features.
  • Functional tests are defined by solution paths and built from unit tests associated to the corresponding features.
  • Component tests are defined by features and built by the consolidation of unit tests defined for each targeted feature according to technical constraints.

The margins of that matrix support the continuous and consistent identification of functional and component tests, whose contents can be extended or updated through changes made to unit tests.
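The matrix and its margins could be sketched as follows; test identifiers are purely hypothetical, the only point being to show functional tests as row margins (one per solution path) and component tests as column margins (one per feature), both built from the unit tests sitting at the crosses.

```python
# Hypothetical sketch of the path x feature test matrix: unit tests at the
# crosses, functional tests as row margins, component tests as column margins.
unit_tests = {
    # (solution path, feature) -> unit test identifiers
    ("a", "f1"): ["ut_a_f1_nominal"],
    ("a", "f2"): ["ut_a_f2_nominal", "ut_a_f2_error"],
    ("b", "f1"): ["ut_b_f1_alt"],
    ("b", "f3"): ["ut_b_f3_nominal"],
}

def functional_test(path: str) -> list[str]:
    """Row margin: all unit tests along a given solution path (story)."""
    return [t for (p, _), tests in unit_tests.items() if p == path for t in tests]

def component_test(feature: str) -> list[str]:
    """Column margin: all unit tests attached to a given feature."""
    return [t for (_, f), tests in unit_tests.items() if f == feature for t in tests]

print(functional_test("a"))   # ['ut_a_f1_nominal', 'ut_a_f2_nominal', 'ut_a_f2_error']
print(component_test("f1"))   # ['ut_a_f1_nominal', 'ut_b_f1_alt']
```

With such a layout, a change made to a unit test propagates by construction to the functional and component tests built on the corresponding margins.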

One step further, tests can even be used to drive iteration cycles: once features and solution paths are soundly identified, there is no need to swell backlogs with detailed stories whose shelf life will be limited. Instead, development processes would get leaner if extensions and refinements could be directly expressed as unit tests.

System Quality and Acceptance Tests

Contrary to development tests, which are applied iteratively to programs, system tests are applied to released products and must take into account requirements that cannot be directly or uniquely attached to users’ stories, either because they cannot be expressed from a business perspective or because they are shared concerns best described as features. Tests for those requirements will be consolidated with development ones into system quality and acceptance tests:

  • System Quality Tests deal with performances and resources from the system management perspective. As such they will combine component and functional tests in operational configurations without taking into account their business contents.
  • System Acceptance Tests deal with the quality of service from the business process perspective. As such they will perform functional tests in operational configurations taking into account business contents and users’ experience.

Development Tests are to be consolidated into Product and System Acceptance Tests.
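By way of illustration only, that consolidation could be sketched as two suites assembled from development tests; the fields and names below are assumptions, not part of any actual framework.

```python
# Illustrative sketch: system-level suites assembled from development tests.
def system_quality_suite(component_tests: list[str], functional_tests: list[str]) -> dict:
    """Q4: performances and resources checked in an operational configuration,
    without regard to business contents."""
    return {
        "configuration": "operational",
        "business_contents": False,
        "tests": component_tests + functional_tests,
    }

def system_acceptance_suite(functional_tests: list[str], business_data: dict) -> dict:
    """Q3: functional tests replayed against actual business contents
    and users' experience."""
    return {
        "configuration": "operational",
        "business_contents": True,
        "data": business_data,
        "tests": functional_tests,
    }
```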

Requirements set too early and quality checks performed too late are at the root of phased processes’ predicaments, and that can be fixed with a two-pronged policy: a preemptive policy based upon a requirements taxonomy organizing problem spaces according to concerns (business value, system functionalities, components designs, platforms configuration); a corrective policy driven by the exploration of solution paths, with developments and releases driven by quality concerns.

Tests & Framework

Insofar as large and complex enterprise architectures are concerned, it’s safe to assume that different development models (agile or phased) and test policies (unit, system, acceptance, …) will have to cohabit, and that would not be possible without an architecture framework:

  • Development or unit tests are defined at platform level and applied to software components.
  • Integration or system tests are defined at system level and built from tested components.
  • Acceptance tests are defined at enterprise level and built from tested functionalities.

Tests should be aligned on architecture layers.
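For illustration purposes, the alignment listed above could be captured by a simple mapping between test policies and the architecture layers at which they are defined; the identifiers are assumptions, not a prescribed schema.

```python
# Assumed mapping between test policies and architecture layers.
TEST_LAYERS = {
    "unit":        {"layer": "platform",   "built_from": "software components"},
    "integration": {"layer": "system",     "built_from": "tested components"},
    "acceptance":  {"layer": "enterprise", "built_from": "tested functionalities"},
}

def layer_of(test_kind: str) -> str:
    """Return the architecture layer at which a kind of test is defined."""
    return TEST_LAYERS[test_kind]["layer"]
```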

On a broader perspective such a framework is to provide the foundation of enterprise architecture workflows.
