Modeling Symbolic Representations

System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one that best fits business concerns.

Modeling is synchronic: contexts must be mapped to representations (Velazquez, “Las Meninas”).

Caminao’s blog (see Topics Guide) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.

This blog is meant to be a work in progress, with the basic concepts left open to suggestions or even refutation.

All examples are taken from ancient civilizations in order to put the focus on generic problems of symbolic architectures, disregarding technologies.

Symbolic representation: a primer

Original illustrations by Albert (http://www.albertdessinateur.com/) allow for a concrete understanding of requirements, avoiding the biases associated with contrived textual descriptions.

System vs Enterprise Transformation

Preamble

Compared to their brick-and-mortar counterparts, enterprise architectures come with two critical extensions: one for their ability to change, the other for the intertwining of material and symbolic components.

Problems & Solution Spaces (Inci Eviner)

On that account the standard systems modeling paradigm is to fall short when enterprise changes are to be carried out in digital environments.

Problems & Solutions

Whatever the target (from concrete edifices to abstract polities), models come first in the architect’s toolbox. Applied to enterprise architectures, models have to fathom different kinds of elements: physical (hardware), logical (software), human (organization), or conceptual (business).

From that point, what characterizes enterprise architectures is the mingling of physical and symbolic components, and their intrinsic evolutionary nature; problems and solutions have to be refined accordingly.

With regard to time and the ability to change, problems and solutions are defined by specific and changing contexts on one hand, shared and stable capabilities on the other hand. As for symbolic components, a level of indirection should be introduced between enterprise and physical spaces:

  • At enterprise level, problems are defined by business environment and objectives (aka business model), and solved by organization and activities, to be translated into processes.
  • At system level, problems are defined by process requirements, and solved by objects representation and systems functions.
  • At platform level, problems are defined by functional and operational requirements, and solved by applications design and configurations.

Such layered and crossed spaces are to induce two categories of feedback (see the sketch below):

  • Between problems and solutions spaces, represented respectively by descriptive and prescriptive models.
  • Between layers and corresponding stakeholders, according to contexts, concerns, and time-frames.
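
A minimal sketch of those crossed spaces, assuming nothing beyond the lists above (the data structure and names are illustrative, not a prescribed notation):

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """One of the three problem/solution spaces."""
    name: str
    problems: list     # descriptive side (requirements)
    solutions: list    # prescriptive side (commitments)

enterprise = Layer("enterprise", ["business environment", "objectives"],
                   ["organization", "activities"])
system = Layer("system", ["process requirements"],
               ["objects representation", "systems functions"])
platform = Layer("platform", ["functional and operational requirements"],
                 ["applications design", "configurations"])

def feedback(upper, lower):
    """The two categories of feedback: within a layer, between its
    descriptive and prescriptive models; across layers, as the solutions
    of one layer define the problems of the next."""
    return (f"{upper.name}: problems vs solutions",
            f"{upper.name} solutions feed {lower.name} problems")

for pair in ((enterprise, system), (system, platform)):
    print(*feedback(*pair), sep="\n")
```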

Whereas facilitating that two-pronged approach is to be a primary objective of enterprise architects, the standard modeling paradigm (epitomized by languages like UML or SysML) is floundering up and down: up, as it overlooks environments and organizational concerns; down, as it is overloaded with software concerns.

How to Sort Means (Systems) from Ends (Business)

Extending the system architecture paradigm to the enterprise is the cornerstone of enterprise architecture, as it provides a principled and integrated governance framework:

  • Business strategic planning: integration of intensional representations (organization and systems) with extensional ones (business and physical environments).
  • System architecture: integration of portfolio management with projects planning and development, combining model-based and agile solutions.
  • Business intelligence: integration of strategic and operational decision-making.

Bringing representations of environments, organization, and systems under a common conceptual roof is critical because planning and managing changes constitute the alpha and omega of enterprise architecture; and changes in diversified and complex organizations cannot be managed without maps.

The Matter of Change

Compared to systems architectures, change is an intrinsic aspect of enterprise architectures; hence the need for a modeling paradigm to ensure a seamless integration of blueprints and evolutionary processes.

Taking an example from urbanism, the objective would be to characterize the changes with regard to scope and dependencies across maps and territories. On that account, the primary distinction should be between changes confined to either territories or maps, and changes affecting both.

Confined changes are meant to occur under the architectural floor, i.e. without affecting the mapping of territories:

  • Territories: local changes at enterprise (e.g. organization) or systems (e.g. operations) levels not requiring updates of architecture models.
  • Maps: local changes of domains or activities not affecting enterprise or systems elements at architecture level (e.g. new features or business rules).

Conversely, changes above the architectural floor, whether originating in territories or maps, are meant to modify the mapping relationship:

  • Changes in business domains (maps) induced by changes in enterprise environments (e.g. regulations).
  • Changes in operations (systems) induced by changes in activities (e.g. new channels).

That double helix of organizational, physical, and software components on one hand, models and symbolic artifacts on the other hand, is the key to agile architectures and digital transformation.

How to Set Off EA Decision-making

Assuming a clear presentation of enterprise architecture, persuading decision-makers to engage with EA implies a shift in the course of the presentation:

  1. Convincing arguments are logical, not visual.
  2. Expectations are to be met with commitments.
  3. Decisions are best triggered by present and concrete opportunities.
Carpe Diem (Bruno Barbey)

Decision-making is driven by rules

Boxes and arrows may help to understand issues and support decision-making processes, but decisions are more easily triggered by rules than visuals. As far as enterprise architecture is concerned, the rule of the game is the necessary collaboration across enterprise business and systems units.

Expectations must come with commitments

Visual presentations help with reasoning and expectations, but decisions are made when problems (expectations) and solutions (commitments) spaces can be aligned. That should be a primary argument for enterprise architecture:

  • At enterprise level, problems are defined by business environment and objectives (aka business model), and solved by organization and activities, to be translated into processes.
  • At system level, problems are defined by process requirements, and solved by objects representation and systems functions.
  • At platform level, problems are defined by functional and operational requirements, and solved by applications design and configurations.

decisions are best triggered by present and concrete opportunities

Enterprise architecture is a continuous endeavor that can only be carried out through collaboration. As a corollary, decisions are to be made on a piecemeal basis on business driven projects involving architectural decisions, e.g. when business logic has to be factored out and business processes cut to an architecture backbone.

Assuming a common understanding of data models (conceptual, logical, physical), such business process backbones could support pragmatic, gradual, and integrated approaches to enterprise architecture.

How to present EA to Decision-makers

Preamble

To be brief and to the point, a presentation of enterprise architecture to decision-makers is to follow three principles:

  1. Architecture is a visual issue.
  2. Schemas should contain between 8 and 12 unrelated concepts.
  3. Semantics should be unambiguous and specific to the issue.
(Bridget Riley)

To that end, the focus should be on architecture as both an activity and its outcome. Applied to the enterprise, architecture has to deal with more than actual edifices and contexts, encompassing business environments and objectives on one side, organization, processes, and assets on the other side. Moreover, enterprise architecture is a continuous and dynamic endeavor, because change and exchanges are part and parcel of the enterprise life-cycle:

  • At enterprise level, architectures are needed to map organizations and systems to changing environments.
  • At system level, architectures are needed to map changing organizations and processes to supporting systems.

Enterprise architecture can therefore be defined in terms of maps and territories (outcome), and the management of changes across and between them (activity).

Mapping Territories

Compared to its bricks-and-mortar counterpart, enterprise architecture is a work in progress to be carried out all along the enterprise life-cycle; hence the need for maps to figure out their environments, organization, assets, and processes:

  • At enterprise level maps are meant to describe business environments on one hand, concepts, objectives, organization and processes on the other hand.
  • At operational level maps are to describe physical and digital environments on one hand, interfaces and platforms on the other hand.

Then, a third level is introduced in between to describe systems and ensure transparency and consistency with regard to territories.

Enterprise Maps & Territories

For all intents and purposes that ternary view of enterprise architectures holds true independently of labels (layers, levels, tiers, etc.) or perspectives (business, data, applications, technologies, etc.). That genericity appears clearly when maps are aligned with the traditional models hierarchy: conceptual, logical and functional, and physical.

managing changes

Taking a leaf from Stafford Beer’s view of enterprises as viable systems, their sustainability depends on a continuous and timely adaptation to their environment; that cannot be achieved without a dynamic alignment of maps and territories; hence the need for enterprise architectures to provide for:

  • Continuous and consistent attachments between identified elements in maps and territories.
  • Transparency and traceability of changes across maps and territories.

Regarding continuity, the ill-famed Waterfall epitomizes the detached conception of systems engineering and its implicit assumption that requirements are enough to ensure the alignment of systems with their business environments. To be sure, the plights of Waterfall have already marked a significant rift in the detached paradigm, a rift partially patched by agile development models. But while agile, and more generally iterative, schemes are at their best for business driven applications with well defined scope and ownership, they fall short for architecture oriented ones with overlapping footprints and distributed stakeholders; that’s when maps are required.

Regarding transparency, enterprise architecture misconceptions come from mirroring its physical cousin, using geometrical patterns without being specific about their constitutive elements, and consequently the dynamics of interactions.

As it happens, both flaws can be mended by generalizing to systems the well-accepted view model of software architecture:

  • Logical view: design of software artifacts.
  • Process view: captures the concurrency and synchronization aspects.
  • Physical view: describes the mapping(s) of software artifacts onto hardware.
  • Development view: describes the static organization of software artifacts in development environments.

On that basis, changes in systems architectures would be managed in four workshops, two focused on territories (enterprise and operations) and two on maps (domains and applications).

Mapping changes to Enterprise Architectures

Defining engineering workflows in terms of maps and territories is to provide the foundations of model-based systems engineering.

Declarative Frameworks & Emerging Architectures

Preamble

Ingrained habits die hard, especially mental ones as they are not weighted down by a mortal envelope. Fear is arguably a primary factor of persistence, if only because being able to repeat something proves that nothing bad has happened before.

Live, Die, Repeat (Philippe de Champaigne)

Procedures epitomize that human leaning as ordered sequences of predefined activities give confidence in proportion to generality. Compounding the deterministic delusion, procedures seem to suspend time, arguably a primary factor of human anxiety.

Procedures are Dead-ends

From hourglasses to T.S. Eliot’s handful, sand materializes the human double bind with time, between the will of measurement and the fear of ephemerality.

Procedures seem to provide a way out of the dilemma by replacing time with prefabricated frames designed to ensure that things can only happen when required. But with extensive and ubiquitous digital technologies dissolving traditional boundaries, enterprises become directly exposed to competitive environments in continuous mutation; that makes deterministic schemes out of kilter:

  • There is no reason to assume the permanence of initial time-frames for the duration of planned procedures.
  • The blending of organizations with supporting systems means that architectural changes cannot be carried out top-down lest the whole be paralyzed by the management overheads induced by cross expectations and commitments.
  • Unfettered digital exchanges between enterprises and their environment, combined with ubiquitous smart bots in business processes, are to require a fine grained management of changes across artefacts.

These shifts call for a complete upturn of paradigm: event-driven instead of scheduled, bottom-up instead of top-down, model-based instead of activity-driven.

Declarative Frameworks: Non-Deterministic, Model-Based, Agile

The procedural/declarative distinction has its origin in the imperative/declarative programming one, the principle being to specify necessary and sufficient conditions instead of defining the sequence of operations, letting programs pick the best options depending on circumstances.
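
The distinction can be illustrated with a toy example (steps and preconditions are hypothetical; only the principle is to be retained): the imperative version hard-wires the sequence, the declarative one only states conditions and lets a generic engine pick the enabled steps.

```python
# Imperative: the sequence of operations is hard-wired by the coder.
def process_imperative(state):
    for outcome in ("validated", "reserved", "billed", "shipped"):
        state.add(outcome)
    return state

# Declarative: each step only states its necessary and sufficient
# conditions; a generic engine fires whichever step is enabled.
RULES = [
    ("validate", "received",  "validated"),
    ("reserve",  "validated", "reserved"),
    ("bill",     "reserved",  "billed"),
    ("ship",     "billed",    "shipped"),
]

def process_declarative(state):
    changed = True
    while changed:
        changed = False
        for _, precondition, outcome in RULES:
            if precondition in state and outcome not in state:
                state.add(outcome)   # the engine, not the coder, picks the step
                changed = True
    return state

print(process_declarative({"received"}))
```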

Applying the principle to enterprise architecture can help to get out of a basic conundrum, namely how to manage changes across supporting systems without putting a halt to enterprise activities.

Obviously, the preferred option is to circumscribe changes to well identified business needs, and carry on with the agile development model. But that’s not always possible as cross dependencies (business, organizational, or technical) may induce phasing constraints between engineering tasks.

As notoriously illustrated by Waterfall, procedural (if not bureaucratic) schemes have long been seen as the only way to deal with phasing constraints; that’s not a necessity: with constraints and conditions defined on artifacts, developments can be governed by their status instead of having to be hard-wired into procedures. That’s precisely what model-based development is meant to do.
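
A sketch of that principle under the same caveat (artifacts, statuses, and rules are hypothetical): tasks are enabled by the status of artifacts instead of a predefined sequence.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    """An engineering artifact whose status, not a procedure, governs work."""
    name: str
    status: str = "draft"    # e.g. draft -> approved

# Conditions defined on artifacts, not hard-wired into a procedure.
DEVELOPMENT_RULES = {
    "design": lambda a: a["requirements"].status == "approved",
    "code":   lambda a: a["design"].status == "approved",
    "deploy": lambda a: a["code"].status == "approved",
}

def eligible_tasks(artifacts):
    """Tasks enabled by the current status of artifacts."""
    return [task for task, enabled in DEVELOPMENT_RULES.items()
            if enabled(artifacts)]

arts = {name: Artifact(name) for name in ("requirements", "design", "code")}
arts["requirements"].status = "approved"
print(eligible_tasks(arts))   # ['design']: status opens the next move
```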

And since iterative development models are by nature declarative, agile and model-based development schemes may be natural bedfellows.

Epigenetics & Emerging architectures

Given their immersion in digital environments and the primacy of business intelligence, enterprises can be seen as living organisms using information to keep an edge in competitive environments. On that account homeostasis becomes a critical factor, to be supported by osmosis, architecture versatility and plasticity, and traditional strategic planning.

Set on a broader perspective, the merging of systems and knowledge architectures on one hand, the pervasive surge of machine learning technologies on the other hand, introduce a new dimension in the exchange of information between enterprises and their environment, making room for emerging architectures.

Using epigenetics as a metaphor of the mechanisms at hand, enterprises would be seen as organisms, systems as organs and cells, and models (including source code) as the genome coded in DNA.

According to classical genetics, phenotypes (actual forms and capabilities of organisms) inherit through the copy of genotypes and changes between generations can only be carried out through changes in genotypes. Applied to systems, it would entail that changes would only happen intentionally, after being designed and programmed into the systems supporting enterprise organization and processes.

Enterprises epigenetics and emerging architectures

The Extended Evolutionary Synthesis considers the impact of non-coded (aka epigenetic) factors on the transmission of the genotype between generations. Applying the same principles to systems would introduce new mechanisms:

  • Enterprise organization and their use of supporting systems could be adjusted to changes in environments prior to changes in coded applications.
  • Enterprise architects could use data mining and deep-learning technologies to understand those changes and assess their impact on strategies.
  • Abstractions would be used to consolidate emerging designs with existing architectures.
  • Models would be transformed accordingly.

While applying the epigenetics metaphor to enterprise mutations has obvious limitations, it nonetheless puts a compelling light on two necessary conditions for emerging structures:

  • Non-deterministic mechanisms governing the way changes are activated.
  • A decrypting mechanism from implicit or latent contents (data from digital environments) to explicit ones (information systems).

The first condition is to be met with agile and model-based engineering, the second one with deep learning.

Semantic Interoperability: Stories & Cases

Preamble

For all intents and purposes, digital transformation has opened the door to syntactic interoperability… and thus raised the issue of the semantic one.

Cooked Semantics (Urs Fisher)

To put the issue in perspective, languages combine four levels of interpretation:

  • Syntax: how terms can be organized.
  • Lexical: meaning of terms independently of syntactic constructs.
  • Semantic: meaning of terms in syntactic constructs.
  • Pragmatic: semantics in context of use.
Languages levels of interpretation

At first, semantic networks (aka conceptual graphs) appear to provide the answer; but that’s assuming flat ontologies (aka thesauri) within which all semantics are defined at the same level. That would go against the objective of bringing the semantics of business domains and systems architectures under a single conceptual roof. The problem and a solution can be expounded taking users’ stories and use cases as examples.

Crossing stories & cases

Besides the difference in perspectives, users’ stories and use cases stand at a methodological crossroads, the former focused on natural language, the latter on modeling. Using ontologies to ensure semantic interoperability is to enhance both traceability and transparency, while making room for their combination if and when called for.

Set at the inception of software engineering processes, users’ stories and use cases mark an inflexion point between business requirements and supporting systems functionalities: where and when are determined (a) the nature of interfaces between business processes and systems components, and (b) how to proceed with development models, iterative or model-based.

Users’ stories are part and parcel of the Agile development model: its backbone, engine, and fuel. But as far as Agile is concerned, users’ stories introduce a dilemma: once told, stories are meant to be directly and iteratively put down in code; documenting them in words would bring back traditional requirements and phased development. Hence the benefits of sorting out and writing up the intrinsic elements of stories, so as to ensure the continuity and consistency of engineering processes, whether directly to code or through the mediation of use cases.

To that end semantic interoperability would have to be achieved for actors, events, and activities.

Actors & Events

Whatever architectures or modeling methodologies, actors and events are sitting on systems’ fences, which calls for semantics common to enterprise organization and business processes on one side of the fence, supporting systems on the other side.

To begin with events, the distinction between external and internal ones is straightforward for use cases, because their purpose is precisely to describe the exchanges between systems and environments. Not so for users’ stories: at their stage the part to be played by supporting systems is still undecided, and so, by consequence, is the distinction between external and internal events.

With regard to actors, and to avoid any ambiguity, a semantic distinction could be maintained between roles, defined by organizations, and actors (in UML parlance), for roles as enacted by agents interacting with systems. While roles and actors are meant to converge with analysis, understandings may initially differ across the fence between users’ stories and use cases, to be reconciled at the end of the day.

Representations should support the semantic distinctions as well as trace their convergence.

That would enable use cases and users’ stories to share overlapping yet consistent semantics for primary actors and external events (see the sketch below):

  • Across stories: actors contributing to different stories affected by the same events.
  • Along processes: use cases set for actors and events defined in stories.
  • Across time-frames: actors and events first introduced by use cases before being refined by “pre-sequel” users’ stories.
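
A sketch of such shared representations (class names and fields are assumptions, not a prescribed schema): roles belong to the organization, actors enact them against systems, and the external or internal status of events may remain undecided at story stage.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Role:
    """Defined by the organization (enterprise side of the fence)."""
    name: str

@dataclass(frozen=True)
class Actor:
    """Role as enacted by an agent interacting with systems (UML parlance)."""
    role: Role
    agent: str                          # person, device, or other system

@dataclass(frozen=True)
class Event:
    name: str
    external: Optional[bool] = None     # None: still undecided at story stage

def reconciled(story_event, case_event):
    """Stories and cases converge when names match and the story's
    undecided boundary is resolved by the use case."""
    return (story_event.name == case_event.name
            and story_event.external in (None, case_event.external))

customer = Role("customer")
clerk = Actor(customer, agent="person")   # systems view of the role
print(reconciled(Event("order placed"), Event("order placed", external=True)))  # True
```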

Such ontology-based representations are to support full iterative as well as parallel developments independently of the type of methods, diagrams or documents used by projects.

activities

Users’ stories and use cases are set in different perspectives, business processes for the former, supporting systems for the latter. As already noted, their scopes overlap for events and actors, which can be defined upfront, providing a double distinction between roles (enterprise view) and actors (systems view), and between external and internal events.

Activities raise more difficulties because they are meant to be defined and refined across the whole of engineering processes:

  • From business operations as described by users to business functions as conceived by stakeholders.
  • From business logic as defined in business processes to its realization as defined in sequence diagrams.
  • From functional requirements (e.g. users’ authentication or authorization) to quality of service.
  • From primitives dealing with integrity constraints to business policies managed through rules engines.

To begin with, if activities have to be consistently defined for both users’ stories and use cases, their footprint should tally with the description of actors and events stipulated above; taking a leaf from Aristotle’s rule of the three units, activity units should therefore (see the sketch below):

  • Be triggered by a single event initiated by a single primary actor.
  • Be located into a single physical space with all resources at hand.
  • Be timed by a single clock controlling accesses to all resources.
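
The rule can be sketched as a simple well-formedness check on candidate activity units (the sample values are illustrative):

```python
def well_formed(events, actors, locations, clocks):
    """Activity unit check: a single triggering event and primary actor,
    a single location with resources at hand, a single clock."""
    return all(len(set(xs)) == 1 for xs in (events, actors, locations, clocks))

# A check-out activity: one event, one actor, one place, one clock.
print(well_formed(["car returned"], ["renter"], ["agency desk"], ["desk clock"]))     # True
# Two triggering events would break the unit and call for a split:
print(well_formed(["car returned", "payment due"], ["renter"], ["desk"], ["clock"]))  # False
```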

On that basis, the refinement of descriptions could go according to the nature of requirements: business (users’ stories), or functional and quality of service (use cases).

Activities (execution units) should tally with roles, events, and location.
Use cases wrap computation independent activities into transactions.

As far as ontologies are concerned, the objective is to ensure the continuity and consistency of representations independently of modeling tools and methodologies. For activities appearing in users’ stories and use cases, that would require:

  • The description of activities in relation with their business background, their execution in processes, and the corresponding functions already supported by systems.
  • The progressive refinement of roles (users, devices, other systems), location, and resources (objects or surrogates).
  • A unified definition of alternatives in stories (branches) and use cases (extension points).

The last point is of particular importance as it will determine how business and functional rules are to be defined and control implemented.

Knitting semantics: symbolic representations

The scope and complexity of semantic interoperability can be illustrated a contrario by a simple activity (check-out) described at different levels with different methods (process, use case, user story), possibly by different people at different times.

The Check-out activity is first introduced at business level (process), next a specific application is developed with agile (user story), and then extended for variants according to channels (use case).

Semantic interoperability between projects, domains, and methods.

Assuming unfettered naming (otherwise semantic interoperability would be a windfall), three parties can be mentioned under various monikers for renters, drivers, and customers.

In a flat semantic context renter could be defined as a subtype of customer, itself a subtype of party. But that option would contradict the neutrality objective, as there is no reason to assume a modeling consensus across domains, methods, and time. Instead, definitions can be set at different levels (see the sketch below):

  • The ontological kernel defines parties and actors, as roles associated to agents (organization level).
  • Enterprises define customers as parties (business model).
  • Business units can define renters in reference to customers (business process) or directly as a subtype of role (user story).
  • The distinction between renters and drivers can be introduced upfront or with use cases’ actors.
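
A sketch of those layered definitions (the contexts and link sets are illustrative assumptions): subtyping links are declared per context instead of in one flat hierarchy, so no modeling consensus is assumed across domains, methods, or time.

```python
KERNEL = {("party", "role"), ("actor", "role")}          # ontological kernel
BUSINESS_MODEL = {("customer", "party")}                 # enterprise level
BUSINESS_PROCESS = {("renter", "customer")}              # business unit, option 1
USER_STORY = {("renter", "role")}                        # business unit, option 2
USE_CASES = {("driver", "renter")}                       # introduced with actors

def is_a(term, target, *contexts):
    """Transitive subtype test over the union of the selected contexts."""
    links = set().union(*contexts)
    frontier = {term}
    while frontier:
        if target in frontier:
            return True
        frontier = {sup for sub, sup in links if sub in frontier}
    return False

# 'renter' resolves to 'role' through either path, no flat thesaurus needed:
print(is_a("renter", "role", KERNEL, BUSINESS_MODEL, BUSINESS_PROCESS))   # True
print(is_a("renter", "role", KERNEL, USER_STORY))                         # True
print(is_a("driver", "role", KERNEL, USE_CASES, USER_STORY))              # True
```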

That would ensure semantic interoperability across modeling paradigms and business domains, and along time and transformations.

Probing semantics: metonymies and metaphors

Once in-depth foundations are established, and assuming built-in basic logic and lexical operators, semantic interoperability is to be carried out with two basic linguistic contraptions: metonymies and metaphors.

Metonymies and metaphors are linguistic constructs used to substitute a word (or a phrase) by another without altering its meaning, respectively through extensions and intensions, the former standing for the actual set of objects and behaviors, the latter for the set of features that characterize these instances.

Metonymy relies on contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, Washington DC, each term can be used in place of the others.

Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)

Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, with leftovers taken out of the picture.

Metaphor uses similarity to match descriptions

Compared to basic thesaurus operators for synonymy, antonymy, and homonymy, which are set at lexical level, metonymy and metaphor operate at conceptual level, the former using sets of instances (extensions) to probe semantics, the latter using descriptions (intensions).
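
Both operators can be sketched over toy extensions and intensions (the data and the similarity threshold are assumptions):

```python
# Toy extensions (sets of instances) and intensions (sets of features).
EXTENSIONS = {
    "US President": {"resident #44", "resident #45"},
    "White House":  {"resident #44", "resident #45"},
}
INTENSIONS = {
    "use case": {"actor", "triggering event", "execution path"},
    "story":    {"actor", "triggering event", "narrative"},
}

def metonymy(source, target):
    """Substitution grounded in contiguity: overlapping extensions."""
    return bool(EXTENSIONS[source] & EXTENSIONS[target])

def metaphor(source, target, threshold=0.5):
    """Substitution grounded in similarity: a large enough shared subset
    of features, leftovers taken out of the picture."""
    shared = INTENSIONS[source] & INTENSIONS[target]
    return len(shared) / len(INTENSIONS[source] | INTENSIONS[target]) >= threshold

print(metonymy("US President", "White House"))   # True
print(metaphor("use case", "story"))             # True (2 shared / 4 in total)
```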

Applied to users’ stories and use cases:

  • Metonymies: terms would be probed with regard to actual sets of objects, actors, events, and execution paths (data from operations) or mined from digital environments.
  • Metaphors: terms for stories, cases, actors, events, and activities would be probed with regard to the structure and behavior of associated descriptions (intensions).

Compared to the shallow one set at thesaurus level for terms, deep semantic interoperability encompasses all ontological dimensions, from actual instances to categories, aspects, and concepts. As such it can take full advantage of digital transformation and deep learning technologies.

EA Capacity & Maturity

Preamble

Appraising enterprise capability and maturity means navigating between the fuzzy depths of business models and the marked features of supporting systems.

Keeping in touch with environment (Urs Fischer)

Business capabilities are by nature a fleeting lot to assess, considering the innate diversity and volatility of circumstances, and the fact that successes are the exception more than the rule. By contrast, the assessment of systems capabilities is much easier, as they can be defined in architectural terms.

Digital transformation may help to solve the dilemma by dissolving it: given the merging of systems and organization, the key success factor for enterprises is their capacity to assess changes and opportunities and adjust their processes and architectures accordingly. On that account, performances are to depend on the dynamic alignment of enterprises representations (maps) with their business and technological environments (territories).

enterprise architecture & environments

Based on a broadly accepted architecture paradigm epitomised by the Zachman framework, and figured by the Pagoda blueprint, enterprise environments and architectures are to be defined along three levels (aka tiers or layers):

Enterprises must align their maps and territories
  • Data level, for digital environments, operations, and platforms; described by physical and analytical models.
  • Information level, for systems and engineering; described by logical and functional models.
  • Knowledge level, for business objectives and organization; described by conceptual and process models.

That framework can be used to assess enterprises’ capacity to change independently of business specificities.

DEALING WITH CHANGES

Whereas absolute measurements are tied to valuation contexts, relative ones can be ecumenical, hence the benefit of targeting the capacity to change instead of trying to measure capacity by itself.

As far as enterprise architectures are concerned, changes can originate from business or technological environments, the former at process level, the latter at application level.

To begin with, as much change as possible should be dealt with at application level through organizational, functional, or operational adjustments, without affecting architectural assets. That is to be achieved through architectures’ versatility and plasticity (aka agility).

When architectural changes are needed, their footprint can be layered in terms of Model Driven Architecture (MDA), i.e. computation independent (CIM), platform independent (PIM), and platform specific (PSM) models.

Change in Enterprise Architectures

Ideally, changes should spread top-down from computation independent models to platform specific ones, with or without affecting platform independent ones. On that account, the footprint of changes rooted in technical environment should be circumscribed to platforms adaptations and charted by platform specific models.

By contrast, changes rooted in business environment could induce changes in any or all architecture layers (see the sketch below):

  • Platform specific (PSM), e.g. when business logic is implemented by rules engines.
  • Platform independent (PIM), e.g. new business functions.
  • Computation independent (CIM), e.g. new business processes.
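
A rule-of-thumb sketch of those footprints (the sample changes are illustrative assumptions):

```python
# Rules of thumb mapping kinds of changes to the MDA layers their
# footprint may reach (illustrative only).
FOOTPRINTS = {
    "platform adaptation":   ["PSM"],                  # technical origin
    "business logic update": ["PSM"],                  # e.g. rules engines
    "new business function": ["PIM", "PSM"],
    "new business process":  ["CIM", "PIM", "PSM"],
}

def footprint(change):
    """Layers affected by a kind of change."""
    return FOOTPRINTS.get(change, [])

print(footprint("new business process"))   # ['CIM', 'PIM', 'PSM']
```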

That model-based approach is to be used to define enterprises changes in terms of entropy.

Entropy & capacity to change

Change is a matter of time, especially for business, and a delicate balance is to be achieved between assessments (which improve when given time until they become redundant), and commitments (which risk missing opportunities if kept waiting for too long).

The issues can be expressed by applying the OODA (Observation, Orientation, Decision, Action) loop to system engineering (see the sketch below):

  • Changes in business and technology environments are observed at digital (e.g. data mining) or conceptual level (e.g. business intelligence) (a).
  • Assessment deals with the reliability of observations as well as their meaning with regard to enterprise objectives, organization, and systems (b).
  • Policies are updated and decisions made regarding the adjustment of objectives, resources, organization, or assets (c).
  • Decisions are implemented as technical and business commitments (d).
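
A minimal turn of the loop, following steps (a) to (d) (functions and thresholds are placeholders, not a prescribed implementation):

```python
import random   # stands in for actual observation feeds

def observe():
    """(a) Changes observed at digital or conceptual level."""
    return {"signal": random.random(), "source": "data mining"}

def orient(observation):
    """(b) Assess reliability and meaning for objectives and systems."""
    return {"reliable": observation["signal"] > 0.2, **observation}

def decide(assessment):
    """(c) Update policies: adjust objectives, resources, or assets."""
    return "adjust assets" if assessment["reliable"] else "stand by"

def act(decision):
    """(d) Implement the decision as technical and business commitments."""
    print(f"commitment: {decision}")

act(decide(orient(observe())))   # one turn; enterprises run it continuously
```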

On that basis, the capacity to change is to depend on:

  • Osmosis: quality (accuracy, reliability,…), delay, and automaticity of observations with regard to changes in environments (data mining).
  • Operational traceability across decisions, actions and observations (process mining, verification).
  • Alignment of business and digital environments (validation).
  • Consistency of architecture models (CIMs, PIMs, PSMs).
  • Value chains and lean engineering: business logic (CIMs) directly embedded in software designs (PSMs).
Capacity to Change

With the benefits of digital transformation, these dimensions should be defined in terms of information processing.

osmosis & blind spots

The generalization of digital exchanges between enterprises and their environment brings back the concept of entropy, defined by cybernetics as the quantum of energy within a system that cannot be put to use. Applied to enterprises, entropy can be understood as a blind spot on environment data, arguably a critical hindrance to their capacity to move and adjust. The primary objective should therefore be to minimize that blind spot, i.e. to maximize the outcome of data processing.

As digital environments bring about a level data playground, enterprises have to make better sense of it to get a competitive edge; hence the importance of a distinction between data and information, the former obtained from the environment, the latter obtained through the processing of the former. On that basis, the capacity to change, defined as the opposite of entropy, is to be determined by the way data is processed into information.

Digital osmosis, i.e. the exchange of digital data between enterprises’ operations and environments, is clearly a primary factor: inbound streams can be mined so as to provide comprehensive and timely snapshots; outbound streams can be woven into business processes, enhancing the capacity to translate decisions into action.

Digital osmosis could also bolster lean engineering with the direct integration of (digital) business logic into software (e.g using rules engines), reinforcing the shortcuts between observations and decisions whenever orientation can be avoided. That would also enhance traceability between platform specific (PSMs) and operational models, as well as the monitoring of actions and the analysis of feedbacks (process mining).

Osmosis could be easily (if not accurately) estimated with the ratio between digitized flows and the whole of data flows, with measurements weighted by delays between observations and data processing.
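
A possible estimate along those lines (the delay-weighting scheme, volumes, and delays are assumptions; the text only requires measurements to be weighted by delays):

```python
def osmosis(flows):
    """Ratio of digitized flows to all data flows, each flow weighted
    down by the delay between observation and processing (in days).
    Flows are dicts: {'digitized': bool, 'volume': float, 'delay': float}."""
    weight = lambda f: f["volume"] / (1.0 + f["delay"])
    digitized = sum(weight(f) for f in flows if f["digitized"])
    total = sum(weight(f) for f in flows)
    return digitized / total if total else 0.0

flows = [
    {"digitized": True,  "volume": 80.0, "delay": 0.0},   # streamed inbound data
    {"digitized": True,  "volume": 15.0, "delay": 2.0},   # batched feeds
    {"digitized": False, "volume": 25.0, "delay": 10.0},  # paper or manual flows
]
print(round(osmosis(flows), 2))   # 0.97
```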

A taxonomy of changes

Digital and business environments are not to be confused, and there is no reason to assume that their respective changes tally. As it happens, digital transformation may reinstall organizations at the nexus of changes by providing a powerful leverage on systems; and if entropy is considered, that is to be achieved through symbolic representations, aka models.

While names may vary, a distinction is generally made between changes according to their horizon, shorter for operational or tactical ones, longer for strategic ones. As so often with quantitative classifications, that understanding has been of limited use given the diversity of time-frames across industries. The digital transformation opens the door to a qualitative approach, defining changes according to the nature of their footprint:

  • From the business perspective, it should restate the primacy of organization for the harnessing of IT benefits.
  • From the architecture perspective, it would rank assets according to “digital modality”: symbolic (information, knowledge), tangible (e.g platforms), or a combination of both (functional architecture).
Changes can be defined according to the digital modality of their footprint

Taking the strategic perspective, changes in technical or economic factors are to affect assets and processes for a large and often undetermined number of production cycles; they must consequently be set in time-frames extending beyond managed horizons (actual chronologies depend on industry specificities). Despite regular updates, supporting models and hypotheses may lose their relevance during the intervals, with the resulting entropy hampering the assessment of opportunities and policies. Such discrepancies can be circumscribed if planned organization and computation independent models (CIMs) are systematically checked against observations.

Compared to strategic ones, functional changes can be aligned with a limited number of production cycles, which irons out most of the discrepancies between maps and territories associated with strategic planning. Instead, entropy could arise from a lack of transparency and traceability between the models used to map organization and processes (CIMs) to functional (PIMs) and technical (PSMs) architectures.

Finally, operational changes can be carried out at process level, without affecting architectures. In that case entropy could be the result of a misalignment of commitments and observations granularity.

On that basis, planning and managing changes in digital and business environments should be driven by models.

ENTROPY & maturity

As defined by cybernetics, entropy can be explained by the discrepancies between environment states (aka micro-states) and their representation (macro-states). Applied to enterprises environments and architectures, it would mean territories and operations for the former, maps and policies for the latter.

On that account the maturity of an organization should be assessed with regard to its ability to manage changes within and across environments:

  • Consistency of changes within environments: between territories and operations (digital environments), between maps and policies (business environments).
  • Validity of changes across environments: between maps and territories, and between policies and operations.
Consistency is checked within environments, validity is checked across environments

Maturity levels could then be set according to the scope of managed entropy:

  1. Digital osmosis: all exchanges with environments come with digital counterparts, sorted with regard to operational data, data attached to managed information, and environment data detached from managed information.
  2. Digital value chains: digital integration of business and engineering processes ensuring transparency and traceability along value chains.
  3. Organizational pivot: value chains can be redefined as to make the best of information assets.
  4. Knowledge based architectures: enterprise layers (platforms, systems, organization) are aligned with data, information, and knowledge layers, leveraging the benefits of machine learning across enterprise organization and systems.

As it’s safe to assume that scaling maturity levels across enterprise architectures can only be carried out progressively, progress has to be backed up by a comprehensive and ecumenical repository of physical and symbolic resources and assets, and be supported by workflows combining iterative (continuous and business driven) and model-based (phased, architecture driven) engineering processes.

Knowledge Driven Test Cases

Preamble

The way tests are designed and executed is being doubly affected by development methods and AI technologies. On one hand well-founded approaches (e.g. test-driven development) are often confined to faith-based niches; on the other hand automated schemes and agile methods push many testers out of their comfort zone.

Reality check (Kader Attia)

Resetting the issue within a knowledge-based enterprise architecture would pave the way for sound methods and could open new doors for their users.

outline

Tests can be understood in terms of preventive and predictive purposes, the former with regard to actual products, the latter with regard to their forthcoming use. On that account policies are to distinguish between:

  • Test plans, to be derived from requirements.
  • Test cases, to be collected from environments.
  • Test execution, to be run and monitored in simulated environments.

The objective is to cross these pursuits with knowledge architecture layers.

Test plans are derived from business requirements, either on their own at process level (e.g. as users’ stories or activities), or combined with functional requirements at application level (e.g. as use cases). Both plans describe sequences of actions meant to be performed by organizational entities identified at enterprise level. Circumstances are then specified with regard to quality of service and technical requirements.

Mapping tests to knowledge architecture layers.

Test cases’ backbones are built from business scenarii fleshed out with instances mimicking identified entities from business environment, and hypothetical decisions taken by entitled users. The generation of actual instances and decisions could be automated depending on the thoroughness and consistency of business requirements.

To be actually tested, business scenarii have to be embedded into functional ones, yet the distinction must be maintained between what pertains to business logic and what pertains to the part played by supporting systems.

By contrast, despite being built from functional scenarii, integration and acceptance ones are meant to be blind to business or functional contents, and cases can therefore be generated independently.

Unit and component tests are the building blocks of all test cases, the former rooted in business requirements, the latter in functional ones. As such they can be used for a built-in integration of tests and development.

TDD in the loop

Whatever its merits for phased projects, the development V-model suffers from a structural bias, because flaws rooted in requirements, arguably the most damaging, tend to be diagnosed after designs are encoded. Test-driven development (TDD) takes a somewhat opposite approach, as code specifications are governed by testability. But reversing priorities may also introduce reverse issues: where the V-model’s verification and validation come too late and too wide, TDD’s may come hasty and blinkered, with local issues masking global ones. Applying the OODA (Observation, Orientation, Decision, Action) loop to test cases offers a way out of the dilemma (see the sketch below):

  • Observation (West): test and assessment for component, integration, and acceptance test cases.
  • Orientation (North): assessment in the broader context of requirements space (business, functional, Quality of Service), or in the local context of application (East).
  • Decision (East): confirm or adjust the development paths with regard to functional scenarii, development backlog, or integration constraints.
  • Action (South): develop code at unit, component, or process levels.
From V to O: How to iterate across layers
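
One turn of the loop sketched for a single test case (statuses and routing are assumptions, not a prescribed scheme):

```python
def ooda_test_iteration(test_case):
    """One turn of the loop for a test case (hypothetical routing)."""
    outcome = test_case["run"]()                       # Observation (West)
    scope = ("global" if test_case["layer"] in ("business", "functional")
             else "local")                             # Orientation (North)
    if outcome == "pass":                              # Decision (East)
        return "confirm development path"
    return f"adjust backlog ({scope})"                 # Action (South) follows

unit_case = {"layer": "unit", "run": lambda: "fail"}
print(ooda_test_iteration(unit_case))   # adjust backlog (local)
```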

As each station is meant to deal with business, functional, and operational test cases, the challenge is to ensure a seamless integration and reuse across iterations and layers.

managing test cases

Whatever the method, test plans are meant to mirror the scope and structure of requirements. For architecture oriented projects, tests should be directly aligned with the targeted capabilities of architecture layers:

For architecture oriented projects test plans should be set with regard to capabilities.

For business driven projects, test plans should be set along business scenarii, with development units and associated test cases defined with regard to activities. When use cases, which cover the subset of activities supported by systems, are introduced upfront for both business and functional requirements, test plans should keep the distinction between business and functional requirements.

Stories (bottom left), activities (right), use cases (top left).

All things considered, test cases are to be comprehensively and consistently run against requirements distinct in goals (business vs architecture), layers (business, functions, platforms), or formalism (text, stories, use cases, …).

In contrast, test cases are by nature homogeneous, as made of instances of objects, events, and behaviors; ontologies can therefore be used to define and manage these instances directly from models. The example below makes use of instances for types (propulsion, body), a car model (Renault Clio), and a car (58642).

Ontologies can then be used to manage test cases as instances of activities (development tests) or processes (integration and acceptance tests):

Test cases as instances of processes (integration) and activities (development)
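
A sketch of that example as ontological instances (class names and fields are assumptions; the point is the management of test cases as instances):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Instance:
    """Ontological instance: a type, a model, or an individual."""
    identifier: str
    category: str
    features: tuple = ()

# Instances from the example: types, a car model, an individual car.
propulsion = Instance("propulsion", "type")
body = Instance("body", "type")
clio = Instance("Renault Clio", "model", (propulsion, body))
car_58642 = Instance("58642", "individual", (clio,))

@dataclass
class TestCase:
    """Test case managed as an instance of an activity or a process."""
    unit: str                              # activity or process
    instances: list = field(default_factory=list)

development_case = TestCase("check-out activity", [car_58642])
integration_case = TestCase("rental process", [car_58642, clio])

# The same instance (car 58642) is reused across development and
# integration cases, independently of requirements context.
print(car_58642 in development_case.instances,
      car_58642 in integration_case.instances)   # True True
```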

The primary and direct benefit of representing test cases as instances in ontologies is to ensure a seamless integration and reuse of development, integration, and acceptance test cases independently of requirements context.

But the ontological approach has broader and deeper consequences: by defining test cases as instances in line with environment data, it opens the door to their enrichment through deep-learning.

knowledgeable test cases

Names may vary but tests are meant to serve a two-facet objective: on one hand to verify the intrinsic qualities of artefacts as they are, independently of context and usage; on the other hand to validate their features with regard to extrinsic circumstances, present or in a foreseeable future.

That duality has logical implications for test cases:

  • The verification of intrinsic properties can be circumscribed, and therefore carried out based on available information, e.g. design, programming language syntax and semantics, systems configurations, etc.
  • The validation of functional features and behaviors is by nature open-ended with regard to scope and time-frame; it ensues that test cases have to rely on incomplete or uncertain information.

Without practical applications that distinction has been of little consequence, until now: while the digital transformation removes the barriers between test cases and environment data, the spreading of machine learning technologies multiplies the possibilities of exchanges.

Along the traditional approach, test cases rely on three basic sources of information:

  • Syntax and semantics of programming languages are used to check software components (a).
  • Logical and functional models (including patterns) are used to check applications designs (b).
  • Requirements are used to check applications compliance (c).
Traditional (a,b,c) and knowledge driven (d,e,f) test cases

With barriers removed, test cases as instances can be directly aligned with environment data, opening doors to their enrichment, e.g:

  • Random data samples can be mined from environments and used to deal with human instinctive or erratic behaviors. By nature, knee-jerk or latent behaviors cannot be checked with reasoned test cases, yet they do not occur in a void but within known operational or functional circumstances; data analytics can be used to identify these quirks (d).
  • Systems being designed artifacts, components are meant to tally with models for structures as well as behaviors. Crossing operational data with design models will help to refine and hone integration and acceptance test cases (e).
  • Whereas integration tests put the focus on models and code, acceptance tests also involve the mapping of models to business and organizational concepts. As a corollary, test cases are to rely on a broader range of knowledge: external regulations, mined from environments, or embedded in organization through individual and collective skills (f).

Given the immersion of enterprises in digital environments, and assuming test cases are represented as ontological instances, these are already practical opportunities. But the real benefits of knowledge-based test cases are to come from leveraging machine learning technologies across enterprise and knowledge architectures.
