How to present EA to Decision-makers

Preamble

To be brief and to the point, a presentation of enterprise architecture to decision-makers should follow three principles:

  1. Architecture is a visual issue.
  2. Schemas should contain between 8 and 12 related concepts.
  3. Semantics should be unambiguous and specific to the issue.
(Bridget Riley)

To that end, the presentation should focus on architecture as both an activity and its outcome. Applied to the enterprise, architecture has to deal with more than actual edifices and contexts: it encompasses business environments and objectives on one side, organization, processes, and assets on the other. Moreover, enterprise architecture is a continuous and dynamic endeavor, because change and exchanges are part and parcel of enterprises’ life-cycle:

  • At enterprise level, architectures are needed to map organizations and systems to changing environments.
  • At system level, architectures are needed to map changing organizations and processes to supporting systems.

Enterprise architecture can therefore be defined in terms of maps and territories (the outcome), and of the management of changes across and between them (the activity).

Mapping Territories

Compared to its bricks and mortar counterpart, enterprise architecture is a work in progress to be carried out all along enterprises’ life-cycle; hence the need for maps to figure out their environments, organization, assets, and processes:

  • At enterprise level maps are meant to describe business environments on one hand, concepts, objectives, organization and processes on the other hand.
  • At operational level maps are to describe physical and digital environments on one hand, interfaces and platforms on the other hand.

Then, a third level is introduced in between to describe systems and ensure transparency and consistency with regard to territories.

Enterprise Maps & Territories

For all intents and purposes, that ternary view of enterprise architecture holds true independently of labels (layers, levels, tiers, etc.) or perspectives (business, data, applications, technologies, etc.). That genericity appears clearly when maps are aligned with the traditional hierarchy of models: conceptual, logical and functional, and physical.

Managing Changes

Taking a leaf from Stafford Beer’s view of enterprises as viable systems, their sustainability depends on a continuous and timely adaptation to their environment; that cannot be achieved without a dynamic alignment of maps and territories; hence the need for enterprise architectures to provide for:

  • Continuous and consistent attachments between identified elements in maps and territories.
  • Transparency and traceability of changes across maps and territories.

Regarding continuity, the ill-famed Waterfall model epitomizes the detached conception of systems engineering and its implicit assumption that requirements are enough to ensure the alignment of systems with their business environments. To be sure, the plights of Waterfall have already opened a significant rift in the detached paradigm, a rift partially patched by agile development models. But while agile, and more generally iterative, schemes are at their best for business-driven applications with well-defined scope and ownership, they fall short for architecture-oriented ones with overlapping footprints and distributed stakeholders; that’s when maps are required.

Regarding transparency, enterprise architecture misconceptions come from mirroring its physical cousin: using geometrical patterns without being specific about their constitutive elements and, consequently, about the dynamics of interactions.

As it happens, both flaws can be mended by generalizing to systems the well accepted view model of software architecture:

  • Logical view: design of software artifacts.
  • Process view: captures the concurrency and synchronization aspects.
  • Physical view: describes the mapping(s) of software artifacts onto hardware.
  • Development view: describes the static organization of software artifacts in development environments.

On that basis, changes in systems architectures would be managed in four workshops, two focused on territories (enterprise and operations) and two on maps (domains and applications), as sketched below.
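By way of illustration, a minimal sketch of that two-by-two organization; the Python names and concern labels are hypothetical, the point being the split between workshops dealing with territories (enterprise, operations) and workshops dealing with maps (domains, applications):

```python
from dataclasses import dataclass
from enum import Enum

class Focus(Enum):
    TERRITORY = "territory"   # enterprise contexts and actual operations
    MAP = "map"               # symbolic descriptions (models)

@dataclass(frozen=True)
class Workshop:
    name: str
    focus: Focus
    concerns: str

# Hypothetical naming; only the two-by-two structure is taken from the text.
WORKSHOPS = [
    Workshop("Enterprise",   Focus.TERRITORY, "business environments, objectives, organization"),
    Workshop("Domains",      Focus.MAP,       "symbolic descriptions of business objects and activities"),
    Workshop("Applications", Focus.MAP,       "symbolic descriptions of system functionalities"),
    Workshop("Operations",   Focus.TERRITORY, "platforms, deployments, actual processes"),
]

if __name__ == "__main__":
    for w in WORKSHOPS:
        print(f"{w.name:<12} [{w.focus.value}]  {w.concerns}")
```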

Systems Engineering Workshops

Defining engineering workflows in terms of maps and territories is to provide the foundations of model based systems engineering.

FURTHER READING

Declarative Frameworks & Emerging Architectures

Preamble

Ingrained habits die hard, especially mental ones as they are not weighted down by a mortal envelope. Fear is arguably a primary factor of persistence, if only because being able to repeat something proves that nothing bad has happened before.

Live, Die, Repeat (Philippe de Champaigne)

Procedures epitomize that human leaning as ordered sequences of predefined activities give confidence in proportion to generality. Compounding the deterministic delusion, procedures seem to suspend time, arguably a primary factor of human anxiety.

Procedures are Dead-ends

From hourglasses to T.S. Eliot’s handful of dust, sand materializes the human double bind with time, between the will to measure and the fear of ephemerality.

Procedures seem to provide a way out of the dilemma by replacing time with prefabricated frames designed to ensure that things can only happen when required. But with extensive and ubiquitous digital technologies dissolving traditional boundaries, enterprises become directly exposed to competitive environments in continuous mutation; that puts deterministic schemes out of kilter:

  • There is no reason to assume the permanence of initial time-frames for the duration of planned procedures.
  • The blending of organizations with supporting systems means that architectural changes cannot be carried out top-down lest the whole be paralyzed by the management overheads induced by cross expectations and commitments.
  • Unfettered digital exchanges between enterprises and their environment, combined with ubiquitous smart bots in business processes, are to require a fine grained management of changes across artefacts.

These shifts call for a complete upturn of paradigms: event-driven instead of scheduled, bottom-up instead of top-down, model-based instead of activity-driven.

Declarative frameworks: Non Deterministic, Model Based, Agile

The procedural/declarative distinction has its origin in the imperative/declarative programming one, the principle being to specify necessary and sufficient conditions instead of defining the sequence of operations, letting programs pick the best options depending on circumstances.

Applying the principle to enterprise architecture can help to get out of a basic conundrum, namely how to manage changes across supporting systems without putting a halt to enterprise activities.

Obviously, the preferred option is to circumscribe changes to well identified business needs, and carry on with the agile development model. But that’s not always possible as cross dependencies (business, organizational, or technical) may induce phasing constraints between engineering tasks.

As notoriously illustrated by Waterfall, procedural (if not bureaucratic) schemes have long been seen as the only way to deal with phasing constraints; that’s not a necessity: with constraints and conditions defined on artifacts, developments can be governed by the status of those artifacts instead of having to be hard-wired into procedures. That’s precisely what model-based development is meant to do.

And since iterative development models are by nature declarative, agile and model-based development schemes may be natural bedfellows.
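As an illustration of the declarative principle, here is a minimal sketch with hypothetical artifact names and statuses: engineering tasks declare the artifact statuses they require instead of being ordered in a procedure, and the workflow emerges from the state of the artifacts:

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    status: str = "draft"   # e.g. draft, reviewed, approved, implemented

@dataclass
class Task:
    name: str
    preconditions: dict = field(default_factory=dict)  # artifact name -> required status
    effects: dict = field(default_factory=dict)        # artifact name -> resulting status

    def ready(self, artifacts):
        return all(artifacts[a].status == s for a, s in self.preconditions.items())

def run(tasks, artifacts):
    """Fire any task whose declared conditions hold; no predefined ordering."""
    done, progress = set(), True
    while progress:
        progress = False
        for t in tasks:
            if t.name not in done and t.ready(artifacts):
                for a, s in t.effects.items():
                    artifacts[a].status = s
                done.add(t.name)
                progress = True
                print(f"executed: {t.name}")

# Hypothetical example: design can start once the use case is approved,
# coding once the design is reviewed; the ordering emerges from artifact statuses.
artifacts = {"use case": Artifact("use case", "approved"), "design": Artifact("design")}
tasks = [
    Task("code application",  {"design": "reviewed"},    {"design": "implemented"}),
    Task("design components", {"use case": "approved"},  {"design": "reviewed"}),
]
run(tasks, artifacts)
```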

Epigenetics & Emerging architectures

Given their immersion in digital environments and the primacy of business intelligence, enterprises can be seen as living organisms using information to keep an edge in competitive environments. On that account homeostasis becomes a critical factor, to be supported by osmosis, architecture versatility and plasticity, and traditional strategic planning.

Set on a broader perspective, the merging of systems and knowledge architectures on one hand, the pervasive surge of machine learning technologies on the other hand, introduce a new dimension in the exchange of information between enterprises and their environment, making room for emerging architectures.

Using epigenetics as a metaphor of the mechanisms at hand, enterprises would be seen as organisms, systems as organs and cells, and models (including source code) as the genome coded in DNA.

According to classical genetics, phenotypes (actual forms and capabilities of organisms) inherit through the copy of genotypes and changes between generations can only be carried out through changes in genotypes. Applied to systems, it would entail that changes would only happen intentionally, after being designed and programmed into the systems supporting enterprise organization and processes.

Enterprises epigenetics and emerging architectures

The Extended Evolutionary Synthesis considers the impact of non-coded (aka epigenetic) factors on the transmission of the genotype between generations. Applying the same principles to systems would introduce new mechanisms:

  • Enterprise organization and their use of supporting systems could be adjusted to changes in environments prior to changes in coded applications.
  • Enterprise architects could use data mining and deep-learning technologies to understand those changes and assess their impact on strategies.
  • Abstractions would be used to consolidate emerging designs with existing architectures.
  • Models would be transformed accordingly.

While applying the epigenetics metaphor to enterprise mutations has obvious limitations, it nonetheless puts a compelling light on two necessary conditions for emerging structures:

  • Non-deterministic mechanisms governing the way changes are activated.
  • A decrypting mechanism between implicit or latent contents (data from digital environments) to explicit ones (information systems).

The first condition is to be met with agile and model based engineering, the second one with deep-learning.

FURTHER READING

Semantic Interoperability: Stories & Cases

Preamble

For all intents and purposes, digital transformation has opened the door to syntactic interoperability… and thus raised the issue of the semantic one.

Cooked Semantics (Urs Fischer)

To put the issue in perspective, languages combine four levels of interpretation:

  • Syntax: how terms can be organized.
  • Lexical: meaning of terms independently of syntactic constructs.
  • Semantic: meaning of terms in syntactic constructs.
  • Pragmatic: semantics in context of use.
Languages levels of interpretation

At first, semantic networks (aka conceptual graphs) appear to provide the answer; but that’s assuming flat ontologies (aka thesaurus) within which all semantics are defined at the same level. That would go against the objective of bringing the semantics of business domains and systems architectures under a single conceptual roof. The problem and a solution can be expounded taking users stories and use cases for examples.

Crossing stories & cases

Besides the difference in perspectives, users’ stories and use cases stand at a methodological crossroads, the former focused on natural language, the latter on modeling. Using ontologies to ensure semantic interoperability is to enhance both traceability and transparency, while making room for their combination if and when called for.

Set at the inception of software engineering processes, users’ stories and use cases mark an inflexion point between business requirements and supporting systems’ functionalities: that is where and when are determined (a) the nature of interfaces between business processes and systems components, and (b) how to proceed with development models, iterative or model-based.

Users’ stories are part and parcel of the Agile development model: its backbone, engine, and fuel. But as far as Agile is concerned, users’ stories introduce a dilemma: once told, stories are meant to be directly and iteratively put down in code; documenting them in words would bring back traditional requirements and phased development. Hence the benefits of sorting out and writing up the intrinsic elements of stories, so as to ensure the continuity and consistency of engineering processes, whether directly to code or through the mediation of use cases.

To that end semantic interoperability would have to be achieved for actors, events, and activities.

Actors & Events

Whatever architectures or modeling methodologies, actors and events are sitting on systems’ fences, which calls for semantics common to enterprise organization and business processes on one side of the fence, supporting systems on the other side.

To begin with events: the distinction between external and internal ones is straightforward for use cases, because their purpose is precisely to describe the exchanges between systems and environments. Not so for users’ stories: at that stage the part to be played by supporting systems is still undecided, and so, by consequence, is the distinction between external and internal events.

With regard to actors, and to avoid any ambiguity, a semantic distinction could be maintained between roles, defined by organizations, and actors (UML parlance), for roles as enacted by agents interacting with systems. While roles and actors are meant to converge with analysis, understandings may initially differ across the fence between users stories and use cases, to be reconciled at the end of the day.

Representations should support the semantic distinctions as well as trace their convergence.

That would enable use cases and users stories to share overlapping yet consistent semantics for primary actors and external events:

  • Across stories: actors contributing to different stories affected by the same events.
  • Along processes: use cases set for actors and events defined in stories.
  • Across time-frames: actors and events first introduced by use cases before being refined by “pre-sequel” users stories.

Such ontology-based representations are to support full iterative as well as parallel developments independently of the type of methods, diagrams or documents used by projects.
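A minimal sketch of such a shared representation, with hypothetical names: roles (organization level) and actors (UML parlance, system level) are kept distinct yet anchored to one another, and external events are referenced by stories and use cases alike:

```python
from dataclasses import dataclass, field

@dataclass
class Role:                     # defined by the organization (enterprise view)
    name: str

@dataclass
class Actor:                    # UML parlance: a role as enacted when interacting with systems
    name: str
    enacts: Role

@dataclass
class ExternalEvent:            # events crossing the systems' fence
    name: str

@dataclass
class Story:                    # user story (business perspective)
    title: str
    roles: list = field(default_factory=list)
    events: list = field(default_factory=list)

@dataclass
class UseCase:                  # use case (system perspective)
    title: str
    primary_actor: Actor = None
    triggering_event: ExternalEvent = None

# Overlapping yet consistent semantics: the same event and the same role/actor
# pair can be referenced by a story and by a use case, and reconciled later on.
customer = Role("customer")
renter = Actor("renter", enacts=customer)
booking_request = ExternalEvent("booking request")

story = Story("book a car", roles=[customer], events=[booking_request])
case = UseCase("process booking", primary_actor=renter, triggering_event=booking_request)
print(story, case, sep="\n")
```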

Activities

Users’ stories and use cases are set in different perspectives, business processes for the former, supporting systems for the latter. As already noted, their scopes overlap for events and actors, which can be defined upfront provided a double distinction is made between roles (enterprise view) and actors (systems view), and between external and internal events.

Activities raise more difficulties because they are meant to be defined and refined across the whole of engineering processes:

  • From business operations as described by users to business functions as conceived by stakeholders.
  • From business logic as defined in business processes to their realization as defined in diagram sequences.
  • From functional requirements (e.g users authentication or authorization) to quality of service.
  • From primitives dealing with integrity constraints to business policies managed through rules engines.

To begin with, if activities have to be consistently defined for both users’ stories and use cases, their footprint should tally with the description of actors and events stipulated above; taking a leaf from Aristotle’s rule of the three units, activity units should therefore:

  • Be triggered by a single event initiated by a single primary actor.
  • Be located into a single physical space with all resources at hand.
  • Be timed by a single clock controlling access to all resources.

On that basis, the refinement of descriptions could proceed according to the nature of requirements: business (users’ stories), or functional and quality of service (use cases).
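By way of example, a minimal sketch (hypothetical names) of an activity unit constrained by the three units rule, one triggering event and primary actor, one location, one clock:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActivityUnit:
    name: str
    triggering_event: str     # a single event...
    primary_actor: str        # ...initiated by a single primary actor
    location: str             # a single physical space with all resources at hand
    clock: str                # a single clock controlling access to resources

    def check(self):
        # The rule is structural: exactly one value per unit, no lists allowed.
        for f in ("triggering_event", "primary_actor", "location", "clock"):
            assert isinstance(getattr(self, f), str), f"{f} must be a single value"

check_out = ActivityUnit(
    name="check-out",
    triggering_event="customer returns vehicle",
    primary_actor="renter",
    location="rental agency desk",
    clock="agency business hours",
)
check_out.check()
```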

Activities (execution units) should tally with roles, events, and locations.
Use cases wrap computation independent activities into transactions.

As far as ontologies are concerned, the objective is to ensure the continuity and consistency of representations independently of modeling tools and methodologies. For activities appearing in users stories and use cases, that would require:

  • The description of activities in relation with their business background, their execution in processes, and the corresponding functions already supported by systems.
  • The progressive refinement of roles (users, devices, other systems), location, and resources (objects or surrogates).
  • A unified definition of alternatives in stories (branches) and use cases (extension points).

The last point is of particular importance as it will determine how business and functional rules are to be defined and controls implemented.

Knitting semantics: symbolic representations

The scope and complexity of semantic interoperability can be illustrated a contrario by a simple activity (checking out) described at different levels with different methods (process, use case, user story), possibly by different people at different times.

The Check-out activity is first introduced at business level (process), next a specific application is developed with agile (user story), and then extended for variants according to channels (use case).

Semantic interoperability between projects, domains, and methods.

Assuming unfettered naming (otherwise semantic interoperability would be a windfall), three parties can be mentioned under various monikers for renters, drivers, and customers.

In a flat semantic context renter could be defined as a subtype of customer, itself a subtype of party. But that option would contradict the neutrality objective as there is no reason to assume a modeling consensus across domains, methods, and time.

  • The ontological kernel defines parties and actors, as roles associated to agents (organization level).
  • Enterprises define customers as parties (business model).
  • Business units can define renters in reference to customers (business process) or directly as a subtype of role (user story).
  • The distinction between renters and drivers can be introduced upfront or with use cases’ actors.

That would ensure semantic interoperability across modeling paradigms and business domains, and along time and transformations.
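A minimal sketch of that layered approach, with hypothetical names and structures: each term is anchored at the level where it is actually defined, and lineages are resolved across layers rather than through a single flat subtype hierarchy:

```python
# A hypothetical layered ontology: terms are anchored at the level where they
# are actually defined, instead of a single flat subtype hierarchy.
KERNEL = {                                  # ontological kernel (organization level)
    "party": {"kind": "concept"},
    "role":  {"kind": "concept", "associated_to": "agent"},
}
ENTERPRISE = {                              # business model level
    "customer": {"is_a": "party", "defined_in": "business model"},
}
BUSINESS_UNITS = {                          # business process / user story / use case level
    "renter": {"is_a": "customer", "defined_in": "business process"},
    # alternatively, a unit may define renter directly as a subtype of role:
    # "renter": {"is_a": "role", "defined_in": "user story"},
    "driver": {"is_a": "renter", "defined_in": "use case"},
}

def resolve(term, layers):
    """Walk the layered definitions upward to find a term's lineage."""
    lineage = [term]
    while True:
        for layer in layers:
            if term in layer and "is_a" in layer[term]:
                term = layer[term]["is_a"]
                lineage.append(term)
                break
        else:
            return lineage

print(resolve("driver", [BUSINESS_UNITS, ENTERPRISE, KERNEL]))
# -> ['driver', 'renter', 'customer', 'party']
```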

Probing semantics: metonymies and metaphors

Once in-depth foundations are established, and assuming built-in basic logic and lexical operators, semantic interoperability is to be carried out with two basic linguistic contraptions: metonymies and metaphors.

Metonymies and metaphors are linguistic constructs used to substitute a word (or a phrase) for another without altering its meaning, respectively through extensions and intensions, the former standing for the actual set of objects and behaviors, the latter for the set of features that characterize these instances.

Metonymy relies on contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, in Washington DC, each term can be used in place of the others.

Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)

Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, with leftovers taken out of the picture.

Metaphor uses similarity to match descriptions

Compared to basic thesaurus operators for synonymy, antonymy, and homonymy, which are set at lexical level, metonymy and metaphor operate at conceptual level, the former using set of instances (extensions) to probe semantics, the latter using descriptions (intensions).
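Before turning to stories and cases, the two probes can be sketched as follows, under admittedly simplistic assumptions (set overlap as a proxy for contiguity and similarity, arbitrary thresholds, hypothetical instances and features):

```python
def metonymy(extension_a, extension_b, threshold=0.7):
    """Probe substitutability through contiguity of extensions (shared instances)."""
    shared = extension_a & extension_b
    return len(shared) / max(len(extension_a | extension_b), 1) >= threshold

def metaphor(intension_a, intension_b, threshold=0.5):
    """Probe substitutability through similarity of intensions (shared features),
    leftovers being taken out of the picture."""
    shared = intension_a & intension_b
    return len(shared) / max(min(len(intension_a), len(intension_b)), 1) >= threshold

# Hypothetical data from the car-rental example:
renters   = {"p01", "p02", "p03"}                      # instances observed in operations
customers = {"p01", "p02", "p03", "p04"}
renter_features = {"identified", "pays", "signs contract", "drives"}
driver_features = {"identified", "drives", "licensed"}

print(metonymy(renters, customers))                    # True: extensions largely overlap
print(metaphor(renter_features, driver_features))      # True: enough shared features
```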

Applied to users stories and use cases:

  • Metonymies: terms would be probed with regard to actual sets of objects, actors, events, and execution paths (data from operations) or mined from digital environments.
  • Metaphors: terms for stories, cases, actors, events, and activities would be probed with regard to the structure and behavior of associated descriptions (intensions).

Compared to the shallow one set at thesaurus level for terms, deep semantic interoperability encompasses all ontological dimensions, from actual instances to categories, aspects, and concepts. As such it can take full advantage of digital transformation and deep learning technologies.

further reading

EA Capacity & Maturity

Preamble

Appraising enterprises’ capability and maturity means navigating between the fuzzy depths of business models and the marked features of supporting systems.

Keeping in touch with environment (Urs Fischer)

Business capabilities are by nature a fleeting lot to assess, considering the innate diversity and volatility of circumstances and the fact that successes belong to the exception more than the rule. By contrast, the assessment of systems capabilities is much easier, as they can be defined in architectural terms.

Digital transformation may help to solve the dilemma by dissolving it: given the merging of systems and organization, the key success factor for enterprises is their capacity to assess changes and opportunities and adjust their processes and architectures accordingly. On that account, performances are to depend on the dynamic alignment of enterprises representations (maps) with their business and technological environments (territories).

enterprise architecture & environments

Based on a broadly accepted architecture paradigm epitomized by the Zachman framework, and figured by the Pagoda blueprint, enterprise environments and architectures are to be defined along three levels (aka tiers or layers):

Enterprises must align their maps and territories
  • Data level, for digital environment, operations, and platforms; described by physical and analytical models.
  • Information level, for systems and engineering, described by logical and functional models.
  • Knowledge level, for business objectives and organization; described by conceptual and process models.

That framework can be used to assess enterprises’ capacity to change independently of business specificities.

DEALING WITH CHANGES

Whereas absolute measurements are tied to valuation contexts, relative ones can be ecumenical, hence the benefit of targeting the capacity to change instead of trying to measure capacity by itself.

As far as enterprise architectures are concerned, changes can originate from business or technological environments, the former at process level, the latter at application level.

To begin with, as much change as possible should be dealt with at application level through organizational, functional, or operational adjustments, without affecting architectural assets. That is to be achieved through architectures’ versatility and plasticity (aka agility).

When architectural changes are needed, their footprint can be layered in terms of Model Driven Architecture (MDA), i.e. computation independent (CIM), platform independent (PIM), and platform specific (PSM) models.

Change in Enterprise Architectures

Ideally, changes should spread top-down from computation independent models to platform specific ones, with or without affecting platform independent ones. On that account, the footprint of changes rooted in technical environment should be circumscribed to platforms adaptations and charted by platform specific models.

By contrast, changes rooted in business environment could induce changes in any or all architecture layers:

  • Platform specific (PSM), e.g when business logic is implemented by rules engines.
  • Platform independent (PIM), e.g new business functions.
  • Computation independent (CIM), e.g new business processes.

That model-based approach can be used to define enterprise changes in terms of entropy, as sketched below.
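A minimal sketch of that layering, with hypothetical change categories and footprints (only the CIM/PIM/PSM layers themselves come from MDA):

```python
# Hypothetical classification of changes by their MDA footprint:
# CIM (computation independent), PIM (platform independent), PSM (platform specific).
FOOTPRINTS = {
    "new business process":            {"CIM", "PIM", "PSM"},
    "new business function":           {"PIM", "PSM"},
    "business logic in rules engine":  {"PSM"},
    "platform upgrade":                {"PSM"},
    "process parameter tuning":        set(),   # purely operational, no architectural footprint
}

def layer_of_change(change):
    """Return the highest architecture layer affected, or None for operational changes."""
    footprint = FOOTPRINTS.get(change, set())
    for layer in ("CIM", "PIM", "PSM"):      # top-down order
        if layer in footprint:
            return layer
    return None

for c in FOOTPRINTS:
    print(f"{c:<35} -> {layer_of_change(c)}")
```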

Entropy & capacity to change

Change is a matter of time, especially for business, and a delicate balance is to be achieved between assessments (which improve when given time until they become redundant), and commitments (which risk missing opportunities if kept waiting for too long).

The issues can be expressed by applying the OODA (Observation, Orientation, Decision, Action) loop to system engineering:

  • Changes in business and technology environments are observed at digital (e.g data mining) or conceptual level (e.g business intelligence) (a).
  • Assessment deals with the reliability of observations as well as their meaning with regard to enterprise objectives, organization, and systems (b).
  • Policies are updated and decisions made regarding the adjustment of objectives, resources, organization, or assets (c).
  • Decisions are implemented as technical and business commitments (d).

On that basis, the capacity to change is to depend on:

  • Osmosis: quality (accuracy, reliability,…), delay, and automaticity of observations with regard to changes in environments (data mining).
  • Operational traceability across decisions, actions and observations (process mining, verification).
  • Alignment of business and digital environments (validation).
  • Consistency of architecture models (CIMs, PIMs, PSMs).
  • Value chains and lean engineering: business logic (CIMs) directly embedded in software designs (PSMs).
Capacity to Change

With the benefits of digital transformation, these dimensions should be defined in terms of information processing.

osmosis & blind spots

The generalization of digital exchanges between enterprises and their environment brings back the concept of entropy, defined by cybernetics as the quantum of energy within a system that cannot be put to use. Applied to enterprises, entropy can be understood as a blind spot on environment data, arguably a critical hindrance to their capacity to move and adjust. The primary objective should therefore be to minimize that blind spot, i.e. to maximize the outcome of data processing.

As digital environments bring about a level data playing field, enterprises have to make better sense of it in order to get a competitive edge; hence the importance of the distinction between data and information, the former obtained from the environment, the latter obtained through the processing of the former. On that basis, the capacity to change, defined as the opposite of entropy, is to be determined by the way data is processed into information.

Digital osmosis, i.e. the exchange of digital data between enterprises’ operations and environments, is clearly a primary factor: inbound streams can be mined so as to provide comprehensive and timely snapshots, while outbound streams can be woven into business processes, enhancing the capacity to translate decisions into action.

Digital osmosis could also bolster lean engineering with the direct integration of (digital) business logic into software (e.g using rules engines), reinforcing the shortcuts between observations and decisions whenever orientation can be avoided. That would also enhance traceability between platform specific (PSMs) and operational models, as well as the monitoring of actions and the analysis of feedbacks (process mining).

Osmosis could be easily (if not accurately) estimated with the ratio between digitized flows and the whole of data flows, with measurements weighted by the delays between observations and data processing.
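A rough sketch of such an estimate, with hypothetical flow records and an assumed exponential decay for the delay weighting (any monotone penalty would do):

```python
def osmosis(flows, half_life_hours=24.0):
    """Estimate osmosis as the share of digitized flows in all data flows,
    each digitized flow weighted down by its observation-to-processing delay."""
    def weight(delay_hours):
        return 0.5 ** (delay_hours / half_life_hours)   # assumed decay function

    digitized = sum(f["volume"] * weight(f["delay_hours"]) for f in flows if f["digitized"])
    total = sum(f["volume"] for f in flows)
    return digitized / total if total else 0.0

# Hypothetical flows:
flows = [
    {"volume": 1000, "digitized": True,  "delay_hours": 1},    # near real-time feed
    {"volume": 400,  "digitized": True,  "delay_hours": 48},   # batch consolidation
    {"volume": 600,  "digitized": False, "delay_hours": 0},    # paper or verbal exchanges
]
print(round(osmosis(flows), 2))
```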

A taxonomy of changes

Digital and business environments are not to be confused, and there is no reason to assume that their respective changes tally. As it happens, digital transformation may reinstall organizations at the nexus of changes by providing a powerful leverage on systems; and if entropy is considered, that is to be achieved through symbolic representations, aka models.

While names may vary, a distinction is generally made between changes according to their horizon: shorter for operational or tactical ones, longer for strategic ones. As so often with quantitative classifications, that understanding has been of limited use given the diversity of time-frames across industries. The digital transformation opens the door to a qualitative approach, defining changes according to the nature of their footprint:

  • From the business perspective, it should restate the primacy of organization for the harnessing of IT benefits.
  • From the architecture perspective, it would rank assets according to “digital modality”: symbolic (information, knowledge), tangible (e.g platforms), or a combination of both (functional architecture).
Changes can be defined according to the digital modality of their footprint

Taking the strategic perspective, changes in technical or economic factors are to affect assets and processes for a large and often undetermined number of production cycles; they must consequently be set in time-frames extending beyond managed horizons (actual chronologies depend on industry specificities). Despite regular updates, supporting models and hypotheses may lose their relevance during the intervals, with the resulting entropy hampering the assessment of opportunities and policies. Such discrepancies can be circumscribed if planned organization and computation independent models (CIMs) are systematically checked against observations.

Compared to strategic ones, functional changes can be aligned with a limited number of production cycles, which irons out most of the discrepancies between maps and territories associated with strategic planning. Instead, entropy could arise from a lack of transparency and traceability between the models used to map organization and processes (CIMs) to functional (PIMs) and technical (PSMs) architectures.

Finally, operational changes can be carried out at process level, without affecting architectures. In that case entropy could be the result of a misalignment of commitments and observations granularity.

On that basis, planning and managing changes in digital and business environments should be driven by models.

Entropy & maturity

As defined by cybernetics, entropy can be explained by the discrepancies between environment states (aka micro-states) and their representation (macro-states). Applied to enterprises environments and architectures, it would mean territories and operations for the former, maps and policies for the latter.

On that account the maturity of an organization should be assessed with regard to its ability to manage changes within and across environments:

  • Consistency of changes within environments: between territories and operations (digital environments), between maps and policies (business environments).
  • Validity of changes across environments: between maps and territories, and between policies and operations.
Consistency is checked within environments, validity is checked across environments

Maturity levels could then be set according to the scope of managed entropy:

  1. Digital osmosis: all exchanges with environments come with a digital counterpart, sorted with regard to operational data, data attached to managed information, and environment data detached from managed information.
  2. Digital value chains: digital integration of business and engineering processes ensuring transparency and traceability along value chains.
  3. Organizational pivot: value chains can be redefined as to make the best of information assets.
  4. Knowledge based architectures: enterprise layers (platforms, systems, organization) are aligned with data, information, and knowledge layers, leveraging the benefits of machine learning across enterprise organization and systems.

As it’s safe to assume that scaling maturity levels across enterprise architectures can only be carried out progressively, progress has to be backed up by a comprehensive and ecumenical repository of physical and symbolic resources and assets, and be supported by workflows combining iterative (continuous and business-driven) and model-based (phased, architecture-driven) engineering processes.

FURTHER READING

Business Capabilities as Philosophers’ Stone ?

Preamble

To hear the buzz around business capabilities one would expect some consensus about basic principles as well as an established track record. Since there is little to be found on either account, should the notion be seen as a modern philosophers’ stone ?

People, System, Environment

Clear evidence can be found by asking two questions: what should be looked for, and why it can’t be found.

What is looked for

Business capabilities can be understood as a modern avatar of the medieval philosophers’ stone, an alchemical substance capable of turning base metals into gold.

In the context of corporate governance, that would mean a combination of assets blueprints and managing principles paving the way to success.

But such a quest is to err between the sands of businesses specificities and the clouds of accounting generalities.

At ground level enterprises have to mark their territory and keep it safe from competitors. Whatever the means they use (niche segments, effective organization, monopolistic situations, …), success comes from making a difference.

With hindsight, revealing singularities may be discovered among the idiosyncrasies of business successes, but that can only be done from accounting heights, where capabilities are clouded by numbers.

Why it can’t be found

The fallacy of a notion can also be established a contrario if, assuming the existence of a philosophers’ stone, the same logic would also demonstrate the futility of the quest.

Such a reasoning appears more like a truism when applied to business capabilities: insofar as business competition is concerned success is exclusive and cutting edges are not shared. It ensues that assuming business capabilities could be found, they would by the same move become obsolete and be instantly dissolved.

What should be looked for

As far as business environments are concerned, success is by nature singular and transient, and it consequently depends on sustaining a balancing act between assets and opportunities; that’s what a business model is meant to achieve.

Taking a leaf from Alexander Osterwalder and Yves Pigneur’s handbook, nine aspects should be considered:

  1. Customer Segments: categories of people or organizations an enterprise aims to serve.
  2. Value proposition: bundles of products and services meant to create value to customer segments.
  3. Channels: how value propositions are to be delivered to customer segments.
  4. Customer Relationships: policies to be carried out according to value propositions and customer segments.
  5. Revenue Streams: cash generated by customer segments and value propositions.
  6. Key Resources.
  7. Key Activities.
  8. Key Partnerships.
  9. Cost Structure.

The first five aspects are to be mapped to enterprise architecture capabilities, with feasibility assessed against the remaining ones (points 6 to 9), as sketched below.
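A minimal sketch of that mapping; the capability labels on the right-hand side are illustrative assumptions, not part of the canvas:

```python
# The first five aspects of the business model (Osterwalder & Pigneur) are mapped
# to enterprise architecture capabilities; the last four (6-9) serve feasibility checks.
# Capability labels below are hypothetical placeholders.
BUSINESS_MODEL_TO_EA = {
    "Customer Segments":      "roles, partners, and market interfaces",
    "Value Proposition":      "business objects and services",
    "Channels":               "physical and digital entry points",
    "Customer Relationships": "business processes and logic",
    "Revenue Streams":        "events and business cycles",
}
FEASIBILITY_CHECKS = ["Key Resources", "Key Activities", "Key Partnerships", "Cost Structure"]

for aspect, capability in BUSINESS_MODEL_TO_EA.items():
    print(f"{aspect:<24} -> {capability}")
print("feasibility assessed against:", ", ".join(FEASIBILITY_CHECKS))
```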

FURTHER READING

Focus: Entropy & Homeostasis

Preamble

As defined by thermodynamics, entropy is a measure of the energy within a system that cannot be harnessed for useful work; cybernetics took over the concept, making entropy a pillar of information theory.

Figuring Digital Matter (Marcelo Cidade)

Notwithstanding the focus put on viable systems and organizations (as epitomized by the pioneering work of Stafford Beer), cybernetics’ actual imprint on corporate governance has been frustrated by the correspondence assumed between information and energy. But the immersion of enterprises into digital environments brings entropy back in front, along with a paradigmatic shift out of thermodynamics.

domain: Physics vs Economics

According to the second law of thermodynamics, the entropy of an isolated system is a given that cannot be reduced, and cybernetics takes a similarly conservative view of information. Economic laws, if there is such a thing, differ: as far as business is concerned, information is not to be found in the commons but comes from the processing of raw data.

As a matter of fact, thermodynamic stability is meaningless in economics, because business information is not a given and uniform quantity but a fluctuating and polymorphous one. It ensues that, for enterprises in competitive environments, measures of entropy set in isolation are pointless: taking a leaf from Lewis Carroll’s Red Queen, the entropy of any given enterprise can only be assessed in relation to its competitors.

Taking a general perspective, entropy is meant to be observed either between the two ends of communication channels, or at the boundaries of complex systems (animals, machines, or organizations). While the former can be seen as a formal coding issue, the latter is of particular relevance for enterprises immersed in digital environments.

Departing from the thermodynamics view of entropy as a measure of a system’s thermal energy unavailable for conversion into mechanical work, economics will consider unexplained information, i.e. information that cannot be acted upon.

But leaving physics also means forsaking the comfort of its metrics. That’s not a problem for communication entropy, which can be dealt with as a coding issue; but the difficulty appears when counts of digits have to be replaced by measures of knowledge.

complexity: microstates vs macrostates

When there are no clear and reliable metrics to put light on it, entropy becomes a black hole defined by disorder or randomness, the former in reference to the order borne out by classifications, the latter in reference to the predictability borne out by probabilities. Since both approaches deal with the relationship between configurations and information, the issue can be summarized with:

  • A quantum of hypothetical information (dashed line).
  • The complexity of identified items (aka microstates).
  • The complexity of representations (aka macrostates).
  • The quantum of information accounted for, ordered or predicted.
How to assess what is missed ?

As stated by the basic law of thermodynamics, to be of any energetic use the internal heat of a system has to be lower than the external one. Generalized in terms of complexity, it means that the quantum of information accounted for will first increase with the number of configurations considered, reach a maximum, and then decrease until the complexity of configurations (aka internal heat) equates the complexity of the target (aka external heat).
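Under standard information-theoretic assumptions, that blind spot can be sketched as the entropy of observed microstates left unaccounted for by their macrostate representation; the observations and groupings below are purely hypothetical illustrations, not business metrics:

```python
from collections import Counter
from math import log2

def entropy(probabilities):
    return -sum(p * log2(p) for p in probabilities if p > 0)

def blind_spot(microstates, macro_of):
    """H(micro) - H(macro): information left unaccounted for by the representation."""
    n = len(microstates)
    h_micro = entropy(c / n for c in Counter(microstates).values())
    h_macro = entropy(c / n for c in Counter(map(macro_of, microstates)).values())
    return h_micro - h_macro

# Hypothetical observations (microstates) and two candidate representations (macrostates):
observations = ["a1", "a2", "b1", "b2", "b3", "c1", "c2", "c3"]
coarse = lambda m: m[0]                # groups observations by their first letter
flat   = lambda m: "all"               # a single, undifferentiated macrostate

print(round(blind_spot(observations, flat), 2))     # everything is missed
print(round(blind_spot(observations, coarse), 2))   # part of the information is accounted for
```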

Translated to economics, that scheme must be restated in terms of digital environments and business information.

Entropy: From data to information

The generalization of digital exchanges between enterprises and their environment points to the necessary distinction between data (exchanged flows) and information (managed assets). What happens in-between can be defined along two perspectives:

  • Representation: whether the exchanges are carried out at digital level (data observations of microstates) or in association with symbolic representations (models of macrostates) of managed information.
  • Coupling: whether the processing of data is tied to operations or carried out independently.
Entropy Layers

The issue of entropy can be restated accordingly:

  1. The direct consequence of the digital transformation is to increase the osmosis between enterprises and their environment, with data flows bypassing traditional boundaries and directly feeding operational processes (bottom right).
  2. As a result, data analytics can be integrated with business processes, improving operational decision-making. In terms of entropy the outcome would be a more effective use of information models (bottom left).
  3. Next, information models (aka symbolic representations) can themselves be changed as to improve their effectiveness, i.e reduce entropy at process as well as architecture level (top left).
  4. Last but not least, deep learning could be applied to digitalized contexts (aka microstates) in order to derive new symbolic representations (aka macrostates) (top right).

On that basis entropy policies should be set at operational and strategic levels, the former dealing with processes, the latter with architectures.

Homeostasis: From data to knowledge

As far as enterprises are concerned, homeostasis means the adaptation of organizations and systems to changes in competitive environments. To that end, policies set along the entropy paradigm could draw on improving the perception of microstates (data level) as well as the representation of macrostates (information level).

At data level, the perception of changes depends on the osmosis between systems and environments. To that end data mining is to be used to hone the sampling of populations and refine microstates.

At symbolic level (representations), the economics perspective differs from physics on two critical aspects:

  • Contrary to heat, economic information is not a constant, which means that macrostates are to be continuously updated.
  • Moreover, the economic playground is not homogeneous but combines physical (e.g. demographics or climate), socio-economic (e.g. income or education) and symbolic (e.g. politics or culture) microstates, which means that macrostates are to be differentiated.
Improving entropy

With or without differentiated macrostates, policies are to rely on two categories of models:

  • Extensional ones target environments, dealing with the analysis of business context, operations, and changes (descriptive models), as well as what should be expected (predictive models).
  • Intensional ones deal with enterprise architectures, current or planned, at processes as well as architecture level (prescriptive models).

As figured above, representations and policies are to be combined, hence the need to anchor them to enterprise architectures, as can be illustrated with the Pagoda blueprint:

Architectures & Homeostasis (crosses and numbers refer to the table above)
  • Populations of instances (microstates) are obtained from digital environments as well as from processes execution (1). Data analytics are used to improve descriptive and predictive models (operational macrostates), and consequently the osmosis between processes and environment (2).
  • Architecture improvements can be achieved through changes in prescriptive models (structural macrostates). Compared to strategic planning carried out top-down from business models and requirements, homeostasis also relies on bottom-up forces which can combine with business models and coalesce into emerging architectures (3).
  • Business and organization models are arguably a primary factor in sorting out data, respectively from environments and systems; and being by nature essentially symbolic, they are more pliable than systems models (0). Homeostasis could therefore bypass architectural macrostates, with models of business and organization emerging directly from digital environments through data mining and deep learning (4).

Such alignment of enterprise architectures and symbolic representations is to greatly enhance the transparency and traceability of digital transformations by organising changes and policies with regard to their nature (physical, functional, organizational, or business) and decision-making horizon (operational or strategic).

FURTHER READING

Squared Outline: Languages

As a capability of living organisms, languages are best understood in terms of communication.

That understanding is of particular interest for enterprises immersed in digital environments inhabited by hybrids with deep learning capabilities.

  1. Languages begin with the need for direct (here) and immediate (now) communication. While there is no time for explanations, messages must convey some meaning, if only to distinguish friends from foes. Hence the use of signs pointing to categories of objects or phenomena. That’s the language’s lexical layer, instantly linking observations (data) to information (bottom right).
  2. Rules governing the combination of signs follow soon, because more has to be communicated about circumstances and what is to be done with them. That’s the language’s syntactic layer, linking observations (data) to current information (top right).
  3. The breakthrough comes with symbolic representation: once disentangled from immediate circumstances, communications can encompass whatever is deemed relevant in contexts and concerns. That’s the language’s semantic layer, weaving together information and knowledge (top left).
  4. The cognitive ability to “manipulate” symbolic representations (aka models) independently of circumstances opens the door to any kind of construction. That’s the language’s pragmatic layer, meant to put knowledge to actual use (bottom left).

That functional taxonomy can be usefully applied to the digital transformation of enterprise architectures, the first layer aligned with data, the second and third with information, and the fourth with knowledge.
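A minimal sketch of that alignment, with the layer names and mappings taken from the taxonomy above (the "links" labels paraphrase the list items and are otherwise assumptions):

```python
# Hypothetical mapping of the four language layers to the data, information,
# and knowledge levels of enterprise architectures.
LANGUAGE_LAYERS = {
    "lexical":   {"links": ("observations (data)", "information"),          "aligned_with": "data"},
    "syntactic": {"links": ("observations (data)", "current information"),  "aligned_with": "information"},
    "semantic":  {"links": ("information", "knowledge"),                    "aligned_with": "information"},
    "pragmatic": {"links": ("knowledge", "action (putting it to use)"),     "aligned_with": "knowledge"},
}

for layer, d in LANGUAGE_LAYERS.items():
    a, b = d["links"]
    print(f"{layer:<10} weaves {a} and {b}; aligned with the {d['aligned_with']} level")
```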

FURTHER READING