As far as enterprise architecture is concerned, the issue of scale is clouded by two confusions: one between processes and structures, the other between space and time. That square of concerns lies at the core of the discipline.
Scaling Time (Tycho Brahe)
The Matter of Time
Even before the digital unfolding of environments, everybody agreed that business is all about timing; and yet, that critical dimension remains a side issue in most enterprise architecture frameworks, which consequently fail to deal with enterprises' ability to change and adapt in competitive environments.
With regard to time, the business perspective is said to be synchronic because it must continuously tally with environments' constraints, opportunities, and risks.
By contrast, the engineering perspective is said to be diachronic because, once fastened to requirements, developments are supposed to proceed according to their own time-spans.
Mixing Timescales
For enterprise architects, pairing business and engineering momentum may look like a Fourier transform that would decompose enterprise architecture into piecemeal capabilities to be adjusted to the flow of business circumstances. But assets are by nature discrete: changes cannot easily be ironed out, and some mechanism is necessary to align business and engineering time-frames, the former set at enterprise level and used to align enterprise architecture capabilities with business objectives, the latter set at system level and used to manage developments.
Agile methodologies solve the problem by assuming continuous deliveries disconnected from external schedules and by folding projects into self-contained time warps. As a result, decidedly non-agile procedures, along with scaling attempts of debatable value, are used to carry agile projects up to system level.
As it happens, the iterative model can be upgraded to architecture level, enabling business-driven changes to be linked to system-based ones without breaking agile principles:
Projects’ scope, objectives, and invariants are set with regard to enterprise architecture capabilities.
Iterations combine requirements analysis, development, and acceptance.
Increments and deliverables are defined dynamically, contingent on scope and invariants.
Exit conditions (aka deliveries) are defined with regard to quality of services and technical requirements.
So-called architecture backlogs could thus be added to coordinate self-contained developments, standalone applications as well as system business functions, with invariants providing the stable frame.
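By way of illustration, here is a minimal sketch of such a backlog item; the names, scope, and conditions are assumptions, echoing the iteration principles listed above:

```python
from dataclasses import dataclass, field

@dataclass
class ArchitectureBacklogItem:
    """A self-contained development coordinated at architecture level.

    Names are hypothetical; scope, invariants, and exit conditions
    mirror the iteration principles listed above.
    """
    scope: str                       # set with regard to EA capabilities
    invariants: list[str]            # fixed frame, not to be changed by iterations
    increments: list[str] = field(default_factory=list)       # defined dynamically
    exit_conditions: list[str] = field(default_factory=list)  # QoS & technical

    def can_deliver(self, satisfied: set[str]) -> bool:
        # delivery (exit) requires every exit condition to be satisfied
        return all(c in satisfied for c in self.exit_conditions)

# example: a standalone application coordinated through an architecture backlog
item = ArchitectureBacklogItem(
    scope="customer check-out",
    invariants=["single identification mechanism", "shared customer surrogate"],
    exit_conditions=["response time < 1s", "audit trail enabled"],
)
print(item.can_deliver({"response time < 1s", "audit trail enabled"}))  # True
```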
But the coordination issue remains between architecture backlogs, and adding procedures or committees shouldn't be an option, as it would seriously curb enterprise agility. By contrast, model-based solutions can ensure a constant and consistent adaptation of enterprise architectures to their environment.
To become a discipline, enterprise architecture has to contrive some shared modeling paradigm; that can be done from established ones like architecture layers, MVC (Model-View-Controller), MDA (Model Driven Architecture), or the 4+1 view model.
A Portable & Inter-operable Roof (Jonas Bendiksen)
Architecture Layers
To begin with, the basic process, data, application, and technology layers can be rearranged so as to make data orthogonal to the other layers.
Then, Model Driven Architecture (MDA) can be used as representative of model based systems engineering (MBSE) approaches:
Computation independent models (CIMs) describe organization and business processes independently of the role played by supporting systems.
Platform independent models (PIMs) describe the functionalities supported by systems independently of their implementation.
Platform specific models (PSMs) describe system components depending on implementation platforms.
With regard to database modeling, there is broad consensus on the conceptual, logical, and physical distinction:
Conceptual models describe enterprises' organization and business independently of supporting systems.
Logical models describe the symbolic objects managed by supporting systems as surrogates of business objects and activities.
Physical (nowadays digital) models describe the actual implementation of symbolic surrogates as binary objects.
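The two triads can be read on a single business object; a minimal sketch, with the Customer example and all names assumed for illustration:

```python
# One business object seen at the three levels (names are illustrative):

# Conceptual (CIM-like): business statement, independent of any system.
conceptual = "A customer is a party that has entered at least one agreement."

# Logical (PIM-like): symbolic surrogate managed by the system, platform-independent.
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str   # identity anchoring the surrogate to the business entity
    name: str

# Physical (PSM-like): implementation on a specific platform, e.g. a SQL schema.
physical = """
CREATE TABLE customer (
    customer_id VARCHAR(36) PRIMARY KEY,
    name        VARCHAR(120) NOT NULL
);
"""
```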
As for systems functional modeling, the Model-View-Controller (MVC) is arguably the leading pattern whatever the guises:
Model: shared, with a life-cycle independent of business processes. The continuity and consistency of business objects' representation must be guaranteed independently of the applications using them.
View: not shared, with a life-cycle set by user sessions. The continuity and consistency of representations is managed locally (interactions with external agents or devices), independently of targeted applications.
Controller: shared, with a life-cycle set by a business process. The continuity and consistency of representations is managed independently of the persistency of business objects and of interactions with external agents or devices.
Services can be introduced to represent shared functions with no life-cycle of their own.
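A minimal sketch of these life-cycles in code; the names and the check-out example are assumptions, not taken from any specific framework:

```python
class Model:
    """Shared; life-cycle independent of business processes."""
    def __init__(self):
        self.customers = {}          # persistent surrogates of business objects

class View:
    """Not shared; life-cycle bound to a user session."""
    def __init__(self, session_id: str):
        self.session_id = session_id
    def render(self, data: dict) -> str:
        return f"[session {self.session_id}] {data}"

class Controller:
    """Shared; life-cycle set by a business process."""
    def __init__(self, model: Model):
        self.model = model
    def check_out(self, customer_id: str, view: View) -> str:
        customer = self.model.customers.get(customer_id, {"id": customer_id})
        return view.render(customer)  # continuity managed apart from persistence

def tax_service(amount: float) -> float:
    """Service: shared function with no life-cycle of its own."""
    return amount * 1.2
```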
Architecture Engineering
Finally, the relationships between architecture blueprints and systems engineering can be derived from Philippe Kruchten's "4+1" View Model of Software Architecture:
Logical view: design of software artifacts.
Process view: execution of activities.
Deployment (aka physical) view: mapping of software components across physical environments (platforms).
Development (aka implementation) view: organization of software artifacts in development environments.
A fifth view is added for use cases, describing interactions between systems and environments.
At first, views at enterprise architecture level appear congruent with these, e.g. logical for data, process for applications, physical for technology. But labels may be misleading when applied indifferently to software architecture (views) and enterprise architecture (layers): contrary to views, which are solely defined by concerns independently of targeted structures, layers reflect definitive assumptions about architectures. That confusion between views and layers, illustrated by a miscellany of tabular and pyramidal representations, would be of little consequence if the modeling of changes were not a critical issue; for enterprise architecture, it is.
As it happens, the confusion can be worked out if views are redefined solely and comprehensively in terms of engineering:
Domains modeling: conceptual, logical, and physical.
Applications development: business logic, systems functions, programs.
Systems deployment: locations, processes, configurations.
Enterprise organization and operations.
That makes views congruent with engineering workshops, enabling a seamless integration of enterprise architecture blueprints with evolutionary processes.
For all intents and purposes, digital transformation has opened the door to syntactic interoperability… and thus raised the issue of the semantic one.
Cooked Semantics (Urs Fisher)
To put the issue in perspective, languages combine four levels of interpretation:
Syntax: how terms can be organized.
Lexical: meaning of terms independently of syntactic constructs.
Semantic: meaning of terms in syntactic constructs.
Pragmatic: semantics in context of use.
Languages' levels of interpretation
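A compact illustration of the four levels, using an assumed phrase ("book a table"); the readings given are illustrative, not drawn from any linguistic reference:

```python
# the four levels of interpretation applied to one phrase (illustrative only)
phrase = "book a table"

interpretation = {
    "syntax":    "verb + determiner + noun (a well-formed imperative)",
    "lexical":   "'book': to reserve, or a bound volume; 'table': furniture or data",
    "semantic":  "within this construct, 'book' can only mean 'reserve'",
    "pragmatic": "uttered to a restaurant system, it means: create a reservation",
}
for level, meaning in interpretation.items():
    print(f"{level:>9}: {meaning}")
```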
At first, semantic networks (aka conceptual graphs) appear to provide the answer; but that assumes flat ontologies (aka thesauruses) within which all semantics are defined at the same level. That would go against the objective of bringing the semantics of business domains and systems architectures under a single conceptual roof. The problem and a solution can be expounded taking users' stories and use cases as examples.
Crossing stories & cases
Besides the difference in perspectives, users' stories and use cases stand at a methodological crossroads, the former focused on natural language, the latter on modeling. Using ontologies to ensure semantic interoperability enhances both traceability and transparency while making room for their combination if and when called for.
Users' stories are part and parcel of the agile development model: its backbone, engine, and fuel. But as far as agile is concerned, users' stories introduce a dilemma: once told, stories are meant to be directly and iteratively put down in code; documenting them in words would bring back traditional requirements and phased development. Hence the benefits of sorting out and writing up the intrinsic elements of stories so as to ensure the continuity and consistency of engineering processes, whether directly to code or through the mediation of use cases.
To that end semantic interoperability would have to be achieved for actors, events, and activities.
Actors & Events
Whatever the architectures or modeling methodologies, actors and events sit on systems' fences, which calls for semantics common to enterprise organization and business processes on one side of the fence, and to supporting systems on the other.
Beginning with events, the distinction between external and internal ones is straightforward for use cases, because their purpose is precisely to describe the exchanges between systems and environments. Not so for users' stories: at their stage the part to be played by supporting systems is still undecided, and so, consequently, is the distinction between external and internal events.
With regard to actors, and to avoid any ambiguity, a semantic distinction could be maintained between roles, defined by organizations, and actors (in UML parlance), for roles as enacted by agents interacting with systems. While roles and actors are meant to converge with analysis, understandings may initially differ across the fence between users' stories and use cases, to be reconciled at the end of the day.
Representations should support the semantic distinctions as well as trace their convergence.
That would enable use cases and users' stories to share overlapping yet consistent semantics for primary actors and external events:
Across stories: actors contributing to different stories affected by the same events.
Along processes: use cases set for actors and events defined in stories.
Across time-frames: actors and events first introduced by use cases before being refined by "pre-sequel" users' stories.
Such ontology-based representations are to support full iterative as well as parallel developments independently of the type of methods, diagrams or documents used by projects.
Activities
Users' stories and use cases are set in different perspectives: business processes for the former, supporting systems for the latter. As already noted, their scopes overlap for events and actors, which can be defined upfront provided a double distinction is made between roles (enterprise view) and actors (systems view), and between external and internal events.
Activities raise more difficulties because they are meant to be defined and refined across the whole of engineering processes:
From business operations as described by users to business functions as conceived by stakeholders.
From business logic as defined in business processes to its realization as defined in sequence diagrams.
From functional requirements (e.g users' authentication or authorization) to quality of service.
From primitives dealing with integrity constraints to business policies managed through rules engines.
To begin with, if activities have to be consistently defined for both users' stories and use cases, their footprint should tally with the description of actors and events stipulated above; taking a leaf from Aristotle's rule of the three unities, activity units should therefore (as sketched below):
Be triggered by a single event initiated by a single primary actor.
Be located in a single physical space with all resources at hand.
Be timed by a single clock controlling access to all resources.
On that basis, the refinement of descriptions could proceed according to the nature of requirements: business (users' stories), or functional and quality of service (use cases).
Activities (execution units) should tally with roles, events, and locations.
Use cases wrap computation independent activities into transactions.
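A minimal sketch of such an execution unit; the rental names are assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class ActivityUnit:
    """An execution unit obeying the 'three unities' rule (names are illustrative)."""
    triggering_event: str    # a single event...
    primary_actor: str       # ...initiated by a single primary actor
    location: str            # a single physical space with all resources at hand
    clock: str               # a single clock controlling access to all resources

# a unit shared by a user story and the use case wrapping it into a transaction
check_out = ActivityUnit(
    triggering_event="customer returns vehicle",
    primary_actor="renter",
    location="rental desk",
    clock="desk terminal",
)
```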
As far as ontologies are concerned, the objective is to ensure the continuity and consistency of representations independently of modeling tools and methodologies. For activities appearing in users' stories and use cases, that would require:
The description of activities in relation with their business background, their execution in processes, and the corresponding functions already supported by systems.
The progressive refinement of roles (users, devices, other systems), location, and resources (objects or surrogates).
A unified definition of alternatives in stories (branches) and use cases (extension points).
The last point is of particular importance as it will determine how business and functional rules are to be defined and control implemented.
Knitting semantics: symbolic representations
The scope and complexity of semantic interoperability can be illustrated a contrario by a simple activity (checking out) described at different levels with different methods (process, use case, user story), possibly by different people at different times.
The Check-out activity is first introduced at business level (process), next a specific application is developed with agile (user story), and then extended for variants according to channels (use case).
Semantic interoperability between projects, domains, and methods.
Assuming unfettered naming (otherwise semantic interoperability would be a windfall), three parties can be mentioned under various monikers for renters, drivers, and customers.
In a flat semantic context renter could be defined as a subtype of customer, itself a subtype of party. But that option would contradict the neutrality objective as there is no reason to assume a modeling consensus across domains, methods, and time.
The ontological kernel defines parties and actors, as roles associated to agents (organization level).
Enterprises define customers as parties (business model).
Business units can define renters in reference to customers (business process) or directly as a subtype of role (user story).
The distinction between renters and drivers can be introduced upfront or with use cases’ actors.
That would ensure semantic interoperability across modeling paradigms and business domains, and along time and transformations.
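A minimal sketch of these kernel definitions, assuming the rdflib library; the namespace and terms are illustrative, not the actual CaKe kernel:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/kernel#")   # hypothetical namespace
g = Graph()

# ontological kernel (organization level): parties and actors as roles
g.add((EX.Party, RDFS.subClassOf, EX.Role))
g.add((EX.Actor, RDFS.subClassOf, EX.Role))
# enterprise (business model): customers defined as parties
g.add((EX.Customer, RDFS.subClassOf, EX.Party))
# business units: renters via customers (process) or directly as roles (story)
g.add((EX.Renter, RDFS.subClassOf, EX.Customer))
g.add((EX.Driver, RDFS.subClassOf, EX.Role))

# both definitions remain interoperable through the kernel:
print(set(g.transitive_objects(EX.Renter, RDFS.subClassOf)))
```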
Probing semantics: metonymies and metaphors
Once in-depth foundations are established, and assuming built-in basic logic and lexical operators, semantic interoperability can be carried out with two basic linguistic contraptions: metonymies and metaphors.
Metonymies and metaphors are linguistic constructs used to substitute a word (or a phrase) for another without altering its meaning, respectively through extensions and intensions, the former standing for the actual set of objects and behaviors, the latter for the set of features that characterize these instances.
Metonymy relies on contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, Washington DC, each term can be used in place of the others.
Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)
Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, with leftovers taken out of the picture.
Metaphor uses similarity to match descriptions
Compared to basic thesaurus operators for synonymy, antonymy, and homonymy, which are set at lexical level, metonymy and metaphor operate at conceptual level, the former using sets of instances (extensions) to probe semantics, the latter using descriptions (intensions).
Applied to users stories and use cases:
Metonymies: terms would be probed with regard to actual sets of objects, actors, events, and execution paths (data from operations) or mined from digital environments.
Metaphors: terms for stories, cases, actors, events, and activities would be probed with regard to the structure and behavior of associated descriptions (intensions).
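The two probes can be sketched as set operations on extensions and intensions; a hedged illustration, with instance sets and features assumed for the example:

```python
# probing semantics with extensions (metonymy) and intensions (metaphor);
# names, instance sets, and features are illustrative assumptions
def extension_overlap(ext_a: set, ext_b: set) -> float:
    """Metonymy-style probe: contiguity measured on actual instances."""
    return len(ext_a & ext_b) / len(ext_a | ext_b)   # Jaccard index

def intension_overlap(feat_a: set, feat_b: set) -> float:
    """Metaphor-style probe: similarity measured on shared features."""
    return len(feat_a & feat_b) / len(feat_a | feat_b)

# extensions: actual instances recorded by operations
renters   = {"p01", "p02", "p03"}
customers = {"p01", "p02", "p03", "p04"}
print(extension_overlap(renters, customers))   # 0.75: strong contiguity

# intensions: features of the associated descriptions
renter_features = {"identified", "holds agreement", "liable for vehicle"}
driver_features = {"identified", "holds licence", "liable for vehicle"}
print(intension_overlap(renter_features, driver_features))  # 0.5: partial
```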
Compared to the shallow one set at thesaurus level for terms, deep semantic interoperability encompasses all ontological dimensions, from actual instances to categories, aspects, and concepts. As such it can take full advantage of digital transformation and deep learning technologies.
Distinctions must serve a purpose and be assessed accordingly. On that account, what would be the point of setting apart data and information, and on what basis could that be done?
From Data Stripes to Information Structure (Victor Vasarely)
Until recently the two terms seem to have been used interchangeably; until, that is, the digital revolution. But the generalization of digital surroundings and the tumbling down of the traditional barriers surrounding enterprises have upturned the playground as well as the rules of the game.
Previously, with data analytics, information modeling, and knowledge management mostly carried out as separate threads, there wasn't much concern about semantic overlaps; no more. Lest they fall behind, enterprises have to combine observation (data), reasoning (information), and judgment (knowledge) as a continuous process. But such integration implies in return more transparency and traceability with regard to resources (e.g external or internal) and objectives (e.g operational or strategic); that's when a distinction between data and information becomes necessary.
Economics: Resources vs Assets
Understood as a whole or separately, there is little doubt that data and information have become a key success factor, calling for more selective and effective management schemes.
Being immersed in digital environments, enterprises first depend on accurate, reliable, and timely observations of their business surroundings. But in the new digital world the flows of data are so massive and so transient that mining meaningful and reliable pieces is in itself a decisive success factor. Next, assuming data flows are duly processed, part of the outcome has to be consolidated into models, to be managed on a persistent basis (e.g customer records or banking transactions), the rest being put on temporary shelves for customary uses, or immediately thrown away (e.g personal data subject to privacy regulations). Such a transition constitutes a pivotal inflexion point for systems architectures and governance, as it sorts out data resources with limited lifespans from information assets with strategic relevance. Not to mention the sensitivity of regulatory compliance to data management.
Processes: Operations vs Intelligence
Making sense of data is pointless without putting the resulting information to use, which in digital environments implies a tight integration of data and information processing. Yet, as already noted, tighter integration of processes calls for greater traceability and transparency, in particular with regard to the origin and scope: external (enterprise business and organization) or internal (systems). The purposes of data and information processing can be squared accordingly:
The top left corner is where business models and strategies are meant to be defined.
The top right corner corresponds to traditional data or information models derived from business objectives, organization, and requirement analysis.
The bottom row corresponds to analytic models for business (left) and operations (right).
Squaring the purposes of Data & Information Processing
That view illustrates the shift of paradigm induced by the digital transformation. Previously, most mappings would be set along straight lines:
Horizontally (same nature), e.g requirement analysis (a) or configuration management (b). With source and destination at the same level, the terms employed (data or information) have no practical consequence.
Vertically (same scope), e.g systems logical to physical models (c) or business intelligence (d). With source and destination set in the same semantic context the distinction (data or information) can be ignored.
The digital transformation makes room for diagonal transitions set across heterogeneous targets, e.g mapping data analytics with conceptual or logical models (e).
That double mix of levels and scopes constitutes the nexus of decision-making processes; their transparency is contingent on a conceptual distinction between data and information.
Data is used to align operations (systems) with observations (territories).
Information is used to align categories (maps) with objectives.
Roles of Data (red) & Information (blue) in integrated decision-making processes
Then, the conceptual distinction between data and information is instrumental for the integration of operational and strategic decision-making processes:
Data analytics feeding business intelligence.
Information modeling supporting operational assessment.
Not by chance, these distinctions can be aligned with architecture layers.
Architectures: Instances vs Categories
Blending data with information overlooks a difference of nature, the former being associated with actual instances (external observation or systems operations), the latter with symbolic descriptions (categories or types). That intrinsic difference can be aligned with architecture layers (resources are consumed, assets are managed), and decision-making processes (operations deal with instances, strategies with categories).
With regard to architectures, the relationship between instances (data) and categories (information) can be neatly aligned with capability layers, as represented by the Pagoda blueprint:
The platform layer deals with data reflecting observations (external facts) and actions (system operations).
The functional layer deals with information, i.e the symbolic representation of business and organization categories.
The business and organization layer defines the business and organization categories.
It must also be noted that setting apart what pertains to individual data, independently of the information managed by systems, clearly supports compliance with privacy regulations.
Architectures & Decision-making
With regard to decision-making processes, business intelligence uses the distinction to integrate levels, from operations to strategic planning, the former dealing with observations and operations (data), the latter with concepts and categories (information and knowledge).
Representations: Knowledge Architecture
As noted above, the distinction between data and information is a necessary counterpart of the integration of operational and intelligence processes; that in return implies bringing data, information, and knowledge under a common conceptual roof, respectively as resources, assets, and services:
Resources: data is captured through continuous and heterogeneous flows from a wide range of sources.
Assets: information is built by adding identity, structure, and semantics to data.
Services: knowledge is information put to use through decision-making.
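A toy sketch of that progression, with all names and the decision rule assumed for illustration:

```python
# from resource to asset to service (all names and rules are illustrative)

raw = {"ts": "2024-05-01T10:42", "sensor": "till-3", "payload": "p01;89.50"}  # data

def to_information(record: dict) -> dict:
    """Asset: add identity, structure, and semantics to data."""
    customer_id, amount = record["payload"].split(";")
    return {"customer": customer_id,            # identity
            "amount": float(amount),            # structure
            "kind": "check-out payment"}        # semantics

def to_knowledge(info: dict) -> str:
    """Service: information put to use through decision-making."""
    return "flag for loyalty offer" if info["amount"] > 50 else "no action"

info = to_information(raw)
print(to_knowledge(info))   # 'flag for loyalty offer'
```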
Ontologies, which are meant to encompass any and every kind of knowledge, are ideally suited for the management of whatever pertains to enterprise architecture: thesauruses, models, heuristics, etc.
That approach has been tested with the Caminao ontological kernel using OWL2; a beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe 2.0).
Conclusion: From Metadata to Machine Learning
The significance of the distinction between data and information shows up at the two ends of the spectrum:
On one hand, it straightens out the meaning of metadata, to be understood as attributes of observations independently of semantics, a dimension that plays a critical role in machine learning.
On the other hand, enshrining the distinction between what can be known of individual facts or phenomena and what can be abstracted into categories enables an open and dynamic knowledge management, also a critical requisite for machine learning.
Like every artifact, models can be defined by nature and function. With regard to nature, models are symbolic representations, descriptive (categories of actual instances) or prescriptive (blueprints of artifacts). With regard to function, models can be likened to currency, as they serve as means of exchange, instruments of measure, or repositories.
Along that understanding, models can be neatly characterized by their intent:
Without models, direct exchange (barter) can be achieved between business analysts and software engineers.
Models are needed as medium supporting exchange between organizational units with different business or technical concerns.
Models are used to assess contents with regard to size, complexity, quality, …
Models are kept and maintained for subsequent use or reuse.
Depending on organizations, providers and customers could then be identified, as well as modeling languages.
To take advantage of their immersion into digital environments enterprises have to differentiate between data (environment’s facts), information (systems’ representations), and knowledge (enterprise behavior).
Outside / Insight (Anna Hulacova)
That cannot be achieved without ironing out the semantic discrepancies between corresponding representations.
Symbolic Representations
According to the Symbolic System modeling paradigm, the aim of computer systems is to manage the symbolic representations of business objects and processes pertaining to enterprises' contexts and concerns. That view can be summarized in terms of maps and territories:
Maps and territories of systems and their environment
Behind the various labels and modus operandi, maps can be defined on three basic layers:
Conceptual models, targeting enterprises organization and business independently of supporting systems.
Logical models, targeting the symbolic objects managed by supporting systems as surrogates of business objects and activities.
Physical models, targeting the actual implementation of symbolic surrogates as binary objects.
The Pagoda Architecture Blueprint is derived from Zachman's framework
These maps can be aligned with commonly agreed enterprise architecture layers, respectively for organizations and processes, systems functionalities, and platforms, with a fourth added for analytical models of business environments.
Conceptual Debt
Ideally, that alignment should pave the way to the integration of systems and knowledge architectures, as represented by the Pagoda blueprint:
Insofar as systems engineering is concerned, that would require two kinds of transformations: from conceptual to logical models (aka analysis), and from logical to physical models (aka design).
While the latter is just a matter of expertise (thanks to the GoF), that's not the case for the former, which has to deal with a semantic gap between descriptions of specific and changing business domains and organizations on one side, and generic and stable systems architectures on the other.
As a result, what can be termed a conceptual debt has accumulated with the number of logical models supporting physical ones without the backing of relevant models for business or organization. The objective is therefore to bring all models into a broader knowledge architecture.
Models & Ontologies
As introduced by Greek philosophers, ontologies are systematic accounts of whatever is known about a domain of concern. From that point, three basic observations can be made:
Ontologies are made of categories of things, beings, or phenomena; as such they may range from lexicons or simple catalogs to philosophical doctrines.
Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
Ontologies are meant to be directed at specific domains of concerns, whatever their epistemic nature: engineering, business, politics, religions, mythologies, astrology, etc.
With regard to models, only the second observation puts ontologies apart: compared to models, ontologies are about understanding and are not necessarily driven by empirical purposes.
On that account ontologies appear as an option of choice for the integration of symbolic representations:
Data: instances identified at territory level, associated with terms or labels; they are mapped to business intelligence (environments) and operational (systems) models.
Information: categories associated with sets of instances; categories can be used for requirements analysis or software design.
Knowledge: ideas or concepts connect changing and overlapping sets of terms and categories; documents can be associated to any kind of item.
With models consistently mapped to ontologies, the conceptual debt could be restructured in the broader context of enterprise knowledge architecture.
Ontologies & Knowledge
As expounded by Davis, Shrobe, and Szolovits in their pivotal article "What Is a Knowledge Representation?", knowledge is made of five constituents:
Surrogates, used as symbolic counterparts of actual objects and phenomena.
Ontological commitments defining the categories of things that may exist in the domain under consideration.
Fragmentary theory of intelligent reasoning, defining what things can do or what can be done with them.
Medium making knowledge understandable by computers.
Medium making knowledge understandable by humans.
Points 1 and 5 are not concerned by the conceptual gap, the former being dealt with through the anchoring of identified individuals to surrogates (see below), the latter through human interfaces. That leaves points 2-4 as the conceptual hub where information models have to be integrated into knowledge architecture.
Assuming RDF (Resource Description Framework) graphs are used for knowledge representation (point 4), and taking a restaurant for example, the contents of information models (point 2) will be denoted by:
Primary nodes (rectangles), for elements specific to cooking and customer relationship management, to be decorated with features (bottom right).
Connection nodes (circles and arrows), for semantically neutral (aka syntactic) associations to be uniformly implemented across domains, e.g with predicate calculus (bottom left).
Semantic connectors supporting both syntactic and semantic associations (bottom, middle).
Inserting information into knowledge architecture
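Assuming RDF graphs as stated, here is a minimal sketch using the rdflib library; the restaurant terms and namespace are illustrative:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/restaurant#")   # hypothetical namespace
g = Graph()

# primary nodes: elements specific to cooking and customer relationships
g.add((EX.table12, RDF.type, EX.Table))
g.add((EX.res42, RDF.type, EX.Reservation))
g.add((EX.cust7, RDF.type, EX.Customer))

# features decorating primary nodes
g.add((EX.res42, EX.partySize, Literal(4)))

# connection nodes: semantically neutral associations, uniform across domains
g.add((EX.res42, EX.holds, EX.table12))
g.add((EX.res42, EX.bookedBy, EX.cust7))

# semantic connector: a domain-specific association carried by a neutral construct
g.add((EX.bookedBy, RDFS.subPropertyOf, EX.associatedWith))

print(g.serialize(format="turtle"))
```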
Using ontologies to integrate models into knowledge architecture is to enable the restructuring of the conceptual debt.
Minding Semantic Gaps
Keeping with the financial metaphor, conceptual debts can be expressed in terms of spreads between models, and as such could be restructured through models transformation.
To begin with, all representations have to be anchored to environments through identified (#) instances.
Anchoring systems to their environment
Then, instances are to be associated to categories according to features (properties or relationships):
Customers, reservations, tables, and waiters are identified individuals managed through symbolic surrogates.
Names of dishes and ingredients do not refer to symbolic surrogates representing business objects, but are just labels pointing to recipes (documents).
Idem for the names of wines, except for exceptional vintages with identified bottles to be managed through symbolic surrogates.
Aspects are structured sets of features meant to be valued through category instances.
Documents are contents to be accessed directly or through networks (e.g preparations or wine reviews).
Fleshing out the model backbone with features, relationships, and documents
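A small sketch of the distinction between identified surrogates and mere labels; all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Surrogate:
    """Symbolic counterpart of an identified (#) business individual."""
    uid: str    # identity anchoring the representation to the territory

# identified individuals managed through symbolic surrogates
customer = Surrogate(uid="#cust-007")
table    = Surrogate(uid="#table-12")

# names of dishes are mere labels pointing to recipes (documents), no surrogate
dishes = {"boeuf bourguignon": "docs/recipes/bb.pdf"}

# wines: labels by default, surrogates only for identified exceptional bottles
wines = {"house red": None,
         "vintage 1945": Surrogate(uid="#bottle-1")}
```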
It must be noted that the distinction between neutral and specific contents is not meant to be universal but to be justified by pragmatic concerns, for instance:
Addresses are not defined as aspects but as category instances so that surrogates of actual addresses can be used to optimize deliveries.
Links to customers and addresses, being self-explanatory, can be defined as non specific.
The relationship from dishes to ingredients is structured and specific.
Sorting out truth-preserving constructs from domain specific ones is a key success factor for model transformations, and consequently for debt restructuring.
Restructuring The Debts
Restructuring financial debts means redefining assets and incomes; with regard to systems it would mean reassessing architectures with regard to value chains.
To begin with, the Pagoda blueprint's central pillar is to support the integration of systems and knowledge architectures, and consequently the dynamic alignment of systems capabilities, meant to be stable and shared, with business opportunities, by nature changing and specific.
Then, the pairing of systems and knowledge architectures, like a DNA double helix, is to be used to restructure both technical and conceptual debts.
Pairing assets and incomes across architectures
With regard to technical debts, restructuring shouldn't present significant difficulties:
Pairing income flows (applications) with tangible assets (platforms) can be done at data level.
Model transformations between data (code) and information (models) levels can be achieved using homogeneous domain-specific and programming languages.
Things are more complex with conceptual debts, for pairing as well as transformations:
There is no direct pairing because value chains (processes) are set across assets (organization).
Model transformations are to bridge the semantic gap between the symbolic representations of environments (knowledge) and systems (information).
Nonetheless, these difficulties can be overcome by combining integrated architectures and ontologies.
Regarding the structure of the conceptual debt, the income part is to be defined through business objectives (customers, products, channels, supply chain, etc.), and the assets by the corresponding enterprise architecture capabilities.
How to mind the gap between external and systems representations.
Regarding model transformations, ontologies will be used to mind the semantic gap between environments (knowledge) and systems (information) representations:
Power-types: describe instances of categories (age, income, education, …).
Specialization and generalization: defined with regard to intent, subsets for individuals (wine, gender), sub-types for aspects (temperature, serve in menu).
Knowledge-based relationships (dashed line): used to describe objects and phenomena, actual, planned, or expected (face recognition of customers, influence of weather on dishes, association of wines and dishes, …).
Concepts: introduced to relate information and knowledge: gourmet.
Ontological descriptions
With the backbones of symbolic representations soundly anchored to environments, it would be possible to complement functional and logical models with their conceptual counterparts and, by doing so, to eliminate conceptual debts. A symmetric policy could be applied to refactoring in order to redeem the technical debt associated with legacy code.
Managing Conceptual Debt
Like financial ones, conceptual debts are facts of life that have to be managed on a continuous basis. That can be achieved using Open Concepts to span the gap between the conceptual representation of business environments, objectives, and activities (a), and models of functional and technical architectures (b). Ontologies (c) could then leverage a seamless integration of conceptual (CD) and technical (TD) debts and consequently further in-depth digital transformation.
Using Open Concepts to consolidate technical and conceptual debts
That would ensure:
A separate management of models directly tied to systems, and ontologies with broader justification.
A distinction between a kernel (aka knowledge engine), environment profiles, and business domains.
UML Actors (aka Roles) are meant to provide a twofold description of interactions between systems and their environment: organization and business process on one hand, system and applications on the other hand.
That can only be achieved by maintaining a conceptual distinction between actual agents, able to physically interact with systems, and actors (aka roles), which are their symbolic avatars as perceived by applications.
As far as the purpose is to describe interactions, actors should be primarily characterized by the nature of language (symbolic or not) and identification coupling (biological or managed):
Symbolic communication, no biological identification (systems)
Analog communication, no biological identification (active devices or equipments)
Analog communication, biological identification (live organisms)
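The twofold characterization can be sketched as a simple classification; the labels are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class Communication(Enum):
    SYMBOLIC = "symbolic"
    ANALOG = "analog"

@dataclass
class ActorKind:
    communication: Communication
    biological_id: bool     # identification coupled to a live organism?

# the three cases listed above
system   = ActorKind(Communication.SYMBOLIC, biological_id=False)
device   = ActorKind(Communication.ANALOG,   biological_id=False)
organism = ActorKind(Communication.ANALOG,   biological_id=True)
```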
While there has been some confusion between actors (or roles) and agents, a clear-cut distinction is now a necessity due to the centrality of privacy issues, whether from a business or a regulatory perspective.
States are used to describe relevant aspects in contexts and how the changes are to affect systems representations and behaviors.
On that account, events and states are complementary: the former are to notify relevant changes, the latter are to represent the partitions that make these changes relevant. Transitions are used to map the causes and effects of changes across four kinds of states:
State of physical objects.
State of processes’ execution.
State of actors’ expectations.
State of symbolic representations.
Besides alignment with events, defining states consistently across objects, processes, and actors is to significantly enhance the traceability and transparency of architecture designs.
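A minimal sketch of that complementarity, using an assumed rental example:

```python
# events notify changes; states partition what makes them relevant;
# transitions map causes to effects (names are illustrative)
TRANSITIONS = {
    # (current state, event) -> new state
    ("vehicle available", "rental agreed"):    "vehicle rented",
    ("vehicle rented",    "vehicle returned"): "vehicle available",
}

def on_event(state: str, event: str) -> str:
    # an event is relevant only if it crosses a partition of the state space
    return TRANSITIONS.get((state, event), state)

state = "vehicle available"
state = on_event(state, "rental agreed")     # -> 'vehicle rented'
state = on_event(state, "engine started")    # irrelevant here: state unchanged
print(state)
```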
Use cases and users’ stories being the two major approaches to requirements, outlining their respective scope and purpose should put projects on a sound basis.
Cases & Stories
To that end requirements should be neatly classified with regard to scope (enterprise or system) and level (architectures or processes).
Users' stories are set at enterprise level independently of the part played by supporting systems.
Use cases cut across users stories and consider only the part played by supporting systems.
Business stories put users stories (and therefore processes) into the broader perspective of business models.
Business cases put use cases (and therefore applications) into the broader perspective of systems capabilities.
Position on that simple grid should then be used to identify stakeholders and pick between engineering models, agile or phased.
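A sketch merely encoding that grid; the agile-versus-phased mapping is added as an assumption for illustration, not prescribed by the text:

```python
# scope x level grid for requirements (classification from the list above)
GRID = {
    ("enterprise", "process"):      "users' story",
    ("system",     "process"):      "use case",
    ("enterprise", "architecture"): "business story",
    ("system",     "architecture"): "business case",
}

def engineering_model(scope: str, level: str) -> str:
    kind = GRID[(scope, level)]
    # assumed mapping: process-level items proceed with agile; architecture-level
    # ones cut across shared assets and call for phased coordination
    return f"{kind}: " + ("agile" if level == "process" else "phased")

print(engineering_model("enterprise", "process"))   # users' story: agile
```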
“The little reed, bending to the force of the wind, soon stood upright again when the storm had passed over”
Aesop
Resilience and adaptability to changing environments (Masao Ido)
Preamble
The consequences of digital environments go well beyond a simple adjustment of business processes and call for an in-depth transformation of enterprise architectures.
To begin with, the generalization of digital environments bears out the Symbolic System modeling paradigm: to stay competitive, enterprises have to manage a relevant, accurate, and up-to-date symbolic representation of their business context and concerns.
With regard to architectures, it means a seamless integration of systems and knowledge architectures.
With regard to processes it means a built-in ability to learn from environments and act accordingly.
Such requirements for resilience and adaptability in unsettled environments are characteristic of the Pagoda architecture blueprint.
Pagoda Blueprint
As can be observed wherever high buildings are erected on shaky ground, Pagoda-like architectures set successive layers around a central pillar, providing intrinsic strength and resilience to external upsets while allowing the floors to move with the whole or be modified independently. Applied to enterprise architectures in digital environments, that blueprint can be much more than metaphoric.
The actual relevance of the Pagoda blueprint is best understood when the triad of data, information, and knowledge is set across the platform, system, and organization layers:
The Pagoda Architecture Blueprint is derived from Zachman's framework
That blueprint sheds new light on model based approaches to systems engineering (MBSE):
Conceptual models, targeting enterprises organization and business independently of supporting systems.
Logical models, targeting the symbolic objects managed by supporting systems as surrogates of business objects and activities.
Physical models, targeting the actual implementation of symbolic surrogates as binary objects.
Pagoda Blueprint & Digital Environments
The Pagoda blueprint gets a new relevance in the context of digital transformation.
Moreover, the blueprint is not limited to enterprise architectures and can be applied to every kind of systems:
Devices associated with physical platforms supporting analog communication through the Internet of Things (a).
Equipment associated with physical platforms controlled by systems, supporting digital communication (b) and functional alignment (c).
Beside enterprise architectures, the Pagoda blueprint can be applied to equipment, systems, or devices.
That would greatly enhance the traceability and transparency of transformations induced by the immersion of enterprises in digital environments.
Systems & Knowledge Architectures
If digitized business flows are to pervade enterprise systems and feed business intelligence (BI), systems and knowledge architectures are to be merged into a single nervous system as materialized by the Pagoda central pillar:
Business Intelligence and Decision-making
Ubiquitous, massive, and unrelenting digitized business flows cannot be dealt with unless a clear distinction is maintained between raw data acquired across platforms and the information (previously data) models which ensure the continuity and consistency of systems.
Once structured and refined, business data flows must be blended with information models sustaining systems functionalities.
A comprehensive and business driven integration of organization and knowledge could then support strategic and operational decision-making at enterprise level.
Rounding off this nervous system with a brain, ontologies would provide the conceptual frame for models representing enterprises and their environments.
Agile Architectures & Homeostasis
Homeostasis is the ability of a viable organism to learn from its environment and adapt its behavior and structure to changes. As such, homeostasis can be understood as the extension of enterprise agility to digital environments, ensuring:
Integrated decision-making processes across concerns (business, systems, platforms), and time-frames (tactical, operational, strategic, … ).
Integrated information processing, from data-mining to knowledge management.
To that end, changes should be differentiated with regard to source (business or technology environment, organization, systems) and flows (data, information, knowledge); that can be achieved with the Pagoda blueprint.
Resilience and adaptability to changes
Threads of operational and strategic decision-making processes could then be woven together, combining OODA loops at process level and economic intelligence at enterprise level.