Semantic Interoperability: Stories & Cases

Preamble

For all intents and purposes, digital transformation has opened the door to syntactic interoperability… and thus raised the issue of the semantic one.

Cooked Semantics (Urs Fisher)

To put the issue in perspective, languages combine four levels of interpretation:

  • Syntax: how terms can be organized.
  • Lexical: meaning of terms independently of syntactic constructs.
  • Semantic: meaning of terms in syntactic constructs.
  • Pragmatic: semantics in context of use.
Languages’ levels of interpretation

At first, semantic networks (aka conceptual graphs) appear to provide the answer; but that’s assuming flat ontologies (aka thesauruses) within which all semantics are defined at the same level. That would go against the objective of bringing the semantics of business domains and systems architectures under a single conceptual roof. The problem and a solution can be expounded by taking users’ stories and use cases as examples.

Crossing stories & cases

Besides the difference in perspectives, users’ stories and use cases stand at a methodological crossroads, the former focused on natural language, the latter on modeling. Using ontologies to ensure semantic interoperability enhances both traceability and transparency, while making room for their combination if and when called for.

Set at the inception of software engineering processes, users’ stories and use cases mark an inflexion point between business requirements and supporting systems functionalities: where and when are determined (a) the nature of interfaces between business processes and systems components, and (b) how to proceed with development models, iterative or model-based.

Users’ stories are part and parcel of the Agile development model, its backbone, engine, and fuel. But as far as Agile is concerned, users’ stories introduce a dilemma: once told, stories are meant to be directly and iteratively put down in code; documenting them in words would bring back traditional requirements and phased development. Hence the benefits of sorting out and writing up the intrinsic elements of stories so as to ensure the continuity and consistency of engineering processes, whether directly to code, or through the mediation of use cases.

To that end semantic interoperability would have to be achieved for actors, events, and activities.

Actors & Events

Whatever the architectures or modeling methodologies, actors and events sit on systems’ fences, which calls for semantics common to enterprise organization and business processes on one side of the fence, and to supporting systems on the other.

To begin with events, the distinction between external and internal ones is straightforward for use cases, because their purpose is precisely to describe the exchanges between systems and environments. Not so for users’ stories: at their stage the part to be played by supporting systems is still undecided, and so, by consequence, is the distinction between external and internal events.

With regard to actors, and to avoid any ambiguity, a semantic distinction could be maintained between roles, defined by organizations, and actors (UML parlance), for roles as enacted by agents interacting with systems. While roles and actors are meant to converge with analysis, understandings may initially differ across the fence between users’ stories and use cases, to be reconciled at the end of the day.

Representations should support the semantic distinctions as well as trace their convergence.

That would enable use cases and users’ stories to share overlapping yet consistent semantics for primary actors and external events:

  • Across stories: actors contributing to different stories affected by the same events.
  • Along processes: use cases set for actors and events defined in stories.
  • Across time-frames: actors and events first introduced by use cases before being refined by “pre-sequel” users’ stories.

Such ontology-based representations are to support full iterative as well as parallel developments independently of the type of methods, diagrams or documents used by projects.

Activities

Users’ stories and use cases are set in different perspectives, business processes for the former, supporting systems for the latter. As already noted, their scopes overlap for events and actors, which can be defined upfront provided a double distinction is made between roles (enterprise view) and actors (systems view), and between external and internal events.

Activities raise more difficulties because they are meant to be defined and refined across the whole of engineering processes:

  • From business operations as described by users to business functions as conceived by stakeholders.
  • From business logic as defined in business processes to their realization as defined in diagram sequences.
  • From functional requirements (e.g users’ authentication or authorization) to quality of service.
  • From primitives dealing with integrity constraints to business policies managed through rules engines.

To begin with, if activities are to be consistently defined for both users’ stories and use cases, their footprint should tally with the description of actors and events stipulated above; taking a leaf from Aristotle’s rule of three unities, activity units should therefore (see the sketch after this list):

  • Be triggered by a single event initiated by a single primary actor.
  • Be located in a single physical space with all resources at hand.
  • Be timed by a single clock controlling access to all resources.
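
A minimal sketch of those constraints in Python; the names (ActivityUnit, Event, Actor) are illustrative, not part of any cited method:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str

@dataclass(frozen=True)
class Actor:
    role: str  # the organizational role enacted by the agent

@dataclass(frozen=True)
class ActivityUnit:
    """An execution unit abiding by the rule of three unities:
    one triggering event, one primary actor, one location, one clock."""
    trigger: Event    # single event initiated by a single primary actor
    actor: Actor      # single primary actor
    location: str     # single physical space with all resources at hand
    clock: str        # single clock controlling access to all resources

checkout = ActivityUnit(
    trigger=Event("customer ready to pay"),
    actor=Actor("cashier"),
    location="front desk",
    clock="store local time",
)
```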

On that basis, the refinement of descriptions could go according to the nature of requirements: business (users’ stories), or functional and quality of service (use cases).

Activities (execution units) should tally with roles, events, and locations.
Use cases wrap computation independent activities into transactions.

As far as ontologies are concerned, the objective is to ensure the continuity and consistency of representations independently of modeling tools and methodologies. For activities appearing in users’ stories and use cases, that would require:

  • The description of activities in relation with their business background, their execution in processes, and the corresponding functions already supported by systems.
  • The progressive refinement of roles (users, devices, other systems), location, and resources (objects or surrogates).
  • A unified definition of alternatives in stories (branches) and use cases (extension points).

The last point is of particular importance as it will determine how business and functional rules are to be defined and control implemented.

Knitting semantics: symbolic representations

The scope and complexity of semantic interoperability can be illustrated a contrario by a simple activity (checking out) described at different levels with different methods (process, use case, user story), possibly by different people at different times.

The Check-out activity is first introduced at business level (process), next a specific application is developed with Agile (user story), and then extended for variants according to channels (use case).

Semantic interoperability between projects, domains, and methods.

Assuming unfettered naming (otherwise semantic interoperability would be a windfall), three parties can be mentioned under various monikers for renters, drivers, and customers.

In a flat semantic context renter could be defined as a subtype of customer, itself a subtype of party. But that option would contradict the neutrality objective, as there is no reason to assume a modeling consensus across domains, methods, and time. Definitions are layered instead (a code sketch follows the list):

  • The ontological kernel defines parties and actors, as roles associated to agents (organization level).
  • Enterprises define customers as parties (business model).
  • Business units can define renters in reference to customers (business process) or directly as a subtype of role (user story).
  • The distinction between renters and drivers can be introduced upfront or with use cases’ actors.

That would ensure semantic interoperability across modeling paradigms and business domains, and along time and transformations.

Probing semantics: metonymies and metaphors

Once in-depth foundations are established, and assuming built-in basic logic and lexical operators, semantic interoperability is to be carried out with two basic linguistic contraptions: metonymies and metaphors.

Metonymies and metaphors are linguistic constructs used to substitute one word (or phrase) for another without altering its meaning, respectively through extensions and intensions: the former stand for the actual sets of objects and behaviors, the latter for the sets of features that characterize these instances.

Metonymy relies on contiguity to substitute target terms for source ones, contiguity being defined with regard to their respective extensions. For instance, given that US Presidents reside at the White House, Washington DC, each term can be used in place of the others.

Metonymy uses physical or functional proximity (full line) to match extensions (dashed line)

Metaphor uses similarity to substitute target terms for source ones, similarity being defined with regard to a shared subset of features, with leftovers taken out of the picture.

Metaphor uses similarity to match descriptions

Compared to basic thesaurus operators for synonymy, antonymy, and homonymy, which are set at lexical level, metonymy and metaphor operate at conceptual level, the former using sets of instances (extensions) to probe semantics, the latter using descriptions (intensions).
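
A toy Python sketch of the two probes, with made-up extensions and intensions; metonymy compares sets of instances, metaphor compares shared features:

```python
# Extensions: actual sets of instances; intensions: sets of characterizing features.
extension = {
    "US President": {"Lincoln", "Roosevelt", "Kennedy"},
    "White House resident": {"Lincoln", "Roosevelt", "Kennedy"},
}
intension = {
    "renter": {"identity", "driving licence", "payment"},
    "customer": {"identity", "payment"},
}

def metonymy(a: str, b: str) -> bool:
    """Contiguity: terms can substitute if their extensions coincide."""
    return extension[a] == extension[b]

def metaphor(a: str, b: str, threshold: float = 0.5) -> bool:
    """Similarity: terms can substitute if they share enough features,
    with leftovers taken out of the picture."""
    fa, fb = intension[a], intension[b]
    return len(fa & fb) / len(fa | fb) >= threshold

print(metonymy("US President", "White House resident"))  # True
print(metaphor("renter", "customer"))                    # True (2 shared / 3 overall)
```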

Applied to users’ stories and use cases:

  • Metonymies: terms would be probed with regard to actual sets of objects, actors, events, and execution paths (data from operations) or mined from digital environments.
  • Metaphors: terms for stories, cases, actors, events, and activities would be probed with regard to the structure and behavior of associated descriptions (intensions).

Compared to the shallow interoperability set at thesaurus level for terms, deep semantic interoperability encompasses all ontological dimensions, from actual instances to categories, aspects, and concepts. As such it can take full advantage of digital transformation and deep learning technologies.

Further Reading

Digital Strategy

Preamble

The digital transformation induces fundamental changes in the exchanges between enterprises and their environment.

To begin with, their immersion into digital environments means that the traditional fences surrounding their IT systems are losing their relevance, being bypassed by massive data flows to be processed without delay.

Then, the induced osmosis upturns the competition playground and compels drastic changes in governance: lest they fall behind, enterprises have to redefine their organization, systems, and processes.

Strategies are meant to draw passageways between current circumstances and conjectured horizons (Kader Attia)

New playground

Strategic thinking is first and foremost making differences with regard to markets, resources and assets, and time-frames. But what makes the digital revolution so disruptive is that it resets the ways differences are made:

  • Markets: the traditional distinctions between products and services are all but forgotten.
  • Resources and assets: with software, smart or otherwise, now tightly mixed in products fabric, and business processes now driven by knowledge, intangible assets are taking the lead on conventional ones.
  • Time-frames: strategies have long been defined as a combination of anticipations, objectives, and policies whose scope extends beyond managed horizons. But digital osmosis and the ironing out of markets’ and assets’ traditional boundaries are dissolving the milestones used to draw horizon perspectives.

New perspectives

To overcome these challenges, enterprise strategies should focus on four pillars:

  • Governance: the immersion of enterprises in digital environments and the crumbling of traditional fences require in-depth changes in the assessment of enterprise capability and maturity, putting the focus on the ability to change.
  • Data and Information: massive and continuous inflows of data call for a seamless integration of data analytics (perception), information models (reasoning), and knowledge (decision-making).
  • Security & Confidentiality: new regulatory environments and costs of privacy breaches call for a clear distinction between data tied to identified individuals and information associated to designed categories.
  • Innovation: digital environments induce a new order of magnitude for the pace of technological change. Making opportunities from changes can only be achieved through collaboration mechanisms harnessing enterprise knowledge management to environments’ intakes.

FURTHER READING

Focus: Data vs Information

Preamble

Distinctions must serve a purpose and be assessed accordingly. On that account, what would be the point of setting apart data and information, and on what basis could that be done?


From Data Stripes to Information Structure (Victor Vasarely)

Until recently the two terms seem to have been used interchangeably; until, that is, the digital revolution. But the generalization of digital surroundings and the tumbling down of traditional barriers surrounding enterprises have upturned the playground as well as the rules of the game.

Previously, with data analytics, information modeling, and knowledge management mostly carried out as separate threads, there wasn’t much concern about semantic overlaps; no more. Lest they fall behind, enterprises have to combine observation (data), reasoning (information), and judgment (knowledge) as a continuous process. But such integration implies in return more transparency and traceability with regard to resources (e.g external or internal) and objectives (e.g operational or strategic); that’s when a distinction between data and information becomes necessary.

Economics: Resources vs Assets

Understood as a whole or separately, there is little doubt that data and information have become a key success factor, calling for more selective and effective management schemes.

Being immersed in digital environments, enterprises first depend on accurate, reliable, and timely observations of their business surroundings. But in the new digital world the flows of data are so massive and so transient that mining meaningful and reliable pieces is by itself a decisive success factor. Next, assuming data flows duly processed, part of the outcome has to be consolidated into models, to be managed on a persistent basis (e.g customer records or banking transactions), the rest being put on temporary shelves for customary uses, or immediately thrown away (e.g personal data subject to privacy regulations). Such a transition constitutes a pivotal inflexion point for systems architectures and governance as it sorts out data resources with limited lifespan from information assets with strategic relevance. Not to mention the sensitivity of regulatory compliance to data management.
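
The transition can be pictured as a triage step; a schematic Python sketch under assumed record fields (category, personal, consent, observed), none of which comes from the text:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # illustrative shelf life for raw data

def triage(records: list[dict], now: datetime):
    """Sort transient data resources from persistent information assets."""
    assets, shelved, discarded = [], [], []
    for r in records:
        if r.get("personal") and not r.get("consent"):
            discarded.append(r)      # e.g. personal data subject to privacy rules
        elif r.get("category"):      # consolidated under a managed category
            assets.append(r)         # e.g. customer records, banking transactions
        elif now - r["observed"] < RETENTION:
            shelved.append(r)        # temporary shelves for customary uses
        else:
            discarded.append(r)
    return assets, shelved, discarded

now = datetime(2018, 6, 1)
records = [
    {"id": 1, "category": "Customer", "observed": now},
    {"id": 2, "personal": True, "observed": now},
    {"id": 3, "observed": now - timedelta(days=90)},
]
print(triage(records, now))  # ([record 1], [], [records 2 and 3])
```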

Processes: Operations vs Intelligence

Making sense of data is pointless without putting the resulting information to use, which in digital environments implies a tight integration of data and information processing. Yet, as already noted, tighter integration of processes calls for greater traceability and transparency, in particular with regard to the origin and scope: external (enterprise business and organization) or internal (systems). The purposes of data and information processing can be squared accordingly:

  • The top left corner is where business models and strategies are meant to be defined.
  • The top right corner corresponds to traditional data or information models derived from business objectives, organization, and requirement analysis.
  • The bottom line corresponds to analytic models for business (left) and operations (right).

Squaring the purposes of Data & Information Processing

That view illustrates the shift of paradigm induced by the digital transformation. Previously, most mappings would be set along straight lines:

  • Horizontally (same nature), e.g requirement analysis (a) or configuration management (b). With source and destination at the same level, the terms employed (data or information) have no practical consequence.
  • Vertically (same scope), e.g systems logical to physical models (c) or business intelligence (d). With source and destination set in the same semantic context the distinction (data or information) can be ignored.

The digital transformation makes room for diagonal transitions set across heterogeneous targets, e.g mapping data analytics with conceptual or logical models (e).

That double mix of levels and scopes constitutes the nexus of decision-making processes; their transparency is contingent on a conceptual distinction between data and information.

At operational level the benefits of the distinction are best expressed through what is commonly known as the OODA (Observation, Orientation, Decision, Action) loop (a toy sketch follows the list):

  • Data is used to align operations (systems) with observations (territories).
  • Information is used to align categories (maps) with objectives.
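
A toy rendering of one pass of the loop; the thresholds and policies are invented for illustration:

```python
def observe(territory: dict) -> dict:
    """Data: raw facts aligned with observations (territories)."""
    return {"demand": territory["orders"]}

def orient(data: dict, maps: dict) -> str:
    """Information: facts framed by categories (maps)."""
    return "high" if data["demand"] > maps["demand_threshold"] else "low"

def decide(category: str, objectives: dict) -> str:
    """Align categories with objectives."""
    return objectives[category]

def act(decision: str) -> str:
    """Align operations (systems) accordingly."""
    return f"operations adjusted: {decision}"

territory = {"orders": 120}
maps = {"demand_threshold": 100}
objectives = {"high": "scale up", "low": "hold steady"}
print(act(decide(orient(observe(territory), maps), objectives)))
```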

Roles of Data (red) & Information (blue) in integrated decision-making processes

Then, the conceptual distinction between data and information is instrumental for the integration of operational and strategic decision-making processes:

  • Data analytics feeding business intelligence.
  • Information modeling supporting operational assessment.

Not by chance, these distinctions can be aligned with architecture layers.

Architectures: Instances vs Categories

Blending data with information overlooks a difference of nature, the former being associated with actual instances (external observation or systems operations), the latter with symbolic descriptions (categories or types). That intrinsic difference can be aligned with architecture layers (resources are consumed, assets are managed), and decision-making processes (operations deal with instances, strategies with categories).

With regard to architectures, the relationship between instances (data) and categories (information) can be neatly aligned with capability layers, as represented by the Pagoda blueprint:

  • The platform layer deals with data reflecting observations (external facts) and actions (system operations).
  • The functional layer deals with information, i.e the symbolic representation of business and organization categories.
  • The business and organization layer defines the business and organization categories.

It must also be noted that setting apart what pertains to individual data independently of the information managed by systems clearly props up compliance with privacy regulations.


Architectures & Decision-making

With regard to decision-making processes, business intelligence uses the distinction to integrate levels, from operations to strategic planning, the former dealing with observations and operations (data), the latter with concepts and categories (information and knowledge).

Representations: Knowledge Architecture

As noted above, the distinction between data and information is a necessary counterpart of the integration of operational and intelligence processes; that implies, in return, bringing data, information, and knowledge under a common conceptual roof, respectively as resources, assets, and services (a schematic sketch follows the list):

  1. Resources: data is captured through continuous and heterogeneous flows from a wide range of sources.
  2. Assets: information is built by adding identity, structure, and semantics to data.
  3. Services: knowledge is information put to use through decision-making.
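
A minimal sketch of the three tiers in Python; the field names and decision rule are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Information:
    """Information = data plus identity, structure, and semantics."""
    identity: str    # who or what the record is about
    category: str    # the managed category it falls under
    payload: dict    # features valued for that category

def to_information(raw: dict) -> Information:
    """Resources: raw data captured from heterogeneous flows."""
    return Information(identity=raw["id"], category="Customer",
                       payload={"name": raw["name"]})

def to_knowledge(info: Information, decide) -> str:
    """Services: knowledge is information put to use through decision-making."""
    return decide(info)

info = to_information({"id": "C42", "name": "Jane"})
print(to_knowledge(info, lambda i: f"offer loyalty plan to {i.payload['name']}"))
```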

Ontologies, which are meant to encompass any and every kind of knowledge, are ideally suited for the management of whatever pertains to enterprise architecture: thesauruses, models, heuristics, etc.


That approach has been tested with the Caminao ontological kernel using OWL2; a beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe 2.0).

Conclusion: From Metadata to Machine Learning

The significance of the distinction between data and information shows up at the two ends of the spectrum:

On one hand, it straightens the meaning of metadata, to be understood as attributes of observations independently of semantics, a dimension that plays a critical role in machine learning.

On the other hand, enshrining the distinction between what can be known of individual facts or phenomena and what can be abstracted into categories is to enable an open and dynamic knowledge management, also a critical requisite for machine learning.

FURTHER READING

External Links

Redeeming Conceptual Debts

Preamble

To take advantage of their immersion into digital environments enterprises have to differentiate between data (environment’s facts), information (systems’ representations), and knowledge (enterprise behavior).

Outside / Insight (Anna Hulacova)

That cannot be achieved without ironing out the semantic discrepancies between corresponding representations.

Symbolic Representations

In line with the Symbolic System modeling paradigm, the aim of computer systems is to manage the symbolic representations of business objects and processes pertaining to enterprises contexts and concerns. That view can be summarized in terms of maps and territories:

Maps and territories of systems and their environment

Behind the various labels and modus operandi, maps can be defined on three basic layers:

  • Conceptual models, targeting enterprises organization and business independently of supporting systems.
  • Logical models, targeting the symbolic objects managed by supporting systems as surrogates of business objects and activities.
  • Physical models, targeting the actual implementation of symbolic surrogates as binary objects.

The Pagoda Architecture Blueprint is derived from the Zachman’s framework

These maps can be aligned with commonly agreed enterprise architecture layers, respectively for organizations and processes, systems functionalities, and platforms, with a fourth added for analytical models of business environments.

Conceptual Debt

Ideally, that alignment should pave the way to the integration of systems and knowledge architectures, as represented by the Pagoda blueprint:

Insofar as systems engineering is concerned, that would require two kinds of transformations: from conceptual to logical models (aka analysis), and from logical to physical models (aka design).

While the latter is just a matter of expertise (thanks to the GoF), that’s not the case for the former, which has to deal with a semantic gap between descriptions of specific and changing business domains and organizations on one side, generic and stable systems architectures on the other side.

As a result, what can be termed a conceptual debt has accumulated as the number of logical models supporting physical ones has grown without the backing of relevant business or organization models. The objective is therefore to bring all models into a broader knowledge architecture.

Models & Ontologies

As introduced by Greek philosophers, ontologies are systematic accounts of whatever is known about a domain of concern. From that point, three basic observations can be made:

  1. Ontologies are made of categories of things, beings, or phenomena; as such they may range from lexicon or simple catalogs to philosophical doctrines.
  2. Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
  3. Ontologies are meant to be directed at specific domains of concerns, whatever their epistemic nature: engineering, business, politics, religions, mythologies, astrology, etc.

With regard to models, only the second observation puts ontologies apart: compared to models, ontologies are about understanding and are not necessarily driven by empirical purposes.

On that account ontologies appear as an option of choice for the integration of symbolic representations:

  • Data: instances identified at territory level, associated with terms or labels; they are mapped to business intelligence (environments) and operational (systems) models.
  • Information: categories associated with sets of instances; categories can be used for requirements analysis or software design.
  • Knowledge: ideas or concepts connect changing and overlapping sets of terms and categories; documents can be associated to any kind of item.

With models consistently mapped to ontologies, the conceptual debt could be restructured in the broader context of enterprise knowledge architecture.

Ontologies & Knowledge

As expounded by Davis, Shrobe, and Szolovits in their pivotal article, knowledge is made of five constituents:

  1. Surrogates, used as symbolic counterparts of actual objects and phenomena.
  2. Ontological commitments defining the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning defining what things can do or can be done with.
  4. Medium making knowledge understandable by computers.
  5. Medium making knowledge understandable by humans.

Points 1 and 5 are not concerned by the conceptual gap, the former being dealt with through the anchoring of identified individuals to surrogates (see below), the latter through human interfaces. That leaves points 2-4 as the conceptual hub where information models have to be integrated into knowledge architecture.

Assuming RDF (Resource Description Framework) graphs are used for knowledge representation (point 4), and taking a restaurant as example, the contents of information models (point 2) will be denoted by (a code sketch follows the figure caption):

  • Primary nodes (rectangles), for elements specific to cooking and customers relationship management, to be decorated with features (bottom right).
  • Connection nodes (circles and arrows), for semantically neutral (aka syntactic) associations to be uniformly implemented across domains, e.g with predicate calculus (bottom left).
  • Semantic connectors supporting both syntactic and semantic associations (bottom, middle). 

Inserting information into knowledge architecture
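
A minimal rdflib sketch of the three kinds of nodes; the restaurant namespace and terms are illustrative, not taken from the CaKe kernel:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

REST = Namespace("http://example.org/restaurant#")  # hypothetical domain namespace
g = Graph()

# Primary nodes: elements specific to cooking and customer relationships,
# decorated with features.
g.add((REST.Dish, RDF.type, RDFS.Class))
g.add((REST.Customer, RDF.type, RDFS.Class))
g.add((REST.boeufBourguignon, RDF.type, REST.Dish))
g.add((REST.boeufBourguignon, RDFS.label, Literal("Boeuf bourguignon")))

# Connection nodes: semantically neutral (syntactic) associations implemented
# uniformly across domains (here, plain rdf:type and rdfs:subClassOf).
g.add((REST.Starter, RDFS.subClassOf, REST.Dish))

# Semantic connector: a domain-specific association carrying its own semantics.
g.add((REST.orders, RDF.type, RDF.Property))
g.add((REST.orders, RDFS.domain, REST.Customer))
g.add((REST.orders, RDFS.range, REST.Dish))

print(g.serialize(format="turtle"))
```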

Using ontologies to integrate models into knowledge architecture is to enable the restructuring of the conceptual debt.

Minding Semantic Gaps

Keeping with the financial metaphor, conceptual debts can be expressed in terms of spreads between models, and as such could be restructured through models transformation.

To begin with, all representations have to be anchored to environments through identified (#) instances.


Anchoring systems to their environment

Then, instances are to be associated to categories according to features (properties or relationships):

  • Customers, reservations, tables, and waiters are identified individuals managed through symbolic surrogates.
  • Names of dishes and ingredients do not refer to symbolic surrogates representing business objects, but are just labels pointing to recipes (documents).
  • Idem for the names of wines, except for exceptional vintages with identified bottles to be managed through symbolic surrogates.

As defined above, these models can be equivalently expressed as ontologies (a sketch follows the figure caption below):

  • Properties are single-valued attributes.
  • Relationships define links between categories.
  • Aspects are structured sets of features meant to be valued through category instances.
  • Documents are contents to be accessed directly or through networks (e.g preparations or wine reviews).
Fleshing out the model backbone with features, relationships, and documents (black, italic)
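
Continuing the rdflib sketch, the distinction between identified surrogates and mere labels pointing to documents could look as follows; the identifiers are invented:

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

REST = Namespace("http://example.org/restaurant#")
g = Graph()

# Identified (#) individuals managed through symbolic surrogates:
g.add((REST.customer_016, RDF.type, REST.Customer))
g.add((REST.table_04, RDF.type, REST.Table))
g.add((REST.reservation_1123, RDF.type, REST.Reservation))

# Names of dishes are not surrogates of business objects, only labels
# pointing to recipes (documents):
g.add((REST.boeufBourguignon, RDFS.label, Literal("Boeuf bourguignon")))
g.add((REST.boeufBourguignon, RDFS.seeAlso,
       URIRef("http://example.org/recipes/boeuf-bourguignon")))

# Exceptional vintages get identified bottles, hence surrogates:
g.add((REST.bottle_1961_petrus, RDF.type, REST.VintageBottle))
```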

It must be noted that the distinction between neutral and specific contents is not meant to be universal, but to be justified by pragmatic concerns, for instance:

  • Addresses are not defined as aspects but as category instances so that surrogates of actual addresses can be used to optimize deliveries.
  • Links to customers and addresses, being self-explanatory, can be defined as non specific.
  • The relationship from dishes to ingredients is structured and specific.

Sorting out truth-preserving constructs from domain specific ones is a key success factor for models transformation, and consequently debt restructuring.

Restructuring The Debts

Restructuring financial debts means redefining assets and incomes; with regard to systems it would mean reassessing architectures with regard to value chains.

To begin with, the Pagoda blueprint’s central pillar is to support the integration of systems and knowledge architectures, and consequently the dynamic alignment of systems capabilities, meant to be stable and shared, with business opportunities, by nature changing and specific.

Then, the pairing of systems and knowledge architectures, like a DNA double helix, is to be used to restructure both technical and conceptual debts.


Pairing assets and incomes across architectures

With regard to technical debts, restructuring shouldn’t present significant difficulties:

  • Pairing income flows (applications) to tangible assets (platforms) can be done at data level.
  • Model transformations between data (code) and information (models) levels can be achieved using homogeneous domain specific and programming languages.

Things are more complex with conceptual debts, for pairing as well as transformations:

  • There is no direct pairing because value chains (processes) are set across assets (organization).
  • Model transformations are to bridge the semantic gap between the symbolic representations of environments (knowledge) and systems (information).

Nonetheless, these difficulties can be overcome by combining integrated architectures and ontologies.

Regarding the structure of the conceptual debt, the income part is to be defined through business objectives (customers, products, channels, supply chain, etc.), and assets to be defined by corresponding enterprise architectures capabilities.

How to mind the gap between external and systems representations.

Regarding models transformations, ontologies will be used to mind the semantic gap between environments (knowledge) and systems (information) representations (a sketch follows the figure caption):

  • Power-types: describe instances of categories (age, income, education, …).
  • Specialization and generalization: defined with regard to intent, subsets for individuals (wine, gender), sub-types for aspects (temperature, serve in menu).
  • Knowledge-based relationships (dashed line): used to describe objects and phenomena, actual, planned, or expected (face recognition of customers, influence of weather on dishes, association of wines and dishes, …).
  • Concepts: introduced to relate information and knowledge: gourmet.
Ontological descriptions
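
A compact rdflib sketch of those four constructs; all the terms are illustrative, and none is a standard OWL 2 keyword beyond the imported namespaces:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

REST = Namespace("http://example.org/restaurant#")
g = Graph()

# Power-type: a category whose instances describe subsets of another category
# (customers partitioned by age, income, education, ...).
g.add((REST.CustomerSegment, RDF.type, OWL.Class))
g.add((REST.seniors, RDF.type, REST.CustomerSegment))

# Specialization by intent: subsets for individuals (wines by color)...
g.add((REST.RedWine, RDFS.subClassOf, REST.Wine))
# ...sub-types for aspects (dishes served in menu).
g.add((REST.MenuDish, RDFS.subClassOf, REST.Dish))

# Knowledge-based relationship (dashed line): expected phenomena,
# e.g. the association of wines and dishes.
g.add((REST.pairsWith, RDF.type, RDF.Property))
g.add((REST.RedWine, REST.pairsWith, REST.MenuDish))

# Concept relating information and knowledge:
g.add((REST.gourmet, RDF.type, REST.Concept))
```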

With the backbones of symbolic representations soundly anchored to environments, it would be possible to complement functional and logical models with their conceptual counterparts and, by doing so, to eliminate conceptual debts. A symmetric policy could be applied to refactoring in order to redeem the technical debt associated to legacy code.

Managing Conceptual Debt

Like financial ones, conceptual debts are facts of life that have to be managed on a continuous basis. That can be achieved using Open Concepts to span the gap between the conceptual representation of business environments, objectives, and activities (a), and models of functional and technical architectures (b). Ontologies (c) could then leverage a seamless integration of conceptual (CD) and technical (TD) debts and consequently further in-depth digital transformation.

Using Open Concepts to consolidate technical and conceptual debts

That would ensure:

  • A separate management of models directly tied to systems, and ontologies with broader justification.
  • A distinction between a kernel (aka knowledge engine), environment profiles, and business domains.
EA & Knowledge Management

Further Reading

EA & The Pagoda Blueprint

“The little reed, bending to the force of the wind, soon stood upright again when the storm had passed over”

Aesop

Resilience and adaptability to changing environments (Masao Ido)

Preamble

The consequences of digital environments go well beyond a simple adjustment of business processes and call for an in-depth transformation of enterprise architectures. 

To begin with, the generalization of digital environments bears out the Symbolic System modeling paradigm: to stay competitive, enterprises have to manage a relevant, accurate, and up-to-date symbolic representation of their business context and concerns. 

With regard to architectures, it means a seamless integration of systems and knowledge architectures.

With regard to processes it means a built-in ability to learn from environments and act accordingly.

Such requirements for resilience and adaptability in unsettled environments are characteristic of the Pagoda architecture blueprint.

Pagoda Blueprint

As can be observed wherever high buildings are being erected on shaking grounds, Pagoda-like architectures set successive layers around a central pillar providing intrinsic strength and resilience to external upsets while allowing the floors to move with the whole or be modified independently. Applied to enterprise architectures in digital environments, that blueprint can be much more than metaphoric.

The actual relevance of the pagoda blueprint is best understood when the flows of data, information, and knowledge are set across platforms, systems, and organization layers:


The Pagoda Architecture Blueprint is derived from the Zachman’s framework

That blueprint puts a new light on model based approaches to systems engineering (MBSE):

  • Conceptual models, targeting enterprises organization and business independently of supporting systems.
  • Logical models, targeting the symbolic objects managed by supporting systems as surrogates of business objects and activities.
  • Physical models, targeting the actual implementation of symbolic surrogates as binary objects.

Pagoda Blueprint & Digital Environments

The Pagoda blueprint gets a new relevance in the context of digital transformation.

Moreover, the blueprint is not limited to enterprise architectures and can be applied to every kind of systems:

  • Devices associated to physical platforms supporting analog communication through the Internet of Things (a).
  • Equipment associated to physical platforms controlled by systems, supporting digital communication (b) and functional alignment (c).
Besides enterprise architectures, the Pagoda Blueprint can be applied to equipment, systems, or devices.

That would greatly enhance the traceability and transparency of transformations induced by the immersion of enterprises in digital environments.

Systems & Knowledge Architectures

If digitized business flows are to pervade enterprise systems and feed business intelligence (BI), systems and knowledge architectures are to be merged into a single nervous system as materialized by the Pagoda central pillar:

Business Intelligence and Decision-making
  • Ubiquitous, massive, and unrelenting digitized business flows cannot be dealt with unless a clear distinction is maintained between raw data acquired across platforms and the information (previously data) models which ensure the continuity and consistency of systems.
  • Once structured and refined, business data flows must be blended with information models sustaining systems functionalities.
  • A comprehensive and business driven integration of organization and knowledge could then support strategic and operational decision-making at enterprise level.

Rounding off this nervous system with a brain, ontologies would provide the conceptual frame for models representing enterprises and their environments.

Agile Architectures & Homeostasis

Homeostasis is the ability of a viable organism to learn from its environment and adapt its behavior and structures according to changes.
As such homeostasis can be understood as the extension of enterprise agility set in digital environments, ensuring:

  • Integrated decision-making processes across concerns (business, systems, platforms), and time-frames (tactical, operational, strategic, …).
  • Integrated information processing, from data-mining to knowledge management.

To that end, changes should be differentiated with regard to source (business or technology environment, organization, systems) and flows (data, information, knowledge); that would be achieved with a pagoda blueprint.

Resilience and adaptability to changes

Threads of operational and strategic decision-making processes could then be woven together, combining OODA loops at process level and economic intelligence at enterprise level.

Further Reading

Squared Outline: Ontologies

The primary aim of ontologies is to bring under a single roof three tiers of representations and to ensure both their autonomy and integrated usage:

  • Thesauruses federate meanings across digital (observations) and business (analysis) environments
  • Models deal with the integration of digital flows (environments) and symbolic representations (systems)
  • Ontologies ensure the integration of enterprises’ data, information, and knowledge, enabling a long-term alignment of enterprises’ organization, systems, and business objectives

To ensure their interoperability, tiers should be organized according to linguistic capabilities: lexical, syntactic, semantic, pragmatic.

To ensure their conceptual integration, ontologies must maintain an explicit distinction between (a sketch follows the list):

  • Concepts: pure semantic constructs defined independently of instances or categories
  • Categories: symbolic descriptions of sets of objects or phenomena; categories can be associated with actual (descriptive, extensional) or intended (prescriptive, intensional) sets of instances
  • Facts: observed objects or phenomena 
  • Documents: entries for documents represent the symbolic contents, with instances representing the actual (or physical) documents
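
A minimal Python sketch of the four-way distinction; the class and field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """Pure semantic construct, defined independently of instances or categories."""
    name: str

@dataclass
class Category:
    """Symbolic description of sets of objects or phenomena."""
    name: str
    extensional: bool            # True: actual (descriptive) set of instances;
                                 # False: intended (prescriptive) set
    features: set = field(default_factory=set)

@dataclass
class Fact:
    """Observed object or phenomenon."""
    label: str

@dataclass
class Document:
    """Entry for symbolic contents; instances stand for physical documents."""
    title: str
    location: str

gourmet = Concept("gourmet")
customer = Category("Customer", extensional=False, features={"identity", "payment"})
sighting = Fact("party of four at table 12")
recipe = Document("Boeuf bourguignon", "http://example.org/recipes/boeuf")
```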

As for technical integration, it can be achieved through graph neural networks (GNN) and the Resource Description Framework (RDF).

FURTHER READING

Stories In Logosphere

As championed by a brave writer, should we see the Web as a crib for born again narratives, or as a crypt for redundant texts?

Crib or Crypt (Melik Ohanian)

Once Upon A Time

Borrowing from Einstein, “the only reason for time is so that everything doesn’t happen at once.” That befits narratives: whatever the tale or the way it is conveyed, stories take time. Even if nothing happens, a story must be spelt in tempo and can only be listened to or read one step at a time.

In So Many Words

Stories have been told before being written, which is why their fabric is made of words, and their motifs weaved by natural languages. So, even if illustrations may adorn printed narratives, the magic of stories comes from the music of their words.

A Will To Believe

To enjoy a story, listeners or readers are to detach their mind from what they believe about reality, replacing dependable and well-worn representations with new and untested ones, however shaky or preposterous they may be; and that has to be done through an act of will.

Stories are make-beliefs: as with art in general, their magic depends on the suspension of disbelief. But suspension is not abolition; while deeply submerged in stories, listeners and readers maintain some inward track to the beliefs they left before diving; wandering a cognitive fold between surface truths and submarine untruths, they seem to rely on a secure if invisible tether to the reality they know. On that account, the possibility of an alternative reality is to transform a comforting fold into a menacing abyss, dissolving their lifeline to beliefs. That could happen to stories told through the web.

Stories & Medium

Assuming time rendering, stories were not supposed to be affected by medium; that is, until McLuhan’s suggestion of medium taking over messages. Half a century later internet and the Web are bringing that foreboding in earnest by melting texts into multimedia documents.

Tweets and Short Message Services (SMS) offer a perfect illustration of the fading of text-driven communication, evolving from concise (160 characters) text-messaging to video-sharing.

That didn’t happen by chance but reflects the intrinsic visual nature of web contents, with dire consequences for texts: once lording it over entourages of media, they are being overthrown and reduced to simple attachments, just a peg above facsimile. But then, demoting texts to strings of characters makes natural languages redundant, to be replaced by a web Esperanto.

Web Semantic Silos

With medium taking over messages, and texts downgraded to attachments, natural languages may lose their primacy for stories conveyed through the web, soon to be replaced by the so-called “semantic web”, supposedly a lingua franca encompassing the whole of internet contents.

As epitomized by the Web Ontology Language (OWL), the semantic web is based on a representation scheme made of two kinds of nodes respectively for concepts (squares) and conceptual relations (circles).

Semantic graphs (aka conceptual networks) combine knowledge representation (blue, left) and domain specific semantics (green, center & right)

Concept nodes are meant to represent categories specific to domains (green, right); that tallies with the lexical level of natural languages.

Connection nodes are used to define two types of associations:

  • Semantically neutral constructs to be applied uniformly across domains; that tallies with the syntactic level of natural languages (blue, left).
  • Domain specific relationships between concepts; that tallies with the semantic level of natural languages (green, center).

The mingling of generic (syntactic) and specific (semantic) connectors induces a redundant complexity which grows exponentially when different domains are to be combined, due to overlapping semantics. Natural languages typically use pragmatics to deal with the issue, but since pragmatics scale poorly with exponential complexity, they are of limited use for the semantic web; that confines its effectiveness to silos of domain specific knowledge.

Natural Language Pragmatics As Bridges Across Domain Specific Silos

But semantic silos are probably not the best nurturing ground for stories.

Stories In Cobwebs

Taking for granted that text, time, and suspension of disbelief are the pillars of stories, their future on the web looks gloomy:

  • Texts have no status of their own on the web, but only appear as part of documents, a media among others.
  • Stories can bypass web practice by being retrieved before being read as texts or viewed as movies; but whenever they are “browsed” their intrinsic time-frame and tempo are shattered, and so is their music.
  • If lying can be seen as an inborn human cognitive ability, it cannot be separated from its role in direct social communication; such interactive background should also account for the transient beliefs in fictional stories. But lies detached from a live context and planted on the web are different beasts, standing on their own and bereft of any truth currency that could separate actual lies from fictional ones.

That depressing perspective is borne out by the tools supposed to give a new edge to text processing:

Hyper-links are part and parcel of the internet’s original text processing. But as far and long as stories go, introducing links (hardwired or generated) is to hand narrative threads over to readers, thereby transforming them into “entertextment” consumers.

Machine learning can do wonders mining explicit and implicit meanings from the whole of past and present written and even spoken discourses. But digging stories out is more about anthropology or literary criticism than about creative writing.

As for the semantic web, it may work as a cobweb: by getting rid of pragmatics, deliberately or otherwise, it disables narratives by disengaging them from their contexts, cutting them out in one stroke from their original meaning, tempo, and social currency.

The Deconstruction of Stories

Curiously, what the web may do to stories seems to reenact a philosophical project which gained some favor in Europe during the second half of the last century. To begin with, the deconstruction philosophy was rooted in literary criticism, and its objective was to break the apparent homogeneity of narratives in order to examine the political, social, or ideological factors at play behind them. Soon enough, a core of upholders took aim at broader philosophical ambitions, using deconstruction to deny even the possibility of a truth currency.

With the hindsight on initial and ultimate purposes of the deconstruction project, the web and its semantic cobweb may be seen as the stories nemesis.

Further Reading

External Links

Boost Your Mind Mapping

Preamble

Turning thoughts into figures faces the intrinsic constraint of dimension: two dimensional representations cannot cope with complexity.

Making his mind about knowledge dimensions: actual world, descriptions, and reproductions (Jan van der Straet)

So, lest they be limited to flat and shallow thinking, mind cartographers have to introduce the cognitive equivalent of geographical layers (nature, demography, communications, economy,…), and archetypes (mountains, rivers, cities, monuments, …).

Nodes: What’s The Map About

Nodes in maps (aka roots, handles, …) are meant to anchor thinking threads. Given that human thinking is based on the processing of symbolic representations, mind mapping is expected to progress wide and deep into the nature of nodes: concepts, topics, actual objects and phenomena, artifacts, partitions, or just terms.

What’s The Map About

It must be noted that these archetypes are introduced to characterize symbolic representations independently of domain semantics.

Connectors: Cognitive Primitives

Nodes in maps can then be connected as children or siblings, the implicit distinction being some kind of refinement for the former, some kind of equivalence for the latter. While such a semantic latitude is clearly a key factor of creativity, it is also behind the poor scaling of maps with complexity.

A way to frame complexity without thwarting creativity would be to define connectors with regard to cognitive primitives, independently of nodes’ semantics (a sketch follows the list):

  • References: connect nodes as terms.
  • Associations: connect nodes with regard to their structural, functional, or temporal proximity.
  • Analogies: connect nodes with regard to their structural or functional similarities.
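
A minimal sketch of connectors defined independently of nodes’ semantics; the edge examples are invented:

```python
from enum import Enum

class Connector(Enum):
    REFERENCE = "reference"      # connects nodes as terms
    ASSOCIATION = "association"  # structural, functional, or temporal proximity
    ANALOGY = "analogy"          # structural or functional similarity

# A mind map as a set of typed edges, independent of node semantics:
mind_map = {
    ("car", "vehicle"): Connector.REFERENCE,
    ("car", "maintenance"): Connector.ASSOCIATION,  # functional proximity
    ("reservation", "booking"): Connector.ANALOGY,  # functional similarity
}
```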

At first, with shallow nodes defined as terms, connections can remain generic; then, with deeper semantic levels introduced, connectors could be refined accordingly for concepts, documentation, actual objects and phenomena, artifacts,…

Connectors are aligned with basic cognitive mechanisms of metonymy (associations) and analogy (similarities)

Semantics: Extensional vs Intensional

Given mapping primitives defined independently of domains semantics, the next step is to take into account mapping purposes:

  • Extensional semantics deal with categories of actual instances of objects or phenomena.
  • Intensional semantics deal with specifications of objects or phenomena.

That distinction can be applied to basic semantic archetypes (people, roles, events, …) and used to distinguish actual contexts, symbolic representations, and specifications, e.g:

Extensions (full border) are about categories of instances, intensions (dashed border) are about specifications
  • Car (object) refers to context, not to be confused with Car (surrogate), which specifies the symbolic counterpart: the former is extensional (actual instances), the latter intensional (symbolic representations).
  • Maintenance Process is extensional (identified phenomena), Operation is intensional (specifications).
  • Reservation and Driver are symbolic representations (intensional), Person is extensional (identified instances).

It must be remembered that whereas the choice is discretionary and contingent on semantic contexts and modeling purposes (‘as-it-is’ vs ‘as-it-should-be’), consequences are not, because the choice determines abstraction semantics.

For example, the records for cars, drivers, and reservations are deemed intensional because they are defined by business concerns. Alternatively, instances of persons and companies are defined by contexts and therefore dealt with as extensional descriptions.

Abstractions: Subsets & Sub-types

Thinking can be characterized as a balancing act between making distinctions and managing the ensuing complexity. To that end, human edge over other animal species is the use of symbolic representations for specialization and generalization.

That critical mechanism of human thinking is often overlooked by mind maps due to a confused understanding of inheritance semantics:

  • Strong inheritance deals with instances: specialization defines subsets, and generalization is defined by shared structures and identities.
  • Weak inheritance deals with specifications: specialization defines sub-types, and generalization is defined by shared features.
Inheritance semantics: shared structures (dark) vs shared features (white)

The combination of nodes (intension/extension) and inheritance (structures/features) semantics gives cartographers two hands: a free one for creative distinctions, and a safe one for the ensuing complexity (a sketch follows the examples), e.g:

  • Intension and weak inheritance: environments (extension) are partitioned according to regulatory constraints (intension); specialization deals with subtypes and generalization is defined by shared features.
  • Extension and strong inheritance: cars (extension) are grouped according to motorization; specialization deals with subsets and generalization is defined by shared structures and identities.
  • Intension and strong inheritance: corporate sub-type inherits the identification features of type Reservation (intension).
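
A minimal sketch combining the two semantics; the names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    intensional: bool   # True: specification; False: actual instances (extension)

@dataclass
class Inheritance:
    parent: Node
    child: Node
    strong: bool        # True: subsets sharing structures and identities;
                        # False: sub-types sharing features only

car = Node("Car", intensional=False)                   # extension: actual cars
electric = Node("ElectricCar", intensional=False)
reservation = Node("Reservation", intensional=True)    # specification
corporate = Node("CorporateReservation", intensional=True)

links = [
    Inheritance(car, electric, strong=True),           # subsets by motorization
    Inheritance(reservation, corporate, strong=True),  # sub-type inherits identification
]
```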

Mind maps built on these principles could provide a common thesaurus encompassing the whole of enterprise data, information and knowledge.

Intelligence: Data, Information, Knowledge

Considering that mind maps combine intelligence and cartography, they may have some use for enterprise architects, in particular with regard to economic intelligence, i.e the integration of information processing, from data mining to knowledge management and decision-making:

  • Data provide the raw input, without clear structures or semantics (terms or aspects).
  • Categories are used to process data into information on one hand (extensional nodes), design production systems on the other hand (intensional nodes).
  • Abstractions (concepts) make knowledge from information by putting it to use.

Conclusion

Along that perspective mind maps could serve as front-ends for enterprise architecture ontologies, offering a layered cartography that could be organized according to concerns:

Enterprise architects would look at physical environments, business processes, and functional and technical systems architectures.

Using layered maps to visualize enterprise architectures

Knowledge managers would take a different perspective and organize the maps according to the nature and source of data, information, and knowledge.

Using layered maps to build economic intelligence

As demonstrated by geographic information systems, maps built on clear semantics can be combined to serve a wide range of purposes; furthering the analogy with geolocation assistants, layered mind maps could be annotated with punctuation marks (e.g ?, !, …) in order to support problem-solving and decision-making.

Further Reading

External Links

A Brief Ontology Of Time

“Clocks slay time… time is dead as long as it is being clicked off by little wheels; only when the clock stops does time come to life.”

William Faulkner

Preamble

The melting of digital fences between enterprises and business environments is putting a new light on the way time has to be taken into account.

Time is what happens between events (Josef Koudelka)

The shift can be illustrated by the EU GDPR: by introducing legal constraints on the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones and make all time-scales equal whatever their nature.

Ontological Limit of the W3C Time Recommendation

The W3C recommendation for OWL time description is built on the well-accepted understanding of temporal entity, duration, and position:


While there isn’t much to argue with what is suggested, the puzzle comes from what is missing, namely the modalities of time: the recommendation makes use of calendars and time-stamps but ignores what lies behind, i.e time’s ontological dimensions.

Out of the Box

As already expounded (Ontologies & Enterprise Architecture), ontologies are at their best when a distinction can be maintained between representation and semantics. That point can be illustrated here by adding an ontological dimension to the W3C description of time (a sketch follows the list):

  1. Ontological modalities are introduced by identifying (#) temporal positions with regard to a time-frame.
  2. Time-frames are open-ended temporal entities identified (#) by events.
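
A tentative rdflib sketch of those two points, reusing the W3C time vocabulary; the event-identified frame and the modality property follow the post’s suggestion and are not part of the W3C recommendation:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

TIME = Namespace("http://www.w3.org/2006/time#")  # W3C time vocabulary
CAKE = Namespace("http://example.org/cake#")      # hypothetical kernel namespace

g = Graph()
# A time-frame as an open-ended temporal entity identified (#) by an event:
g.add((CAKE.gdprEnforcement, RDF.type, CAKE.Event))
g.add((CAKE.postGdprFrame, RDF.type, TIME.Interval))
g.add((CAKE.postGdprFrame, TIME.hasBeginning, CAKE.gdprEnforcement))

# An ontological modality attached to a temporal position identified with
# regard to the frame, kept apart from calendar or time-stamp encodings:
g.add((CAKE.notificationDeadline, RDF.type, TIME.Instant))
g.add((CAKE.postGdprFrame, TIME.inside, CAKE.notificationDeadline))
g.add((CAKE.notificationDeadline, CAKE.modality, CAKE.planned))
```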

How to add ontological modalities to time

It must be noted that initial truth-preserving properties still apply across ontological modalities.

Conclusion: OWL Descriptions Should Not Be Confused With Ontologies

Languages are meant to combine two primary purposes: communication and symbolic representation, some (e.g natural, programming) being focused on the former, other (e.g formal, specific) on the latter.

The distinction is somewhat blurred with languages like OWL (Web Ontology Language) due to the versatility and plasticity of semantic networks.

Ontologies and profiles are meant to target domains; profiles and domains are modeled with languages, including OWL.

That apparent proficiency may induce some confusion between languages and ontologies, the former dealing with the encoding of time representations, the latter with time modalities.

Further Reading

External Links

GDPR Ontological Primer

Preamble

European Union’s General Data Protection Regulation (GDPR), to come into effect this month, is a seminal and momentous milestone for data privacy.

Nothing Personal (Arthur Szyk)

Yet, as reported by Reuters correspondents, European enterprises and regulators are not ready; more worryingly, few (except consultants) are confident about GDPR direction.

Misgivings and uncertainties should come as no surprise considering GDPR’s two innate challenges:

  • Regulating privacy rights represents a very ambitious leap into a digital space now at the core of corporate business strategies.
  • Compliance will not be put under a single authority but be overseen by an assortment of national and regional authorities across the European Union.

On that account, ontologies appear as the best (if not the only) conceptual approach able to bring contexts (EU nations), concerns (business vs privacy), and enterprises (organization and systems) into a shared framework.

Enterprise Architectures & Regulations

Compared to domain specific regulations, GDPR is a governance-oriented regulation set across business concerns and enterprise organization; but unlike similarly oriented ones like accounting, GDPR is aiming at the nexus of business competition, namely the processing of data into information and knowledge. With such a strategic stake, compliance is bound to become a game-changer cutting across business intelligence, production systems, and decision-making. Hence the need for an integrated, comprehensive, and consistent approach to the different dimensions involved:

  • Concepts upholding businesses, organizations, and regulations.
  • Documentation with regard to contexts and statutory basis.
  • Regulatory options and compliance assessments.
  • Enterprise systems architecture and operations.

Moreover, as for most projects affecting enterprise architectures, carrying through GDPR compliance is to involve continuous, deep, and wide ranging changes that will have to be brought off without affecting overall enterprise performances.

Ontologies arguably provide a conclusive solution to the problem, if only because there is no other way to bring code, models, documents, and concepts under a single roof. That could be achieved by using ontologies profiles to frame GDPR categories along enterprise architectures models and components.

CakeGDPR_00.jpg
Basic GDPR categories and concepts (black color) as framed by the Caminao Kernel

Compliance implementation could then be carried out iteratively across four perspectives:

  • Personal data and managed information.
  • Lawfulness of activities.
  • Time and events.
  • Actors and organization.

Data & Information

To begin with, GDPR defines ‘personal data’ as “any information relating to an identified or identifiable natural person (‘data subject’)”. Insofar as logic is concerned, that definition implies an equivalence between ‘data’ and ‘information’, an assumption clearly challenged by the onslaught of big data: if proofs were needed, the Cambridge Analytica episode demonstrates how easily raw data can become a personal affair.

Ideally, a formal and functional distinction should be maintained between data and information, the former tied to transient observations of external instances, the latter defined by the categories of managed surrogates. 

Failing an enshrined distinction between data (directly associated to instances) and information (set according to categories or types), an ontological level of indirection should be managed between regulatory intents and the actual semantics of data as managed by information systems.

CakeGDPR_data
Managing the ontological gap between regulatory understandings and compliance footprints

Lexical ambiguities set apart, the question is not so much about databases of well-identified records as about flows of continuously processed data: if identities and ownership are usually set upfront by business processes, attributions may have to be credited to enterprises’ know-how if and when carried out through data analytics.

Given that the distinctions are neither uniform, exclusive, nor final, ontologies will be needed to keep tabs on moves and motives. OWL 2 constructs (cf annex) could also help, first to map GDPR categories to the relevant information managed by systems, second to sort out natural data from nurtured knowledge.
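
As a sketch of that indirection (all names and namespaces below are illustrative assumptions), OWL 2 class axioms could read:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# Level of indirection: the regulatory category is mapped to, not identified
# with, the categories of information actually managed by systems.
gdpr:PersonalData owl:equivalentClass [
    a owl:Class ;
    owl:intersectionOf ( ent:ManagedInformation ent:AboutNaturalPerson )
] .

# Sorting natural (raw) data from nurtured knowledge:
ent:RawData owl:disjointWith ent:NurturedKnowledge .

# Attributions credited to analytics belong to nurtured knowledge:
gdpr:InferredPersonalData rdfs:subClassOf gdpr:PersonalData ,
                                          ent:NurturedKnowledge .
```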

Activities & Purposes

Given footprints of personal data, the objective is to ensure the transparency and traceability of the processing activities subject to compliance.

Setting apart specific add-ons for notifications and personal access (see below for events), charting compliance footprints is to be a complex endeavor: as there is no reason to assume some innate alignment of intended (regulation) and actual (enterprise) definitions, deciding where and when compliance should apply potentially calls for a review of all processing activities.

After taking into account the nature of activities, their lawfulness is to be determined by contexts (‘purpose limitation’ and ‘data minimization’) and time-frames (‘accuracy’ and ‘storage limitation’). And since lawfulness is meant to be transitive, a comprehensive map of the GDPR footprint is to rely on the logical traceability and transparency of information systems as a whole, independently of GDPR.

That is arguably a challenging long-term endeavor, all the more so given that some kind of Chinese Wall has to be maintained around enterprise strategies, know-how, and operations. It follows that an ontological level of indirection is again necessary between regulatory intents and effective processing activities.

Along that reasoning, compliance categories, defined on their own, are first mapped to categories of functionalities (e.g authorization) or models (e.g use cases).

CakeGDPR_activ1
Compliance categories are associated upfront to categories of functionalities (e.g authorization) or models (e.g use cases).

Then, actual activities (e.g “rateCustomerCredit”) can be progressively brought into the compliance fold, either with direct associations with regulations or indirectly through associated models (e.g “ucRateCustomerCredit” use case).

CakeGDPR_activ2
Compliance as carried out through Use Case
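
A minimal sketch of both options, reusing the example names above (the ent:isModeledBy property and the namespaces are assumptions):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# Assumed association property linking activities to their models:
ent:isModeledBy a owl:ObjectProperty .

# Direct association of an activity with the regulation:
ent:rateCustomerCredit a gdpr:RegulatedProcessing .

# Or indirect association, through the mediating use case:
ent:rateCustomerCredit ent:isModeledBy ent:ucRateCustomerCredit .
ent:ucRateCustomerCredit a gdpr:RegulatedProcessing .
```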

The compliance backbone can be fleshed out using OWL 2 mechanisms (see annex) in order to:

  • Clarify the logical or functional dependencies between processing activities subject to compliance.
  • Qualify their lawfulness.
  • Draw equivalence, logical, or functional links between compliance alternatives.
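
By way of illustration (property names, namespaces, and the lawfulness annotation below are assumptions):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# Functional dependency between processing activities, declared transitive
# so that obligations can be traced along chains of processing:
ent:feedsInto a owl:ObjectProperty , owl:TransitiveProperty .
ent:rateCustomerCredit ent:feedsInto ent:approveLoan .

# Lawfulness qualified through an assumed annotation property:
gdpr:lawfulBasis a owl:AnnotationProperty .
ent:ucRateCustomerCredit gdpr:lawfulBasis "legitimate interest" .

# Equivalence between compliance alternatives:
gdpr:ConsentByForm owl:equivalentClass gdpr:ConsentByApi .
```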

That is to deal with the functional compliance of processing activities; but the most far-reaching impact of the regulation may come from the way time and events are taken into account.

Time & Events

As noted above, time is what makes the difference between data and information, and setting rules for notification makes that difference lawful. Moreover, by adding time constraints to the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones. That apparently incidental departure echoes the immersion of systems into digitized business environments, making all time-scales equal whatever their nature. Such flattening is to induce crucial consequences for enterprise architectures.

That shift, together with the regulatory intent, is best taken into account by modeling events as changes in expectations, physical objects, processes execution, and symbolic objects, with changes in personal data belonging to the latter.

Gdpr events
Mapping internal (symbolic) and external (actual) events is a critical element of GDPR compliance

Putting apart events specific to GDPR (e.g data breaches), compliance with regard to accuracy and storage limitation regulations will require that all events affecting personal data:

  • Are set in time-frames, possibly overlapping.
  • Have notification constraints properly documented.
  • Have likelihood and costs of potential risks assessed.

As with data and activities, OWL 2 constructs are to be used to qualify compliance requirements.
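
For instance (a sketch only; the annotation properties and values are illustrative, the 72-hour delay echoing the one set for breach notifications):

```turtle
@prefix time: <http://www.w3.org/2006/time#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# A change in personal data, set in its (possibly overlapping) time-frame:
ent:customerAddressChanged a gdpr:PersonalDataEvent ;
    time:hasTime ent:retentionTimeFrame .

# Assumed annotation properties documenting notifications and risks:
gdpr:notificationDelay a owl:AnnotationProperty .
ent:customerAddressChanged
    gdpr:notificationDelay "PT72H"^^xsd:duration ;  # cf the 72h breach delay
    gdpr:riskLikelihood    "low" ;
    gdpr:riskCost          "moderate" .
```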

Actors & Organization

GDPR introduces two specific categories of actors (aka roles): one (data subject) for natural persons, and one for actors set by organizations, either specifically for GDPR assignments or by delegation to already defined actors.

Gdpr actors
GDPR roles can be set specifically or delegated

OWL 2 can then be used to detail how regulatory roles can be delegated to existing ones, enabling a smooth transition and a dynamic adjustment of enterprise organization with regulatory compliance.
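
A minimal sketch of both options (set specifically, or delegated; names are illustrative):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# Role set specifically for GDPR:
gdpr:DataProtectionOfficer rdfs:subClassOf gdpr:RegulatoryRole .

# Regulatory role delegated to an already defined enterprise role:
ent:ComplianceManager rdfs:subClassOf gdpr:DataProtectionOfficer .
```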

It must be stressed that the semantic distinction between identified agents (e.g natural persons) and the roles (aka UML actors) they play in processes is of particular importance for GDPR compliance: who (or even what) stands behind an actor interacting with a system is to remain unknown to the system until the actor is authentically identified. If that ontological lapse is overlooked, there is no way to define and deal with security, confidentiality, or privacy regulations.

Conclusion

The use of ontologies brings clear benefits for regulators, enterprise governance, and systems architects.

Without shared conceptual guidelines, chances are the European regulatory orchestra will get lost in squabbles about minutiae before sliding into cacophony.

With regard to governance, bringing systems and regulations into a common conceptual framework is to enable clear and consistent compliance strategies and policies, as well as smooth learning curves.

With regard to architects, ontology-based compliance is to bring cross benefits and externalities, e.g from improved traceability and transparency of systems and applications.

Annex A: Mapping Regulations to Models (sample)

To begin with, OWL 2 can be used to map GDPR categories to relevant resources as managed by information systems:

  • Equivalence: GDPR and enterprise definitions coincide.
  • Logical intersection, union, complement: GDPR categories defined by, respectively, a cross, merge, or difference of enterprise definitions.
  • Qualified association between GDPR and enterprise categories.
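
For instance, in OWL/Turtle (illustrative names and namespaces):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# Equivalence: GDPR and enterprise definitions coincide.
gdpr:DataSubject owl:equivalentClass ent:NaturalPerson .

# Union: a GDPR category merging enterprise definitions (intersection and
# complement follow the same pattern):
gdpr:PersonalData owl:equivalentClass [
    a owl:Class ;
    owl:unionOf ( ent:CustomerRecord ent:EmployeeRecord )
] .

# Qualified association between GDPR and enterprise categories:
gdpr:appliesTo a owl:ObjectProperty .
gdpr:StorageLimitation gdpr:appliesTo ent:CustomerRecord .
```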

Assuming the categories are properly identified, the language can then be employed to define the sets of regulated instances:

  • Logical property restrictions, using existential and universal quantification.
  • Functional property restrictions, using joins on attribute values.

Other constructs, e.g cardinality or enumerations, could also be used for specific regulatory constraints.
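
A sketch of such restrictions (again, with illustrative names):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# Existential quantification: records with at least one consent attached.
gdpr:ConsentedRecord owl:equivalentClass [
    a owl:Restriction ;
    owl:onProperty ent:hasConsent ;
    owl:someValuesFrom gdpr:ConsentRecord
] .

# Universal quantification: records stored only within the EU.
gdpr:EuStoredRecord owl:equivalentClass [
    a owl:Restriction ;
    owl:onProperty ent:storedIn ;
    owl:allValuesFrom ent:EuLocation
] .

# Cardinality, for specific regulatory constraints: at most one controller.
ent:CustomerRecord rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty gdpr:hasController ;
    owl:maxCardinality "1"^^xsd:nonNegativeInteger
] .
```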

Finally, some OWL 2 built-in mechanisms can significantly improve the assessment of alternative compliance policies by expounding regulations with regard to:

  • Equivalence, overlap, or complementarity.
  • Symmetry or asymmetry.
  • Transitivity.
  • Etc.
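
For instance (illustrative names):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix gdpr: <http://example.org/gdpr#> .
@prefix ent:  <http://example.org/enterprise#> .

# Transitivity: obligations propagate along chains of processing.
ent:feedsInto a owl:ObjectProperty , owl:TransitiveProperty .

# Symmetry vs asymmetry of data-sharing and delegation relationships:
ent:sharesDataWith   a owl:ObjectProperty , owl:SymmetricProperty .
gdpr:delegatesRoleTo a owl:ObjectProperty , owl:AsymmetricProperty .

# Equivalence or complementarity between alternative policies:
gdpr:OptInPolicy  owl:equivalentClass gdpr:ExplicitConsentPolicy .
gdpr:OptOutPolicy owl:disjointWith    gdpr:OptInPolicy .
```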

Annex B: Mapping Regulations to Capabilities

GDPR can be mapped to systems capabilities using the well-established Zachman taxonomy, set by crossing architecture functionalities (Who, What, How, Where, When) with architecture layers: enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies).

Rules_GDPR
Regulatory Compliance vs Architectures Capabilities

These layers can be extended so as to apply uniformly across external ontologies, from well-defined ones (e.g regulations) to fuzzy ones (e.g business prospects or new technologies), e.g:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

Such mapping is to significantly enhance the transparency of regulatory policies.
