The whole of enterprises’ endeavors and behaviors cannot be coerced into models, lest doing so inhibit their ability to navigate ill-defined and shifting business environments. Enterprises’ immersion in digital environments is making these limits all the more explicit:
On the environment side, facts, once like manna from heaven ready to be picked and interpreted, have turned into data floods swamping all recognizable model imprints.
On the symbolic side, concepts, once steadily supported by explicit models and logic, are now emerging like new species from the Big Data primordial soup.
Typically, business analysts are taking the lead on both fronts, toting learning machines and waving knowledge graphs. In between, system architects have to deal with a two-pronged encroachment on information models.
On the one hand, they have to build a Chinese wall between private data and managed information to comply with regulations.
On the other hand, they have to feed decision-making processes with accurate and up-to-date observations, and adjust information systems with relevant and actionable concepts.
That sheds a new light on the so-called conceptual, logical, and physical “data” models as key components of enterprise architecture:
Physical data models are meant to be directly lined up with operations and digital environments
Logical models represent the categories managed by information systems and must be up to par with the systems’ functional architecture.
Conceptual models are meant to represent enterprise knowledge of business domains and objectives, as well as its embodiment in organisation and people.
Logical models (information) appear therefore as an architecture hub linking business facts (data) and concepts (knowledge), ensuring exchanges between environments and representations e.g.:
Reasoning Patterns
Deduction: matching observations (data) with models to produce new information, i.e. data with structure and semantics
Induction: making hypotheses (knowledge) about the scope of models in order to make deductions
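As a rough illustration of these two patterns, here is a minimal Python sketch; the model, category, and field names are hypothetical and not part of any framework discussed here.

```python
# A minimal sketch (hypothetical model and field names): deduction matches
# observations against a model to produce information, i.e. data with
# structure and semantics; induction hypothesizes the scope of the model.
model = {"customer": {"fields": {"id", "name", "segment"}}}

def deduce(observation):
    # Deduction: an observation matching a modeled category becomes information.
    for category, spec in model.items():
        if spec["fields"].issubset(observation):
            return {"category": category, "values": observation}
    return None

def induce(observations):
    # Induction: guess which fields the model should cover so that further
    # deductions become possible; the hypothesis remains to be assessed.
    return set.intersection(*(set(o) for o in observations))

print(deduce({"id": 1, "name": "Acme", "segment": "B2B"}))
print(induce([{"id": 1, "name": "Acme"}, {"id": 2, "name": "Bext", "city": "Lyon"}]))
```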
Distinctions must serve a purpose and be assessed accordingly. On that account, what would be the point of setting apart data and information, and on what basis could that be done?
From Data Stripes to Information Structure (Victor Vasarely)
Until recently the two terms seemed to be used interchangeably; until, that is, the digital revolution. But the generalization of digital surroundings and the tumbling down of traditional barriers surrounding enterprises have upturned the playground as well as the rules of the game.
Previously, with data analytics, information modeling, and knowledge management mostly carried out as separate threads, there wasn’t much concern about semantic overlaps; no more. Lest they fall behind, enterprises have to combine observation (data), reasoning (information), and judgment (knowledge) as a continuous process. But such integration implies in return more transparency and traceability with regard to resources (e.g external or internal) and objectives (e.g operational or strategic); that’s when a distinction between data and information becomes necessary.
Economics: Resources vs Assets
Understood as a whole or separately, there is little doubt that data and information have become a key success factor, calling for more selective and effective management schemes.
Being immersed in digital environments, enterprises first depend on accurate, reliable, and timely observations of their business surroundings. But in the new digital world the flows of data are so massive and so transient that mining meaningful and reliable pieces is by itself a decisive success factor. Next, assuming data flows are duly processed, part of the outcome has to be consolidated into models, to be managed on a persistent basis (e.g customer records or banking transactions), the rest being put on temporary shelves for customary uses, or immediately thrown away (e.g personal data subject to privacy regulations). Such a transition constitutes a pivotal inflexion point for systems architectures and governance as it sorts out data resources with limited lifespan from information assets with strategic relevance. Not to mention the sensitivity of regulatory compliance to data management.
Processes: Operations vs Intelligence
Making sense of data is pointless without putting the resulting information to use, which in digital environments implies a tight integration of data and information processing. Yet, as already noted, tighter integration of processes calls for greater traceability and transparency, in particular with regard to the origin and scope: external (enterprise business and organization) or internal (systems). The purposes of data and information processing can be squared accordingly:
The top left corner is where business models and strategies are meant to be defined.
The top right corner corresponds to traditional data or information models derived from business objectives, organization, and requirement analysis.
The bottom line corresponds to analytic models for business (left) and operations (right).
Squaring the purposes of Data & Information Processing
That view illustrates the shift of paradigm induced by the digital transformation. Previously, most mappings would be set along straight lines:
Horizontally (same nature), e.g requirement analysis (a) or configuration management (b). With source and destination at the same level, the terms employed (data or information) have no practical consequence.
Vertically (same scope), e.g systems logical to physical models (c) or business intelligence (d). With source and destination set in the same semantic context the distinction (data or information) can be ignored.
The digital transformation makes room for diagonal transitions set across heterogeneous targets, e.g mapping data analytics with conceptual or logical models (e).
That double mix of levels and scopes constitutes the nexus of decision-making processes; their transparency is contingent on a conceptual distinction between data and information.
Data is used to align operations (systems) with observations (territories).
Information is used to align categories (maps) with objectives.
Roles of Data (red) & Information (blue) in integrated decision-making processes
Then, the conceptual distinction between data and information is instrumental for the integration of operational and strategic decision-making processes:
Data analytics feeding business intelligence
Information modeling supporting operational assessment.
Not by chance, these distinctions can be aligned with architecture layers.
Architectures: Instances vs Categories
Blending data with information overlooks a difference of nature, the former being associated with actual instances (external observation or systems operations), the latter with symbolic descriptions (categories or types). That intrinsic difference can be aligned with architecture layers (resources are consumed, assets are managed), and decision-making processes (operations deal with instances, strategies with categories).
With regard to architectures, the relationship between instances (data) and categories (information) can be neatly aligned with capability layers, as represented by the Pagoda blueprint:
The platform layer deals with data reflecting observations (external facts) and actions (system operations).
The functional layer deals with information, i.e the symbolic representation of business and organization categories.
The business and organization layer defines the business and organization categories.
It must also be noted that setting apart what pertains to individual data, independently of the information managed by systems, clearly supports compliance with privacy regulations.
Architectures & Decision-making
With regard to decision-making processes, business intelligence uses the distinction to integrate levels, from operations to strategic planning, the former dealing with observations and operations (data), the latter with concepts and categories (information and knowledge).
Representations: Knowledge Architecture
As noted above, the distinction between data and information is a necessary counterpart of the integration of operational and intelligence processes; that in turn implies bringing data, information, and knowledge under a common conceptual roof, respectively as resources, assets, and services:
Resources: data is captured through continuous and heterogeneous flows from a wide range of sources.
Assets: information is built by adding identity, structure, and semantics to data.
Services: knowledge is information put to use through decision-making.
Ontologies, which are meant to encompass each and every kind of knowledge, are ideally suited for the management of whatever pertains to enterprise architecture: thesauruses, models, heuristics, etc.
That approach has been tested with the Caminao ontological kernel using OWL2; a beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe 2.0).
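For illustration only, and not the actual CaKe kernel: a minimal sketch of how the resources/assets/services triad could be declared with OWL 2 through rdflib; the namespace, class, and property names are assumptions.

```python
# Illustration only, not the actual CaKe kernel: declaring the
# resources/assets/services triad as OWL classes with rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

KER = Namespace("http://example.org/kernel#")  # placeholder namespace
g = Graph()
g.bind("ker", KER)

# Top-level categories: data as resources, information as assets,
# knowledge as services.
for cls in (KER.DataResource, KER.InformationAsset, KER.KnowledgeService):
    g.add((cls, RDF.type, OWL.Class))

# Information is built by adding identity, structure, and semantics to data.
g.add((KER.derivedFrom, RDF.type, OWL.ObjectProperty))
g.add((KER.derivedFrom, RDFS.domain, KER.InformationAsset))
g.add((KER.derivedFrom, RDFS.range, KER.DataResource))

# Knowledge is information put to use through decision-making.
g.add((KER.putToUse, RDF.type, OWL.ObjectProperty))
g.add((KER.putToUse, RDFS.domain, KER.KnowledgeService))
g.add((KER.putToUse, RDFS.range, KER.InformationAsset))

print(g.serialize(format="turtle"))
```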
Conclusion: From Metadata to Machine Learning
The significance of the distinction between data and information shows up at the two ends of the spectrum:
On one hand, it clarifies the meaning of metadata, to be understood as attributes of observations independently of semantics, a dimension that plays a critical role in machine learning.
On the other hand, enshrining the distinction between what can be known of individual facts or phenomena and what can be abstracted into categories enables an open and dynamic knowledge management, also a critical requisite for machine learning.
Turning thoughts into figures faces the intrinsic constraint of dimension: two dimensional representations cannot cope with complexity.
Making up his mind about knowledge dimensions: actual world, descriptions, and reproductions (Jan van der Straet)
So, lest they be limited to flat and shallow thinking, mind cartographers have to introduce the cognitive equivalent of geographical layers (nature, demography, communications, economy,…), and archetypes (mountains, rivers, cities, monuments, …)
Nodes: What’s The Map About
Nodes in maps (aka roots, handles, …) are meant to anchor thinking threads. Given that human thinking is based on the processing of symbolic representations, mind mapping is expected to progress wide and deep into the nature of nodes: concepts, topics, actual objects and phenomena, artifacts, partitions, or just terms.
What’s The Map About
It must be noted that these archetypes are introduced to characterize symbolic representations independently of domain semantics.
Connectors: Cognitive Primitives
Nodes in maps can then be connected as children or siblings, the implicit distinction being some kind of refinement for the former, some kind of equivalence for the latter. While such a semantic latitude is clearly a key factor of creativity, it is also behind the poor scaling of maps with complexity.
A way to frame complexity without thwarting creativity would be to define connectors with regard to cognitive primitives, independently of nodes’ semantics:
References: connect nodes as terms.
Associations: connect nodes with regard to their structural, functional, or temporal proximity.
Analogies: connect nodes with regard to their structural or functional similarities.
At first, with shallow nodes defined as terms, connections can remain generic; then, with deeper semantic levels introduced, connectors could be refined accordingly for concepts, documentation, actual objects and phenomena, artifacts,…
Connectors are aligned with basic cognitive mechanisms of metonymy (associations) and analogy (similarities)
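A possible rendering of these primitives, assuming hypothetical names and a plain Python graph structure rather than any particular mind-mapping tool:

```python
# A minimal sketch (hypothetical names): mind-map nodes connected through
# cognitive primitives set independently of the nodes' semantics.
from dataclasses import dataclass, field
from enum import Enum

class Connector(Enum):
    REFERENCE = "reference"      # connects nodes as terms
    ASSOCIATION = "association"  # structural, functional, or temporal proximity
    ANALOGY = "analogy"          # structural or functional similarity

@dataclass
class Node:
    label: str
    kind: str = "term"  # could later be refined: concept, topic, artifact, ...

@dataclass
class MindMap:
    edges: list = field(default_factory=list)

    def connect(self, source, target, how: Connector):
        # The connector type frames complexity without constraining semantics.
        self.edges.append((source, target, how))

# Usage: generic connections between shallow nodes defined as terms.
m = MindMap()
car, reservation = Node("Car"), Node("Reservation")
m.connect(reservation, car, Connector.ASSOCIATION)
```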
Semantics: Extensional vs Intensional
Given mapping primitives defined independently of domain semantics, the next step is to take into account mapping purposes:
Extensional semantics deal with categories of actual instances of objects or phenomena.
Intensional semantics deal with specifications of objects or phenomena.
That distinction can be applied to basic semantic archetypes (people, roles, events, …) and used to distinguish actual contexts, symbolic representations, and specifications, e.g:
Extensions (full border) are about categories of instances, intensions (dashed border) are about specifications
Car (object) refers to context, not to be confused with Car (surrogate), which specifies the symbolic counterpart: the former is extensional (actual instances), the latter intensional (symbolic representations)
Maintenance Process is extensional (identified phenomena), Operation is intensional (specifications).
Reservation and Driver are symbolic representations (intensional), Person is extensional (identified instances).
It must be remembered that whereas the choice is discretionary, contingent on semantic contexts and modeling purposes (‘as-it-is’ vs ‘as-it-should-be’), its consequences are not, because the choice determines abstraction semantics.
For example, the records for cars, drivers, and reservations are deemed intensional because they are defined by business concerns. Alternatively, instances of persons and companies are defined by contexts and therefore dealt with as extensional descriptions.
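The same examples could be tagged programmatically; a minimal sketch with hypothetical names:

```python
# A minimal sketch (hypothetical names) tagging the examples above with
# extensional or intensional semantics.
from dataclasses import dataclass
from enum import Enum

class Semantics(Enum):
    EXTENSIONAL = "extensional"  # categories of actual instances
    INTENSIONAL = "intensional"  # specifications of objects or phenomena

@dataclass
class MapNode:
    label: str
    semantics: Semantics

nodes = [
    MapNode("Car (object)", Semantics.EXTENSIONAL),     # actual instances in context
    MapNode("Car (surrogate)", Semantics.INTENSIONAL),  # symbolic counterpart
    MapNode("Person", Semantics.EXTENSIONAL),           # identified instances
    MapNode("Reservation", Semantics.INTENSIONAL),      # defined by business concerns
    MapNode("Driver", Semantics.INTENSIONAL),           # symbolic representation
]
```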
Abstractions: Subsets & Sub-types
Thinking can be characterized as a balancing act between making distinctions and managing the ensuing complexity. To that end, the human edge over other animal species is the use of symbolic representations for specialization and generalization.
That critical mechanism of human thinking is often overlooked by mind maps due to a confused understanding of inheritance semantics:
Strong inheritance deals with instances: specialization defines subsets and generalization is defined by shared structures and identities.
Weak inheritance deals with specifications: specialization defines sub-types and generalization is defined by shared features.
Inheritance semantics: shared structures (dark) vs shared features (white)
The combination of nodes (intension/extension) and inheritance (structures/features) semantics gives cartographers two hands: a free one for creative distinctions, and a safe one for the ensuing complexity. e.g:
Intension and weak inheritance: environments (extension) are partitioned according to regulatory constraints (intension); specialization deals with subtypes and generalization is defined by shared features.
Extension and strong inheritance: cars (extension) are grouped according to motorization; specialization deals with subsets and generalization is defined by shared structures and identities.
Intension and strong inheritance: corporate sub-type inherits the identification features of type Reservation (intension).
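A minimal sketch of the two inheritance semantics, with hypothetical names; strong inheritance is rendered as subsets of identified instances, weak inheritance as sub-types sharing features:

```python
# A minimal sketch (hypothetical names) of the two inheritance semantics.
from dataclasses import dataclass

# Weak inheritance: sub-types share features; no instances are implied.
@dataclass
class Reservation:                        # intensional type
    reservation_id: str                   # identification feature
    date: str

@dataclass
class CorporateReservation(Reservation):  # sub-type inheriting identification
    company: str

# Strong inheritance: specialization defines subsets of identified instances,
# here cars grouped by motorization while sharing structure and identity.
cars = [{"vin": "V1", "motor": "electric"}, {"vin": "V2", "motor": "diesel"}]
electric_cars = [c for c in cars if c["motor"] == "electric"]  # a subset
```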
Mind maps built on these principles could provide a common thesaurus encompassing the whole of enterprise data, information and knowledge.
Intelligence: Data, Information, Knowledge
Considering that mind maps combine intelligence and cartography, they may have some use for enterprise architects, in particular with regard to economic intelligence, i.e the integration of information processing, from data mining to knowledge management and decision-making:
Data provides the raw input, without clear structures or semantics (terms or aspects).
Categories are used to process data into information on one hand (extensional nodes), design production systems on the other hand (intensional nodes).
Abstractions (concepts) make knowledge from information by putting it to use.
Conclusion
From that perspective, mind maps could serve as front-ends for enterprise architecture ontologies, offering a layered cartography that could be organized according to concerns:
Enterprise architects would look at physical environments, business processes, and functional and technical systems architectures.
Using layered maps to visualize enterprise architectures
Knowledge managers would take a different perspective and organize the maps according to the nature and source of data, information, and knowledge.
Using layered maps to build economic intelligence
As demonstrated by geographic information systems, maps built on clear semantics can be combined to serve a wide range of purposes; furthering the analogy with geolocation assistants, layered mind maps could be annotated with punctuation marks (e.g ?, !, …) in order to support problem-solving and decision-making.
Given the digitization of enterprise environments, engineering processes have to be entwined with business ones while kept in sync with enterprise architectures. That calls for new threads of collaboration taking into account the integration of business and engineering processes as well as the extension to business environments.
Collaboration can be personal and direct, or collective and mediated (Wang Qingsong)
Whereas models are meant to support communication, traditional approaches are already straining when used beyond software generation, that is collaboration between humans and CASE tools. Ontologies, which can be seen as a higher form of models, could enable a qualitative leap for systems collaborative engineering at enterprise level.
Systems Engineering: Contexts & Concerns
To begin with contents, collaborations should be defined along three axes:
Requirements: business objectives, enterprise organization, and processes, with regard to systems functionalities.
Feasibility: business requirements with regard to architectures capabilities.
Architectures: supporting functionalities with regard to architecture capabilities.
Engineering Collaborations at Enterprise Level
Since these axes are usually governed by different organizational structures and set along different time-frames, collaborations must be supported by documentation, especially models.
Shared Models
In order to support collaborations across organizational units and time-frames, models have to bring together perspectives which are by nature orthogonal:
Contexts, concerns, and languages: business vs engineering.
Time-frames and life-cycle: business opportunities vs architecture stability.
Harnessing MBSE to EA
That could be achieved if engineering models were harnessed to enterprise ones for contexts and concerns, which in turn requires the integration of processes.
Processes Integration
As already noted, the integration of business and engineering processes is becoming a key success factor.
For that purpose collaborations would have to take into account the different time-frames governing changes in business processes (driven by business value) and engineering ones (governed by assets life-cycles):
Business requirements engineering is synchronic: changes must be kept in line with architectures capabilities (full line).
Software engineering is diachronic: developments can be carried out along their own time-frame (dashed line).
Synchronic (full) vs diachronic (dashed) processes.
Application-driven projects usually focus on users’ value and just-in-time delivery; that can be best achieved with personal collaboration within teams. Architecture-driven projects usually affect assets and non-functional features and therefore require collaboration between organizational units.
Collaboration: Direct or Mediated
Collaboration can be achieved directly or through some mediation, the former being a default option for applications, the latter a necessary one for architectures.
Both can be defined according to basic cognitive and organizational mechanisms and supported by a mix of physical and virtual spaces to be dynamically redefined depending on activities, projects, locations, and organisation.
Direct collaborations are carried out between individuals with or without documentation:
Immediate and personal: direct collaboration among 5 to 15 participants with shared objectives and responsibilities. That would correspond to agile project teams (a).
Delayed and personal: direct collaboration across teams with shared knowledge but with different objectives and responsibilities. That would tally with social networks circles (c).
Collaborations
Mediated collaborations are carried out between organizational units through unspecified individual members, hence the need of documentation, models or otherwise:
Code generation from platform or domain specific models (b).
Model transformation across architecture layers and business domains (d)
Depending on scope and mediation, three basic types of collaboration can be defined for applications, architecture, and business intelligence projects.
Projects & Collaborations
As it happens, collaboration archetypes can be associated with these profiles.
Collaboration Mechanisms
The agile development model (under various guises) is the option of choice whenever shared ownership and continuous delivery are possible. Application projects can thus be carried out autonomously, with collaborations circumscribed to team members and relying on the backlog mechanism.
Projects set across enterprise architectures cannot be carried out without taking into account phasing constraints. While ill-fated Waterfall methods have demonstrated the pitfalls of procedural solutions, phasing constraints can be dealt with through a roundabout mechanism combining iterative and declarative schemes.
Engineering vs Business Driven Collaborations
With collaborative engineering upgraded at enterprise level, the main challenge is to iron out frictions between application and architecture projects and ensure the continuity, consistency and effectiveness of enterprise activities. That can be achieved with roundabouts used as a collaboration mechanism between projects, whatever their nature:
Shared models are managed at roundabout level.
Phasing dependencies are set in terms of assertions on shared models.
Depending on constraints projects are carried out directly (1,3) or enter roundabouts (2), with exits conditioned by the availability of models.
Engineering driven collaboration: roundabout and backlogs
Moreover, with engineering embedded in business processes, collaborations must also bring together operational analytics, decision-making, and business intelligence. Here again, shared models are to play a critical role:
Enterprise descriptive and prescriptive models for information maps and objectives
Environment predictive models for data and business understanding.
Business driven collaboration: operations and business intelligence
Whereas both engineering and business driven collaborations depend on sharing information and knowledge, the latter have to deal with open and heterogeneous semantics. As a consequence, collaborations must be supported by shared representations and proficient communication languages.
Ontologies & Representations
Ontologies are best understood as models’ backbones, to be fleshed out or detailed according to context and objectives, e.g:
Thesaurus, with a focus on terms and documents.
Systems modeling, with a focus on integration, e.g Zachman Framework.
Classifications, with a focus on range, e.g Dewey Decimal System.
Meta-models, with a focus on model based engineering, e.g models transformation.
Conceptual models, with a focus on understanding, e.g legislation.
Knowledge management, with a focus on reasoning, e.g semantic web.
As such they can provide the pillars supporting the representation of the whole range of enterprise concerns:
Taking a leaf from Zachman’s matrix, ontologies can also be used to differentiate concerns with regard to architecture layers: enterprise, systems, platforms.
Last but not least, ontologies can be profiled with regard to the nature of external contexts, e.g:
Institutional: Regulatory authority, steady, changes subject to established procedures.
Professional: Agreed upon between parties, steady, changes subject to established procedures.
Corporate: Defined by enterprises, changes subject to internal decision-making.
Social: Defined by usage, volatile, continuous and informal changes.
Personal: Customary, defined by named individuals (e.g research paper).
Cross profiles: capabilities, enterprise architectures, and contexts.
Ontologies & Communication
If collaborations have to cover engineering as well as business descriptions, communication channels and interfaces will have to combine the homogeneous and well-defined syntax and semantics of the former with the heterogeneous and ambiguous ones of the latter.
With ontologies represented as RDF (Resource Description Framework) graphs, the first step would be to sort out truth-preserving syntax (applied independently of domains) from domain specific semantics.
RDF graphs (top) support formal (bottom left) and domain specific (bottom right) semantics.
On that basis it would be possible to separate representation syntax from contents semantics, and to design communication channels and interfaces accordingly.
That would greatly facilitate collaborations across externally defined ontologies as well as their mapping to enterprise architecture models.
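As a sketch of that separation, assuming a hypothetical business namespace and rdflib: the domain-specific semantics sit in the triples, while the RDFS entailment applied to them is truth-preserving and indifferent to what the domain terms mean.

```python
# A sketch assuming a hypothetical business namespace: the same rdflib graph
# holds domain-specific triples, while the RDFS entailment applied to them is
# truth-preserving and ignores what the domain terms mean.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

BIZ = Namespace("http://example.org/business#")  # placeholder domain vocabulary
g = Graph()
g.bind("biz", BIZ)

# Domain-specific semantics: business categories and facts.
g.add((BIZ.Reservation, RDF.type, RDFS.Class))
g.add((BIZ.CorporateReservation, RDFS.subClassOf, BIZ.Reservation))
g.add((BIZ.r42, RDF.type, BIZ.CorporateReservation))
g.add((BIZ.r42, RDFS.label, Literal("Reservation #42")))

def instances_of(graph, cls):
    # Truth-preserving, domain-independent processing: RDFS subclass entailment.
    subclasses = {cls} | set(graph.transitive_subjects(RDFS.subClassOf, cls))
    return {s for sub in subclasses for s in graph.subjects(RDF.type, sub)}

print(instances_of(g, BIZ.Reservation))  # includes biz:r42
```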
Conclusion
To summarize, the benefits of ontological frames for collaborative engineering can be articulated around four points:
A clear-cut distinction between representation semantics and truth-preserving syntax.
A common functional architecture for all user interfaces, humans or otherwise.
Modular functionalities for specific semantics on one hand, generic truth-preserving and cognitive operations on the other hand.
Profiled ontologies according to concerns and contexts.
A critical fifth benefit could be added with regard to business intelligence: combined with deep learning capabilities, ontologies would extend the scope of collaboration to explicit as well as implicit knowledge, the former already framed by languages, the latter still open to interpretation and discovery.
P.S.
Knowledge graphs, which have become a key component of knowledge management, are best understood as a reincarnation of ontologies.
Given the number and verbosity of alternative definitions pertaining to enterprise and systems architectures, common sense would suggest circumspection if not agnosticism. Instead, fierce wars are endlessly waged for semantic positions built on sand hills bound to crumble under the feet of whoever tries to stand defending them.
Such doomed attempts appear to be driven by a delusion seeing concepts as frozen celestial bodies; fortunately, simple-minded catalogs of unyielding definitions are progressively pushed aside by the will to understand (and milk) the new complexity of business environments.
Business Intelligence: Mapping Semantics to Circumstances
As long as information systems could be kept behind Chinese walls semantic autarky was of limited consequences. But with enterprises’ gates collapsing under digital flows, competitive edges increasingly depend on open and creative business intelligence (BI), in particular:
Data understanding: giving form and semantics to massive and continuous inflows of raw observations.
Business understanding: aligning data understanding with business objectives and processes.
Modeling: consolidating data and business understandings into descriptive, predictive, or operational models.
Evaluation: assessing and improving accuracy and effectiveness of understandings with regard to business and decision-making processes.
BI: Mapping Semantics to Circumstances
Since BI has to take into account the continuity of enterprise’s objectives and assets, the challenge is to dynamically adjust the semantics of external (business environments) and internal (objects and processes) descriptions. That could be explained in terms of gravitational semantics.
Semantic Galaxies
Assuming concepts are understood as stars wheeling across unbounded and expanding galaxies, semantics could be defined by gravitational forces and proximity between:
Intensional concepts (stars) bearing necessary meaning set independently of context or purpose.
Extensional concepts (planets) orbiting intensional ones. While their semantics is aligned with a single intensional concept, they bear enough gravity of their own to create a semantic environment.
On that account semantic domains would be associated with stars and their planets, with galaxies regrouping stars (concepts) and systems (domains) bound by gravitational forces (semantics).
Semantic Dimensions & the Morphing of Concepts
While systems don’t leave much, if any, room for semantic wanderings, human languages are as pliant, plastic, and versatile as they can be. Hence the need for business intelligence to span the stretch between open and fuzzy human semantics and systems’ straight-jacketed modeling languages.
That can be done by framing the morphing of concepts along Zachman’s architecture description: intensional concepts, being detached from specific contexts and concerns, are best understood as semantic roots able to breed multi-faceted extensions, to be eventually coerced into system specifications.
Framing concepts metamorphosis along Zachman’s architecture dimensions
Managing The Alignment of Planets
Like stars, concepts can be apprehended through a mix of reason and perception:
Figured out from a conceptual void waiting to be filled.
Fortuitously discovered in the course of an argument.
The benefit in both cases would be to delay verbal definitions and so avoid preempted or biased understandings: as with Schrödinger’s cat, trying to lock up meanings with bare words often breaks their semantic integrity, shattering scraps in every direction.
In contrast, semantic orbits and alignments open the door to the dynamic management of overlapping definitions across conceptual galaxies. As it happens, that new paradigm could be a game changer for enterprise architecture and knowledge management.
From Business To Economic Intelligence
Traditional approaches to information systems and knowledge management are made obsolete by the combination of digital environments, big data, and the spreading of artificial brains with deep learning abilities.
To cope with these changes enterprises have to better integrate business intelligence with information systems, and that cannot be achieved without redefining the semantic and functional dimensions of enterprise architectures.
Semantic dimensions deal with contexts and domains of concerns: statutory, business, organization, engineering. Since these dimensions are by nature different, their alignment has to be managed; that can be achieved with conceptual galaxies and profiled ontologies.
Functional dimensions are to form the backbone of enterprise architectures, their primary purpose being the integration of data analytics, information processing, and decision-making. That approach, often labelled as economic intelligence, defines data, information, and knowledge, respectively as resources, assets, and services:
Resources: data is captured through continuous and heterogeneous flows from a wide range of sources.
Assets: information is built by adding identity, structure, and semantics to data.
Services: knowledge is information put to use through decision-making.
Economic Intelligence: Bringing data mining, information processing, and knowledge management into a single conceptual framework
An ontological kernel has been developed as a Proof of Concept using Protégé/OWL 2.
Examples
Data
Wikipedia: Any sequence of one or more symbols given meaning by specific act(s) of interpretation; requires interpretation to become information.
Merriam-Webster: Factual information such as measurements or statistics; information in digital form that can be transmitted or processed; information and noise from a sensing device or organ that must be processed to be meaningful.
Cambridge Dictionary: Information, especially facts or numbers; information in an electronic form that can be stored and used by a computer.
Collins: Information that can be stored and used by a computer program.
TOGAF: Basic unit of information having a meaning and that may have subcategories (data items) of distinct units and values.
System
Wikipedia: A regularly interacting or interdependent group of items forming a unified whole; Every system is delineated by its spatial and temporal boundaries, surrounded and influenced by its environment, described by its structure and purpose and expressed in its functioning.
Merriam-Webster: A regularly interacting or interdependent group of items forming a unified whole
Business Dictionary: A set of detailed methods, procedures and routines created to carry out a specific activity, perform a duty, or solve a problem; organized, purposeful structure that consists of interrelated and interdependent elements.
Cambridge Dictionary: A set of connected things or devices that operate together
Collins Dictionary: A way of working, organizing, or doing something which follows a fixed plan or set of rules; a set of things / rules.
TOGAF: A collection of components organized to accomplish a specific function or set of functions (from ISO/IEC 42010:2007).
An often overlooked benefit of artificial intelligence has been a renewed interest in seminal philosophical and cognitive topics; ontologies coming top of the list.
The Thinker Monkey, Breviary of Mary of Savoy
Yet that interest has often been led astray by misguided perspectives, in particular:
Universality: one-fits-all approaches are pointless if not self-defeating considering that ontologies are meant to target specific domains of concerns.
Implementation: the focus is usually put on representation schemes (commonly known as Resource Description Frameworks, or RDFs), instead of the nature of targeted knowledge and the associated cognitive capabilities.
Those misconceptions, often combined, may explain the limited practical inroads of ontologies. Conversely, they also point to ontologies’ wherewithal for enterprises immersed into boundless and fluctuating knowledge-driven business environments.
Ontologies as Assets
Whatever the name of the matter (data, information, or knowledge), there isn’t much argument about its primacy for business competitiveness; insofar as enterprises are concerned, knowledge is recognized as a key asset, as valuable as, if not more than, financial ones, and should be managed accordingly. Pushing the comparison still further, data would be likened to liquidity, information to fixed income investment, and knowledge to capital ventures. To summarize, assets whatever their nature lose value when left asleep and bear fruit when kept awake; that’s doubly the case for data and information:
Digitized business flows accelerate data obsolescence and make it continuous.
Shifting and porous enterprise boundaries and market segments call for constant updates and adjustments of enterprise information models.
But assessing the business value of knowledge has always been a matter of intuition rather than accounting, even when it can be patented; and most knowledge takes shape well beyond regulatory reach. Nonetheless, knowledge is not manna from heaven but the outcome of information processing, so assessing the capabilities of such processes could help.
Admittedly, traditional modeling methods are too stringent for that purpose, and looser schemes are needed to accommodate the open range of business contexts and concerns; as already expounded, that’s precisely what ontologies are meant to do, e.g:
Systems modeling, with a focus on integration, e.g Zachman Framework.
Classifications, with a focus on range, e.g Dewey Decimal System.
Conceptual models, with a focus on understanding, e.g legislation.
Knowledge management, with a focus on reasoning, e.g semantic web.
And ontologies can do more than bringing under a single roof the whole of enterprise knowledge representations: they can also be used to nurture and crossbreed symbolic assets and develop innovative ones.
Ontologies Benefits
Knowledge is best understood as information put to use; accounting rules may be disputed but there is no argument about the benefits of a canny combination of information, circumstances, and purpose. Nonetheless, assessing knowledge returns is hampered by the lack of traceability: if a part of knowledge is explicit and subject to symbolic representation, another is implicit and manifests itself only through actual behaviors. At philosophical level it’s the line drawn by Wittgenstein: “The limits of my language mean the limits of my world”; at technical level it’s AI’s two-lanes approach: symbolic rule-based engines vs non symbolic neural networks; at corporate level implicit knowledge is seen as some unaccounted for aspect of intangible assets when not simply blended into corporate culture. With knowledge becoming a primary success factor, a more reasoned approach of its processing is clearly needed.
To begin with, symbolic knowledge can be plied by logic, which, quoting Wittgenstein again, “takes care of itself; all we have to do is to look and see how it does it.” That would be true on two conditions:
Domains are to be well circumscribed.
A water-tight partition must be secured between the logic of representations and the semantics of domains.
That could be achieved with modular and specific ontologies built on a clear distinction between common representation syntax and specific domains semantics.
As for non-symbolic knowledge, its processing has long been overshadowed by the preeminence of symbolic rule-based schemes, that is until neural networks got the edge and deep learning overturned the playground. In a few years, practically unlimited access to raw data and the exponential growth in computing power have opened the door to massive sources of unexplored knowledge which is paradoxically both directly relevant yet devoid of immediate meaning:
Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.
Assuming that deep learning can transmute raw base metals into knowledge gold, enterprises would need to understand, assess, and improve the refining machinery. That could be done with ontological frames.
A Proof of Concept
Compared to tangible assets knowledge may appear as very elusive, yet, and contrary to intangible ones, knowledge is best understood as the outcome of processes that can be properly designed, assessed, and improved. And that can be achieved with profiled ontologies.
As a Proof of Concept, an ontological kernel has been developed along two principles:
A clear-cut distinction between truth-preserving representation and domain specific semantics.
Profiled ontologies designed according to the nature of contents (concepts, documents, or artifacts), layers (environment, enterprise, systems, platforms), and contexts (institutional, professional, corporate, social).
That provides for a seamless integration of information processing, from data mining to knowledge management and decision making:
Data is first captured through aspects.
Categories are used to process data into information on one hand, design production systems on the other hand.
Concepts serve as bridges to knowledgeable information.
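A minimal sketch of such profiles, with hypothetical names; the enumerations follow the taxonomy above:

```python
# A minimal sketch (hypothetical names) of ontology profiles combining the
# nature of contents, architecture layer, and governance context.
from dataclasses import dataclass
from enum import Enum

class Contents(Enum):
    CONCEPTS = "concepts"
    DOCUMENTS = "documents"
    ARTIFACTS = "artifacts"

class Layer(Enum):
    ENVIRONMENT = "environment"
    ENTERPRISE = "enterprise"
    SYSTEMS = "systems"
    PLATFORMS = "platforms"

class Context(Enum):
    INSTITUTIONAL = "institutional"
    PROFESSIONAL = "professional"
    CORPORATE = "corporate"
    SOCIAL = "social"

@dataclass
class OntologyProfile:
    name: str
    contents: Contents
    layer: Layer
    context: Context

# Example profiles for a modular, profiled knowledge architecture.
profiles = [
    OntologyProfile("Regulations", Contents.DOCUMENTS, Layer.ENVIRONMENT, Context.INSTITUTIONAL),
    OntologyProfile("BusinessObjects", Contents.CONCEPTS, Layer.ENTERPRISE, Context.CORPORATE),
    OntologyProfile("SystemModels", Contents.ARTIFACTS, Layer.SYSTEMS, Context.CORPORATE),
]
```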
To be of any use for enterprises, ontologies have to embrace a wide range of contexts and concerns, often ill-defined for environments, rather well expounded for systems.
And now that enterprises have to compete in open, digitized, and networked environments, business and systems ontologies have to be combined into modular knowledge architectures.
Ontologies & Contexts
If open-ended business contexts and concerns are to be taken into account, the first step should be to characterize ontologies with regard to their source, justification, and the stability of their categories, e.g:
Institutional: Regulatory authority, steady, changes subject to established procedures.
Professional: Agreed upon between parties, steady, changes subject to accords.
Corporate: Defined by enterprises, changes subject to internal decision-making.
Social: Defined by usage, volatile, continuous and informal changes.
Personal: Customary, defined by named individuals (e.g research paper).
Assuming such an external taxonomy, the next step would be to see what kind of internal (i.e enterprise architecture) ontologies it can be fitted into, as is the case with the Zachman framework.
Zachman’s taxonomy is built on well-established concepts (Who, What, How, Where, When) applied across architecture layers for enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies). These layers can be generalized and applied uniformly across external contexts, from well-defined (e.g regulations) to fuzzy (e.g business prospects or new technologies) ones, e.g:
Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).
That “divide to conquer” strategy is to serve two purposes:
By bridging the gap between internal and external taxonomies it significantly enhances the transparency of governance and decision-making.
By applying the same motif (Who, What, How, Where, When) across the semantics of contexts, it opens the door to a seamless integration of all kinds of knowledge: enterprise, professional, institutional, scientific, etc.
As can be illustrated using Zachman concepts, the benefits are straightforward at enterprise architecture level (e.g procurement), due to the clarity of supporting ontologies; not so for external ones, which are by nature open and overlapping and often come with blurred semantics.
Ontologies & Concerns
A broad survey of RDF-based ontologies demonstrates how semantic overlaps and folds can be sorted out using built-in differentiation between domains’ semantics on one hand, and the structure and processing of symbolic representations on the other hand. But such schemes are proprietary, and evidence shows their lines seldom tally, with dire consequences for interoperability: even without taking into account relationships and integrity constraints, weaving together ontologies from different sources is bound to be cumbersome, the costs substantial, and the outcome often reduced to a muddy maze of ambiguous semantics.
Knowledge graphs have tackled the difficulty by setting apart representation (e.g RDF) and contents semantics (aka ontologies), and their impressive performances across a wide range of domains bear witness to the soundness of the approach.
The governance-driven taxonomy introduced above deals with contexts and consequently with coarse-grained modularity. It should be complemented by a fine-grained one to be driven by concerns, more precisely by the epistemic nature of the individual instances to be denoted. As it happens, that could also tally with the Zachman’s taxonomy:
Thesaurus: ontologies covering terms and concepts.
Documents: ontologies covering documents with regard to topics.
Business: ontologies of relevant enterprise organization and business objects and activities.
Engineering: symbolic representation of organization and business objects and activities.
Ontologies: Purposes & Targets
Enterprises could then pick and combine templates according to domains of concern and governance. Taking an on-line insurance business for example, enterprise knowledge architecture would have to include:
Medical thesaurus and consolidated regulations (Knowledge).
Principles and resources associated with the web platform (Engineering).
Description of products (e.g vehicles) and services (e.g insurance plans) from partners (Business).
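A minimal sketch of that combination for the insurance example, assuming hypothetical module names:

```python
# A minimal sketch (hypothetical module names): picking and combining
# ontology templates for the on-line insurance example, keyed by concern
# and governance context.
insurance_knowledge_architecture = {
    ("knowledge", "institutional"): ["MedicalThesaurus", "ConsolidatedRegulations"],
    ("engineering", "corporate"):   ["WebPlatformPrinciples", "WebPlatformResources"],
    ("business", "professional"):   ["PartnerVehicles", "InsurancePlans"],
}

def modules_for(concern):
    # Concern and context jointly determine which modules apply, which keeps
    # overlaps limited and ontologies modular.
    return [m for (c, _), mods in insurance_knowledge_architecture.items()
            if c == concern for m in mods]

print(modules_for("business"))  # ['PartnerVehicles', 'InsurancePlans']
```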
Such designs of ontologies according to the governance of contexts and the nature of concerns would significantly reduce blanket overlaps and improve the modularity and transparency of ontologies.
On a broader perspective, that policy will help to align knowledge management with EA governance by setting apart ontologies defined externally (e.g regulations) from the ones set through decision-making, strategic (e.g platform) or tactical (e.g partnerships).
Open Ontologies’ Benefits
Benefits from open and formatted ontologies built along an explicit distinction between the semantics of representation (aka ontology syntax) and the semantics of context can be directly identified for:
Modularity: the knowledge basis of enterprise architectures could be continuously tailored to changes in markets and corporate structures without impairing enterprise performances.
Integration: the design of ontologies with regard to the nature of targets and stability of categories could enable built-in alignment mechanisms between knowledge architectures and contexts.
Interoperability: limited overlaps and finer granularity are to greatly reduce frictions when ontologies bearing out business processes are to be combined or extended.
Reliability: formatted ontologies can be compared to typed programming languages with regard to transparency, internal consistency, and external validity.
Last but not least, such reasoned design of ontologies may open new perspectives for the collaboration between cognitive humans and pretending ones.
Tests are meant to ensure that nothing will go amiss. Assuming that expected hazards can be duly dealt with beforehand, the challenge is to guard against unexpected ones.
Unexpected Outcome (Ariel Schlesinger)
That would require the scripting of every possible outcome in an unlimited range of unknown circumstances, and that’s where Deep Learning may help.
What to Look For
As Donald Rumsfeld once famously said, there are things that we know we don’t know, and things we don’t know we don’t know; hence the need of setting things apart depending on what can be known and how, and build the scripts accordingly:
Business requirements: tests can be designed with respect to explicit specifications; yet some room should also be left for changes in business circumstances.
Functional requirements: assuming business requirements are satisfied, the part played by supporting systems can be comprehensively tested with respect to well-defined boundaries and operations.
Quality of service: assuming business and functional requirements are satisfied, tests will have to check how human interfaces and resources are to cope with users behaviors and expectations which, by nature, cannot be fully anticipated.
Technical requirements: assuming business and functional requirements are satisfied as well as users’ expectations for service, deployment, maintenance, and operations are to be tested with regard to feasibility and costs.
Automated testing has to take into account these differences of scope and nature, from bounded and defined specifications to boundless, fuzzy and changing circumstances.
Automated Software Testing
Automated software testing encompasses two basic components: first the design of test cases (events, operations, and circumstances), then their scripted execution. Leading frameworks already integrate most of the latter together with the parts of the former targeting technical aspects like graphical user interfaces or system APIs. Artificial intelligence (AI) and machine learning (ML) have also been tried for automated test generation, yet with a scope limited by dependency on explicit knowledge, and consequently by the need for some “manual” teaching. That hurdle may be overcome by the deep learning ability to get direct (aka automated) access to implicit knowledge.
Reconnaissance: Known Knowns
Systems are designed artifacts, with the corollary that their components are fully defined and their behavior predictable. The design of technical test cases can therefore be derived from what is known of software and systems architectures, the former for test units, the latter for integration and acceptance tests. Deep learning could then mine recorded log-files in order to identify critical cases’ events and circumstances.
Exploration: Known Unknowns
Assuming that applications must be tested for use during their expected shelf life, some uncertainty has to be factored in for future business circumstances. Yet, assuming applications are designed to meet specific business objectives, such hypothetical circumstances should remain within known boundaries. In that context deep learning could be applied to exploration as well as policies:
Compared to technical test cases that can rely on the content of systems log-files, business and functional ones have to look outside and mine raw data from business environments.
In return, the relevancy of observations can be assessed with regard to business objectives, improved, and fed into the policy module in charge of defining test cases.
Blind Errands: Unknown Unknowns
Even with functional and technical capabilities well-tested and secured, quality of service may remain contingent on human quirks: instinctive or erratic behaviors that could thwart the best designed handrails. On one hand, and due to their very nature, such hazards are not to be easily forestalled by reasoned test cases; but on the other hand they don’t take place in a void but within known functional circumstances. Given that porosity of functional and cognitive layers, the validity of functional test cases may be compromised by unfathomable cognitive associations, and that could open the door to unmanageable regression. Enter deep learning and its ability to extract knowledge from insignificance.
Compared to business and functional test cases, hazards are not directly related to business activities. As a consequence, the learning process cannot be guided by business and functional test cases but has to chart unpredictable human behaviors. As it happens, that kind of learning combining random simulation with automated reinforcement is what makes the specificity of deep learning.
From Non-regression to Self-improvement
As a conclusion, if non-regression is to be the cornerstone of quality management, test cases are to be set along clear swim-lanes: business logic (independently of systems), supporting systems functionalities (for shared applications), user interfaces (for non-shared interactions). Then, since test cases are also run across swim-lanes, it opens the door to feedback, e.g unit test cases reassessed directly from business rules independently of systems functionalities, or functional test cases reassessed from users’ behaviors.
Considering that well-defined objectives, sound feedback mechanisms, and the availability of massive data from systems logs (internal) and business environment (external) are the main pillars of deep learning technologies, their combination in integrated frameworks could result in a qualitative leap toward self-improving automated test cases.
The objective of the National Information Exchange Model (NIEM) is to provide a “dictionary of agreed-upon terms, definitions, relationships, and formats that are independent of how information is stored in individual systems.”
NIEM’s model makes no difference between data and information (Alfred Jensen)
For that purpose NIEM’s model combines commonly agreed core elements with community-specific ones. Weighted against the benefits of simplicity, this architecture overlooks critical distinctions:
Inputs: Data vs Information
Dictionary: Lexicon and Thesaurus
Meanings: Lexical Items and Semantics
Usage: Roots and Aspects
That shallow understanding of information significantly hinders the exchange of information between business or institutional entities across overlapping domains.
Inputs: Data vs Information
Data is made of unprocessed observations, information makes sense of data, and knowledge makes use of information. Given that NIEM is meant to be an exchange between business or institutional users, it should have no concern with data mining or knowledge management.
As an exchange, NIEM should have no concern with data mining or knowledge management.
The problem is that, as conveyed by “core of data elements that are commonly understood and defined across domains, such as person, activity, document, location”, NIEM’s model makes no explicit distinction between data and information.
As a corollary, it implies that data may not only be meaningful, but universally so, which leads to a critical trap: as substantiated by data analytics, data is not supposed to mean anything before processed into information; to keep with examples, even if the definition of persons and locations may not be specific, the semantics of associated information is nonetheless set by domains, institutional, regulatory, contractual, or otherwise.
Data is meaningless, information meaning is set by semantic domains.
Not surprisingly, that medley of data and information is mirrored by NIEM’s dictionary.
Dictionary: Lexicon & Thesaurus
As far as languages are concerned, words (e.g “word”, “ξ∏¥”, “01100”) remain data items until associated with some meaning. For that reason dictionaries are built on different levels, first among them lexical and semantic ones:
Lexicons take items at their word and give each of them a self-contained meaning.
Thesauruses position meanings within overlapping galaxies of understandings held together by the semantic equivalent of gravitational forces; the meaning of words can then be weighted by the combined semantic gravity of neighbors.
In line with its shallow understanding of information, NIEM’s dictionary only caters for a lexicon of core standalone items associated with type descriptions to be directly implemented by information systems. But due to the absence of thesaurus, the dictionary cannot tackle the semantics of overlapping domains: if lexicons alone can deal with one-to-one mappings of items to meanings (a), thesauruses are necessary for shared (b) or alternative (c) mappings.
Shared or alternative meanings cannot be managed with lexicons
With regard to shared mappings (b), distinct lexical items (e.g qualification) have to be mapped to the same entity (e.g person). Whereas some shared features (e.g person’s birth date) can be unequivocally understood across domains, most are set through shared (professional qualification), institutional (university diploma), or specific (enterprise course) domains.
Conversely, alternative mappings (c) arise when the same lexical items (e.g “mole”) can be interpreted differently depending on context (e.g plastic surgeon, farmer, or secret service).
Whereas lexicons may be sufficient for the use of lexical items across domains (namespaces in NIEM parlance), thesauruses are necessary if meanings (as opposed to uses) are to be set across domains. But thesauruses being just tools are not sufficient by themselves to deal with overlapping semantics. That can only be achieved through a conceptual distinction between lexical and semantic envelops.
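The difference can be made concrete with a minimal sketch, using hypothetical entries: a lexicon supports only one-to-one mappings (a), while a thesaurus can carry shared (b) and alternative (c) mappings weighted by context.

```python
# A minimal sketch (hypothetical entries): a lexicon supports one-to-one
# mappings only (a); a thesaurus weights meanings by context, allowing
# shared (b) and alternative (c) mappings across domains.
lexicon = {
    "PersonBirthDate": "date of birth of a person",  # (a) one-to-one
}

thesaurus = {
    # (b) shared mapping: one entity, domain-dependent meanings.
    "qualification": {
        "professional": "certified professional qualification",
        "institutional": "university diploma",
        "corporate": "enterprise training course",
    },
    # (c) alternative mapping: one term, context-dependent entities.
    "mole": {
        "surgery": "skin blemish",
        "farming": "burrowing mammal",
        "intelligence": "undercover agent",
    },
}

def meaning(term, context):
    # Meaning is weighted by its semantic neighborhood (here, the context key).
    return thesaurus.get(term, {}).get(context) or lexicon.get(term, "unknown")
```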
Meanings: Lexical Items & Semantics
NIEM’s dictionary organizes names depending on namespaces and relationships:
Namespaces: core (e.g Person) or specific (e.g Subject/Justice).
Relationships: types (Counselor/Person) or properties (e.g PersonBirthDate).
NIEM’s Lexicon: Core (a) and specific (b) and associated core (c) and specific (d) properties
But since lexicons know only names, the organization is not orthogonal, with lexical items mapped indifferently to types and properties. The result being that, deprived of reasoned guidelines, lexical items are charted arbitrarily, e.g:
Based on core PersonType, the Justice namespace uses three different schemes to define similar lexical items:
“Counselor” is described with core PersonType.
“Subject” and “Suspect” are both described with specific SubjectType, itself a sub-type of PersonType.
“Arrestee” is described with specific ArresteeType, itself a sub-type of SubjectType.
Based on core EntityType:
The Human Services namespace bypasses core’s namesake and introduces instead its own specific EmployerType.
The Biometrics namespace bypasses possibly overlapping core Measurer and BinaryCaptured and directly uses core EntityType.
Lexical items are charted arbitrarily
Lest expanding lexical items clutter up dictionary semantics, some rules have to be introduced; yet, as noted above, these rules should be limited to information exchange and stop short of knowledge management.
Usage: Roots and Aspects
As far as information exchange is concerned, dictionaries have to deal with lexical and semantic meanings without encroaching on ontologies or knowledge representation. In practice that can be best achieved with dictionaries organized around roots and aspects:
Roots and structures (regular, black triangles) are used to anchor information units to business environments, source or destination.
Aspects (italics, white triangles) are used to describe how information units are understood and used within business environments.
Information exchanges are best supported by dictionaries organized around roots and aspects
As it happens that distinction can be neatly mapped to core concepts of software engineering.
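A minimal sketch of that organization, with hypothetical names; the mapping to software engineering would liken roots to identified objects and aspects to domain-specific views:

```python
# A minimal sketch (hypothetical names): information units anchored by a
# root (identity in the business environment) and qualified by aspects
# (how the unit is understood and used within domains).
from dataclasses import dataclass, field

@dataclass
class Root:
    identifier: str   # anchors the unit to its business environment
    structure: tuple  # identifying features shared across domains

@dataclass
class Aspect:
    domain: str       # namespace in which the aspect applies
    features: dict    # domain-specific understanding and usage

@dataclass
class InformationUnit:
    root: Root
    aspects: list = field(default_factory=list)

person = InformationUnit(
    Root("Person", ("family name", "birth date")),
    [Aspect("Justice", {"role": "Counselor"}),
     Aspect("HumanServices", {"role": "Employer"})],
)
```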
P.S. Thesauruses & Ontologies
Ontologies are systematic accounts of existence for whatever is considered, in other words some explicit specification of the concepts meant to make sense of a universe of discourse. From that starting point three basic observations can be made:
Ontologies are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
Ontologies are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.
With regard to models, only the second one sets ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes.
On that basis, ontologies can be understood as thesauruses describing galaxies of concepts (stars) and features (planets) held together by semantic gravitation weighted by similarity or proximity. As such ontologies should be NIEM’s tool of choice.
Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.
Deep Mind Learning (J.Bosh)
Deep Learning & the Depths of Intelligence
Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.
As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.
Like digging tunnels from both extremities, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that it comes to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.
Intelligence as a Cognitive Capability
Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:
Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.
That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. Along that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.
Intelligence as a Social Capability
Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:
Self-contained: problem-solving situations without opponent.
Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
Collaborative: non-zero-sum activities involving one or more intelligent agents.
That classification coincides with two basic divides regarding communication and social behaviors:
To begin with, human behavior is critically different when interacting with living species (humans or animals) and machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.
A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.
Between Intuition and Reason
The astonishing performances of Google’s AlphaGo have been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non-symbolic knowledge to the benefit of a renewed rationality.
Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.
Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.
From consumers behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.