Boost Your Mind Mapping

Preamble

Turning thoughts into figures faces the intrinsic constraint of dimension: two-dimensional representations cannot cope with complexity.

Making up his mind about knowledge dimensions: actual world, descriptions, and reproductions (Jan van der Straet, 1523-1605, A Natural Philosopher in His Study)

So, lest they be limited to flat and shallow thinking, mind cartographers have to introduce the cognitive equivalent of geographical layers (nature, demography, communications, economy, …) and archetypes (mountains, rivers, cities, monuments, …).

Nodes: What’s The Map About

Nodes in maps (aka roots, handles, …) are meant to anchor thinking threads. Given that human thinking is based on the processing of symbolic representations, mind mapping is expected to progress wide and deep into the nature of nodes: concepts, topics, actual objects and phenomena, artifacts, partitions, or just terms.

What’s The Map About

It must be noted that these archetypes are introduced to characterize symbolic representations independently of domain semantics.

Connectors: Cognitive Primitives

Nodes in maps can then be connected as children or siblings, the implicit distinction being some kind of refinement for the former, some kind of equivalence for the latter. While such a semantic latitude is clearly a key factor of creativity, it is also behind the poor scaling of maps with complexity.

A way to frame complexity without thwarting creativity would be to define connectors with regard to cognitive primitives, independently of nodes’ semantics:

  • References: connect nodes as terms.
  • Associations: connect nodes with regard to their structural, functional, or temporal proximity.
  • Analogies: connect nodes with regard to their structural or functional similarities.

At first, with shallow nodes defined as terms, connections can remain generic; then, with deeper semantic levels introduced, connectors could be refined accordingly for concepts, documentation, actual objects and phenomena, artifacts,…
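
By way of illustration, a minimal Python sketch (names are illustrative) of nodes typed by archetype and connected through these cognitive primitives:

    from dataclasses import dataclass, field
    from enum import Enum

    class Connector(Enum):
        """Cognitive primitives, defined independently of nodes' semantics."""
        REFERENCE = "reference"      # connects nodes as terms
        ASSOCIATION = "association"  # structural, functional, or temporal proximity
        ANALOGY = "analogy"          # structural or functional similarity

    @dataclass
    class Node:
        label: str
        archetype: str = "term"      # to be refined later: concept, object, artifact, ...
        links: list = field(default_factory=list)

        def connect(self, other: "Node", kind: Connector) -> None:
            self.links.append((kind, other))

    # Shallow start: nodes as terms, generic connections.
    car, vehicle = Node("Car"), Node("Vehicle")
    car.connect(vehicle, Connector.ANALOGY)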

Connectors are aligned with basic cognitive mechanisms of metonymy (associations) and analogy (similarities)

Semantics: Extensional vs Intensional

Given mapping primitives defined independently of domain semantics, the next step is to take into account mapping purposes:

  • Extensional semantics deal with categories of actual instances of objects or phenomena.
  • Intensional semantics deal with specifications of objects or phenomena.

That distinction can be applied to basic semantic archetypes (people, roles, events, …) and used to distinguish actual contexts, symbolic representations, and specifications, e.g:

Extensions (full border) are about categories of instances, intensions (dashed border) are about specifications
  • Car (object) refers to context, not to be confused with Car (surrogate), which specifies the symbolic counterpart: the former is extensional (actual instances), the latter intensional (symbolic representations).
  • Maintenance Process is extensional (identified phenomena), Operation is intensional (specifications).
  • Reservation and Driver are symbolic representations (intensional), Person is extensional (identified instances).

It must be kept in mind that whereas the choice is discretionary and contingent on semantic contexts and modeling purposes (‘as-it-is’ vs ‘as-it-should-be’), consequences are not, because the choice determines abstraction semantics.

For example, the records for cars, drivers, and reservations are deemed intensional because they are defined by business concerns. Alternatively, instances of persons and companies are defined by contexts and therefore dealt with as extensional descriptions.
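
A minimal Python sketch of the distinction, with extensions modeled as sets of identified instances and intensions as predicates; names and attributes are illustrative:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Extension:
        """Extensional node (full border): a category of identified instances."""
        name: str
        instances: set = field(default_factory=set)

    @dataclass
    class Intension:
        """Intensional node (dashed border): a specification, i.e a predicate."""
        name: str
        spec: Callable[[dict], bool]

    # Car (object) is extensional: actual, identified cars in context.
    cars = Extension("Car (object)", {"car#123", "car#456"})
    # Car (surrogate) is intensional: what a symbolic record must satisfy.
    car_surrogate = Intension("Car (surrogate)", lambda record: "registration" in record)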

Abstractions: Subsets & Sub-types

Thinking can be characterized as a balancing act between making distinctions and managing the ensuing complexity. To that end, humans’ edge over other animal species is the use of symbolic representations for specialization and generalization.

That critical mechanism of human thinking is often overlooked by mind maps due to a confused understanding of inheritance semantics:

  • Strong inheritance deals with instances: specialization defines subsets, and generalization is defined by shared structures and identities.
  • Weak inheritance deals with specifications: specialization defines sub-types, and generalization is defined by shared features.
Inheritance semantics: shared structures (dark) vs shared features (white)

The combination of nodes (intension/extension) and inheritance (structures/features) semantics gives cartographers two hands: a free one for creative distinctions, and a safe one for the ensuing complexity, e.g:

  • Intension and weak inheritance: environments (extension) are partitioned according to regulatory constraints (intension); specialization deals with sub-types and generalization is defined by shared features.
  • Extension and strong inheritance: cars (extension) are grouped according to motorization; specialization deals with subsets and generalization is defined by shared structures and identities.
  • Intension and strong inheritance: corporate sub-type inherits the identification features of type Reservation (intension).
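
A minimal Python sketch of the two semantics, using the examples above: subsets select identified instances, while sub-types extend specifications without committing to any instances:

    # Strong inheritance: specialization defines SUBSETS of identified instances;
    # generalization rests on shared structures and identities.
    cars = [{"id": 1, "motor": "electric"}, {"id": 2, "motor": "diesel"}]
    electric_cars = [c for c in cars if c["motor"] == "electric"]  # same identities

    # Weak inheritance: specialization defines SUB-TYPES of specifications;
    # generalization rests on shared features, with no commitment to instances.
    class Environment:                        # type defined by its features
        regulations: list

    class RegulatedEnvironment(Environment):  # sub-type: adds features only
        regulator: str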

Mind maps built on these principles could provide a common thesaurus encompassing the whole of enterprise data, information and knowledge.

Intelligence: Data, Information, Knowledge

Considering that mind maps combine intelligence and cartography, they may have some use for enterprise architects, in particular with regard to economic intelligence, i.e the integration of information processing, from data mining to knowledge management and decision-making:

  • Data provide the raw input, without clear structures or semantics (terms or aspects).
  • Categories are used to process data into information on one hand (extensional nodes), design production systems on the other hand (intensional nodes).
  • Abstractions (concepts) make knowledge from information by putting it to use.

Conclusion

Along that perspective mind maps could serve as front-ends for enterprise architecture ontologies, offering a layered cartography that could be organized according to concerns:

Enterprise architects would look at physical environments, business processes, and functional and technical systems architectures.

Using layered maps to visualize enterprise architectures

Knowledge managers would take a different perspective and organize the maps according to the nature and source of data, information, and knowledge.

Using layered maps to build economic intelligence

As demonstrated by geographic information systems, maps built on clear semantics can be combined to serve a wide range of purposes; furthering the analogy with geolocation assistants, layered mind maps could be annotated with punctuation marks (e.g ?, !, …) in order to support problem-solving and decision-making.

Economic Intelligence & Semantic Galaxies

Given the number and verbosity of alternative definitions pertaining to enterprise and systems architectures, common sense would suggest circumspection if not agnosticism. Instead, fierce wars are endlessly waged for semantic positions built on sand hills bound to crumble under the feet of whoever tries to stand defending them.

Nature & Nurture (Wang Xingwei)

Such doomed attempts appear to be driven by a delusion seeing concepts as frozen celestial bodies; fortunately, simple-minded catalogs of unyielding definitions are progressively pushed aside by the will to understand (and milk) the new complexity of business environments.

Business Intelligence: Mapping Semantics to Circumstances

As long as information systems could be kept behind Chinese walls, semantic autarky was of limited consequence. But with enterprises’ gates collapsing under digital flows, competitive edges increasingly depend on open and creative business intelligence (BI), in particular:

  • Data understanding: giving form and semantics to massive and continuous inflows of raw observations.
  • Business understanding: aligning data understanding with business objectives and processes.
  • Modeling: consolidating data and business understandings into descriptive, predictive, or operational models.
  • Evaluation: assessing and improving accuracy and effectiveness of understandings with regard to business and decision-making processes.
BI: Mapping Semantics to Circumstances

Since BI has to take into account the continuity of enterprise’s objectives and assets, the challenge is to dynamically adjust the semantics of external (business environments) and internal (objects and processes) descriptions. That could be explained in terms of gravitational semantics.

Semantic Galaxies

Assuming concepts are understood as stars wheeling across unbounded and expanding galaxies, semantics could be defined by gravitational forces and proximity between:

  • Intensional concepts (stars) bearing necessary meaning set independently of context or purpose.
  • Extensional concepts (planets) orbiting intensional ones. While their semantics is aligned with a single intensional concept, they retain enough gravity of their own to create a semantic environment.

On that account semantic domains would be associated with stars and their planets, with galaxies regrouping stars (concepts) and systems (domains) bound by gravitational forces (semantics).

Conceptual Stars & Planets

Semantic Dimensions & the Morphing of Concepts

While systems don’t leave much, if any, room for semantic wanderings, human languages are nothing if not pliant, plastic, and versatile. Hence the need for business intelligence to span the stretch between open and fuzzy human semantics and systems’ straitjacketed modeling languages.

That can be done by framing the morphing of concepts along Zachman’s architecture description: intensional concepts, being detached from specific contexts and concerns, are best understood as semantic roots able to breed multi-faceted extensions, to be eventually coerced into system specifications.

Framing concepts metamorphosis along Zachman’s architecture dimensions

Managing The Alignment of Planets

Like stars, concepts can be apprehended through a mix of reason and perception:

  • Figured out from a conceptual void waiting to be filled.
  • Fortuitously discovered in the course of an argument.

The benefit in both cases would be to delay verbal definitions and so avoid preempted or biased understandings: as with Schrödinger’s cat, trying to lock up meanings with bare words often breaks their semantic integrity, shattering scraps in every direction.

In contrast, semantic orbits and alignments open the door to the dynamic management of overlapping definitions across conceptual galaxies. As it happens, that new paradigm could be a game changer for enterprise architecture and knowledge management.

From Business To Economic Intelligence

Traditional approaches to information systems and knowledge management are made obsolete by the combination of digital environments, big data, and the spreading of artificial brains with deep learning abilities.

To cope with these changes enterprises have to better integrate business intelligence with information systems, and that cannot be achieved without redefining the semantic and functional dimensions of enterprise architectures.

Semantic dimensions deal with contexts and domains of concerns: statutory, business, organization, engineering. Since these dimensions are by nature different, their alignment has to be managed; that can be achieved with conceptual galaxies and profiled ontologies.

Functional dimensions are to form the backbone of enterprise architectures, their primary purpose being the integration of data analytics, information processing, and decision-making. That approach, often labelled as economic intelligence, defines data, information, and knowledge respectively as resources, assets, and services:

  1. Resources: data is captured through continuous and heterogeneous flows from a wide range of sources.
  2. Assets: information is built by adding identity, structure, and semantics to data.
  3. Services: knowledge is information put to use through decision-making.
Economic Intelligence: Bringing data mining, information processing, and knowledge management into a single conceptual framework
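
A minimal Python sketch of that three-step refinement, with hypothetical sensor readings standing in for the data flows; names and the decision rule are illustrative:

    from dataclasses import dataclass
    from datetime import datetime

    # 1. Resources: raw data captured from heterogeneous flows, no semantics yet.
    raw = [("2024-05-01T10:00", "sensor#7", 42.0), ("2024-05-01T10:05", "sensor#7", 43.5)]

    @dataclass
    class Reading:
        """2. Assets: information built by adding identity, structure, and semantics."""
        source: str
        at: datetime
        temperature_c: float

    assets = [Reading(src, datetime.fromisoformat(ts), val) for ts, src, val in raw]

    def should_cool(readings: list) -> bool:
        """3. Services: knowledge as information put to use through decision-making."""
        return max(r.temperature_c for r in readings) > 43.0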

An ontological kernel has been developed as a Proof of Concept using Protégé/OWL 2; a beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe).

Examples

Data

Wikipedia: Any sequence of one or more symbols given meaning by specific act(s) of interpretation; requires interpretation to become information.

Merriam-Webster: Factual information such as measurements or statistics; information in digital form that can be transmitted or processed; information and noise from a sensing device or organ that must be processed to be meaningful.

Cambridge Dictionary: Information, especially facts or numbers; information in an electronic form that can be stored and used by a computer.

Collins: Information that can be stored and used by a computer program.

TOGAF: Basic unit of information having a meaning and that may have subcategories (data items) of distinct units and values.


System

Wikipedia: A regularly interacting or interdependent group of items forming a unified whole; every system is delineated by its spatial and temporal boundaries, surrounded and influenced by its environment, described by its structure and purpose and expressed in its functioning.

Merriam-Webster: A regularly interacting or interdependent group of items forming a unified whole

Business Dictionary: A set of detailed methods, procedures and routines created to carry out a specific activity, perform a duty, or solve a problem; organized, purposeful structure that consists of interrelated and interdependent elements.

Cambridge Dictionary: A set of connected things or devices that operate together

Collins Dictionary: A way of working, organizing, or doing something which follows a fixed plan or set of rules; a set of things / rules.

TOGAF: A collection of components organized to accomplish a specific function or set of functions (from ISO/IEC 42010:2007).

Operational Intelligence & Decision Making

Preamble

According to a leading tools provider, operational intelligence (OI) is the ability to “discover and analyze relationships between business events and corresponding IT events”.

Operational Decision-making (Sigmar Polke)

From a marketing perspective, the moniker suggests some kind of cross-breeding between operational research, artificial intelligence, and real-time analytics. Yet, behind vendor dressing, problems and policies remain the ones traditionally dealt with by decision-making and knowledge management, and as far as marketing is concerned, pitches will hardly affect the assessment of field professionals.

Nevertheless, functional pitches may have a deeper influence if they try to outline the aims of operational intelligence to the people directly involved, affecting the way problems are understood and dealt with. That may be the case if business and system events seem to be put on a par: overlooking the directed dependency between actual events and their system counterparts can critically hamper systems’ decision-making capabilities.

Facts, Data, & Information

The new connected world of human brains and smart things has scaled down space and time by orders of magnitude, to the point that events seem to come out as soon as they happen, wherever that may be. Facts and updates that once came in as discrete and manageable batches of information are now bursting continuously and massively as seamless streams of data that have to be processed on-the-fly into information, lest they be cannibalized by ambient noise. That new configuration blurs the distinction between operational data (pushed, shallow, transient) and underlying information (pulled, deep, persistent), making it unworkable, if not meaningless altogether.

Taking inventory decisions as an example, traditional schemes rely on periodic readings of actual inventories and sales, crossed with market foresight. Now, with on-line sales and the internet of things, real-time data can be used to build on-the-fly indicators whose biases and inaccuracies are dynamically readjusted on the basis of information built on hindsight. At any given time (t), decision-makers will be presented with actual observations (a), initial estimations of previous observations (b1, b2), and revised estimations of previous observations (c).

At any given time (t), decision-makers are presented with actual observations (a), initial estimations of previous observations (b1, b2), and revised estimations of previous observations (c).
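
By way of illustration, a minimal Python sketch of such a scheme, with hypothetical figures: initial estimations (b) are revised with hindsight (c), and the measured bias is used to adjust the current reading (a):

    history: dict = {}  # period -> {"initial": float, "revised": float or None}

    def observe(period: int, on_the_fly: float) -> None:
        """Record the instant, possibly biased, indicator (a)."""
        history[period] = {"initial": on_the_fly, "revised": None}

    def revise(period: int, consolidated: float) -> None:
        """Replace an initial estimation (b) by its hindsight-checked value (c)."""
        history[period]["revised"] = consolidated

    def current_bias() -> float:
        """Average gap between initial estimations and their revised values."""
        pairs = [(h["initial"], h["revised"])
                 for h in history.values() if h["revised"] is not None]
        return sum(i - r for i, r in pairs) / len(pairs) if pairs else 0.0

    observe(1, 100.0); revise(1, 96.0)   # period 1: revised with hindsight
    observe(2, 105.0)                    # period 2: not yet consolidated
    adjusted = 105.0 - current_bias()    # bias-adjusted reading at time t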

Set along this framework, the debate about big data can be misleading as it puts the focus on the quantity of data feeding the processes, overlooking the process itself and the distinction between data, information, and knowledge.

Information, Knowledge, & Decision-making

Generally speaking, the distinction between data and information can be set with reference to time and context, data being instant and standalone, and information associated to a shelf life and domain. With regard to decision-making, it would mean that data can be directly used within the context of the current activities and circumstances; e.g, whereas on-line sales data may (or may not) be directly (i.e despite inaccuracies and biases) used to allocate inventories across depots, it has to be “mined” into consolidated information before being used in the broader perspective of inventories planning.

Compared to the transition between data and information, which is carried out by adding time and context, the one between information and knowledge is best understood in terms of decision-making.

Information is obtained by anchoring data to time-frames and contexts, knowledge is acquired by putting information to use.

Decisions are best defined as commitments set against some unknown circumstances: somebody, somewhere, or sometime. First, it ensues that decision-making calls for specific and timed information that has to be maintained up-to-date until decisions are taken. Then, taking decisions introduces some irreversible change in the state of affairs or expectations, potentially making all relevant information obsolete. So it may be argued that decisions are what transform information into knowledge.

Operational Intelligence: Objectives & Tools

Assuming decisions mark the nexus between information and knowledge, operational intelligence could be defined as the ability to put information to use, that ability being supported by the analysis of the relationships between business events and corresponding IT events.

Far from being academic, that distinction is essentially pragmatic as it marks the boundary between OI objectives and tools capabilities:

  • The aim of OI is to make sense (and profit) from the dynamic relationship between business (aka external) events on one hand, business objectives and enterprise capabilities on the other hand.
  • The role of supporting tools is to define and manage IT (aka internal) events used to reflect external ones and analyze them.
Whereas business events (red) represent change in the state of affairs, IT events (blue) only represent changes in associated information.

Since IT events are artifacts built on purpose, there isn’t much to discover or analyze about them; not to mention that confusing business events with their IT shadows is bound to undermine the whole decision-making process. So what is at stake for OI is how to design IT events so as to timely and accurately trail the relevant business events.

Operational Intelligence & Actual Knowledge

As already noted, operational intelligence (OI) is about decision-making, which entails changing the state of objects, processes, or expectations. Compared to knowledge management (KM) which may or may not be time-related, OI is inherently bound to the actual state of affairs: on one hand it relies on specific and timed information, on the other hand it renders that information obsolete when it triggers decisions.

At the risk of oversimplification, operational intelligence can first be understood as a combination of traditional disciplines:

  • Data-mining is to filter facts and events, capture data, and analyze it into information.
  • Knowledge management charts information with regard to business objectives and enterprise capabilities.
  • Decision-making manages time-stamps and plans commitments subject to accuracy and likelihood.

But the specificity of operational intelligence is to be found in the way these functions are intertwined and cross-fed by operational concerns.

To begin with, data mining can be dynamically adjusted depending on what is needed for decision-making, and when. As a corollary, with the benefits of data so cooked in advance, some decisions can be taken directly, bypassing the mediation (and delays) of information processing. From a cognitive point of view that would be the equivalent of non symbolic (aka implicit) knowledge to be processed by neuronal networks.

Decision-making and differentiated knowledge management

Conversely, information processing could benefit from operational feedback so that knowledge management would be driven by business value, and the supporting information weighted by timing and shelf-life considerations. Whereas part of it could be done through implicit connections, it would be more comprehensively and explicitly achieved through symbolic representations.

Operational Intelligence: Signals vs Symbols

Assuming that intelligence is the ability to figure out situations and solve problems, one may conclude that it is inherently operational. Along the same reasoning, if knowledge is information put to use, it may be implicit as well as explicit.

Nonetheless, the merit of operational intelligence is to bring symbolic and non symbolic knowledge under a single functional roof: the former explicit, using the mediation of semantic constructs, and used to weight information and support managed decisions; the latter implicit, using direct associations between actual objects or phenomena, and supporting automated decisions.

Data Mining & Requirements Analysis

Preamble

Data mining explores business opportunities and competitive advantage, requirements analysis considers supporting applications. Both use models, the former’s are predictive and ephemeral, the latter’s descriptive (or prescriptive) and perennial.

Data mining: sorting business wheat from world chaff (Andreas Gursky)

As the generalization of digitized environments calls for more integration of business and software engineering processes, understanding the relationship between data mining and requirements analysis could significantly improve processes’ maturity and agility.

Data vs Requirements Analysis

Nowadays the success of a wide range of enterprises critically depends on two achievements:

  1. Mapping business models to changing environments by sorting through facts, capturing the relevant data, and processing the whole into meaningful and up to date information. That can be achieved through analysis models mapping business expectations to supporting systems.
  2. Putting that information into effective use through business processes and supporting systems. That is done through systems architecture and design models meant to prescribe how to build software artifacts.

Those challenges are converging: under the pressure of market forces and technological advances, most of the traditional fences between business channels and IT systems are crumbling, putting the focus on the functional integration between data mining and production systems. That’s where predictive models can help, by anchoring descriptive models to moving markets and by cross-feeding analysis and operations. How that can be achieved has been the bread and butter of good corporate governance for some time, but there has been less interest in the third branch, namely how data analysis (predictive models) could “inform” business requirements (descriptive models).

From Data to Information

Facts are not given but must be captured through a symbolic description of actual observations. That entails some observer set on task using a mix of conceptual and technical apparatus. Data mining and requirements analysis are practical realizations of that process:

  • Data mining relies on analytic tools to extract revealing information that could be used to chart opportunities along business models.
  • Requirements analysis relies on business processes and users’ practice to extract symbolic descriptions that will be used to build models of supporting applications.

If both walk the path from data to information, their objectives are different: the former’s is to improve business decisions by making sense of actual observations; the latter’s is to build system surrogates from the symbolic descriptions of actual business objects and activities.

Anchors & Structures: Plasticity of Business Entities

Perhaps paradoxically, business agility calls for terra firma, because nimble trades must be rooted in corporate identity and business continuity. As a consequence, the first step of requirements analysis should be to associate individual business objects or activities with stable and consistent identification mechanisms, and to group them with regard to those mechanisms:

  • External entities with natural (person) or designed identity (car).
  • Symbolic entities for roles (customer) or commitments (maintenance contract).
  • Actual activities (promotion campaign) and events (sale) or business logic (promotion).
Anchors

Conversely, as the aim of data analysis is to explore every business angle, individual observations are supposed to be moved across groups; yet, since the units identified by data analysis will have to be aligned with the ones described by requirements analysis, moves must also keep track of identities. That dilemma between continuity of identified structures on one side, plasticity of functional aspects on the other side, can be illustrated by banks which, in response to marketing requirements, had to shift from account (internal identification) to customer (external identification) based systems.

It’s easier to market insurance from customer centered systems (right) than from account centered ones (left)

That challenge can be overcome by linking the identification of symbolic entities to external anchors.

Profiles & Features: Versatility of Business Opportunities

As noted above, requirements and data analysis are set on the same road but driven by different forces: the former tries to group individuals with regard to identification mechanisms before fleshing them out with relevant features; the latter tries to group individuals with given identities according to features and opportunity profiles. Yet, what could appear as collision courses may become a meeting of minds if both courses are charted with regard to variants analysis.

From the requirements perspective the primary concern is to distinguish between structural and functional variants:

  • Structural variants are bound to identities, i.e set up-front for the respective life-cycles of individual business objects or transactions. As a consequence they cannot be changed without undermining business continuity. Moreover, being part and parcel of descriptors (e.g types and use cases), their change will affect engineering processes.
  • Functional variants may vary during the respective life-cycle of individual business objects or transactions. As a consequence they can be changed without undermining business continuity, and changes in descriptors (e.g partitions and scenarii) can be managed without affecting engineering processes.
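
The distinction can be illustrated with a minimal Python sketch; the Reservation example and its attributes are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Reservation:
        kind: str                # structural variant: set up-front, bound to identity;
                                 # changing it would undermine business continuity
        status: str = "pending"  # functional variant: free to vary over the life-cycle

    r = Reservation(kind="corporate")
    r.status = "confirmed"       # legitimate functional change
    # reassigning r.kind would amount to changing the object's identity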

From the data mining perspective the objective is to improve the benefits of information systems for decision-making processes:

  • Static: how to classify individuals so as to reduce the uncertainty of predictions.
  • Dynamic: how to classify business options so as to reduce the uncertainty of decisions.

Since those objectives are set for individuals, constraints on continuity and consistency can be dealt with independently of the description of symbolic surrogates.

Identified individuals with profiles for customers (a), their behaviors (b), and promotional gestures (c)

It ensues that perspectives can be adjusted by factoring out the constraints of continuity and consistency for business objects (e.g cars), agents (e.g customer) and processes (e.g repair). Profiles for agents (a), behaviors (b), and business options (c) could then be freely explored and tailored with regard to changes in business environment and objectives.

Applying Data Analysis to Requirements

Not surprisingly, data analysis techniques can be used to adjust perspectives. For that purpose, a sample of individuals (business objects and operations) representing the population targeted by requirements would have to be submitted to basic mining routines. Borrowing a catalog from F. Provost & T. Fawcett:

  1. Classification: estimates the probability for each individual (objects or operations) to belong to a set of classes; can be used to assess the closeness of the variants (respectively power-types or execution paths) identified by requirements analysis.
  2. Regression: reverse of classification; estimates how much of individuals’ feature valuations can be explained by the proposed classifications.
  3. Similarity: a shallow version of classification; can be used to assess the distance between variants and consolidate the proposed classifications.
  4. Clustering: a deep version of classification; can be used to distinguish between shallow and natural classifications.
  5. Co-occurrence: deals with behavioral variants; can be used to distinguish between functional and structural classifications.
  6. Profiling: reverse of co-occurrence; can be used to consolidate functional and structural classifications.
  7. Link prediction: can be used to define relationships.
  8. Data reduction: eliminates redundant individuals; can be used to consolidate requirements and refine test scenarii.
  9. Causal modeling: brings together business logic (events and rules) and users’ decisions; should provide the backbone of test scenarii.
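
As an illustration of routine 3 (similarity), a self-contained Python sketch assessing whether two variants proposed by requirements analysis are worth keeping apart; the feature vectors and the threshold are assumptions, not part of the catalog:

    from math import dist  # Euclidean distance (Python 3.8+)

    # Hypothetical feature vectors for sampled individuals assigned by
    # requirements analysis to two candidate variants (e.g execution paths).
    variant_a = [(0.90, 0.10), (0.80, 0.20), (0.85, 0.15)]
    variant_b = [(0.20, 0.90), (0.25, 0.80), (0.30, 0.85)]

    def centroid(points):
        return tuple(sum(coords) / len(points) for coords in zip(*points))

    def spread(points):
        center = centroid(points)
        return sum(dist(p, center) for p in points) / len(points)

    # Variants worth keeping apart should be farther from each other
    # than they are internally dispersed.
    separation = dist(centroid(variant_a), centroid(variant_b))
    cohesion = max(spread(variant_a), spread(variant_b))
    verdict = "keep distinction" if separation > 2 * cohesion else "consider merging"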

Besides the direct benefits for requirements, such procedures may help to bridge the span between data and requirements analysis and significantly improve processes’ capability and maturity level.

Business Objectives & Enterprise Architecture Capabilities

Data mining being first and foremost about competitive edge, it relies on a timely and effective coupling between enterprise capabilities and business opportunities. But the dilemma between continuity and plasticity described above for business objects and processes reappears at enterprise level: how to conciliate architecture, by nature perennial, with the agility needed to make the best of changing and competitive environments?

As an architectural big bang is arguably a last-resort option, answers to that question must be progressive and local: if changes are to be swift and pertinent they must be both circumscribed and leveraged to the relevant parts of the architecture. Taking an (amended) leaf out of the Zachman framework, its sixth column (“Why”) could be reset as a line for business and operational objectives that would cross the original five columns instead of the architecture layers. Using a pentagonal representation of enterprise architecture, that line would be set as circling the outer range.

Enterprise Architecture and the loci of change

It is worth noting that setting objectives on a line crossing the columns of capabilities, instead of a column crossing the lines of layers, means that objectives are set at enterprise level, with their cascading impact traced and managed through layers.

Conceptual Models & Business Contexts

But even that updated framework doesn’t take into account the fundamental changes in business environments. Once secure behind organizational and technical fences, enterprises must now navigate through open digitized business environments and markets. For business processes it means a seamless integration with supporting applications; for corporate governance it means keeping track of heterogeneous and changing business contexts and concerns while assessing the capability of organizations and systems to cope, adjust, and improve.

As long as environments were a hotchpotch of actual and symbolic artifacts the pros and cons of integration could be balanced. But the generalization of digital flows and transactions has upended the balance: there is no more room or time for latency and enterprises must bring all symbolic representations (business, organization, and systems) under a common conceptual roof:

Conceptual models as bridges between business processes and systems.

A canonical approach would be to introduce a conceptual indexing scheme open to extensions but with its footprint defined by business processes and systems functionalities. That would ensure a better integration of processes and supporting engineering but will do nothing for the modeling gap between enterprise architecture and external contexts. That could be achieved with ontologies.

Conceptual Loop: Ontologies & Business Intelligence

As far as data mining is concerned, three kinds of operations are to be considered:

  • Data understanding gives form and semantics to raw material.
  • Business understanding charts business contexts and concerns in terms of objects and processes descriptions.
  • Modeling consolidates data and business understanding into descriptive, predictive, and operational models.
The aim of data mining is to refine raw data into meaningful information

While traditional approaches fall short when tasked with iterative modeling of unstructured data, ontologies may fare better because their explicit aim is only to describe what could exist in a domain of discourse:

  1. They are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
  2. They are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
  3. They are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.

With regard to models, only the second point puts ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes. It ensues that ontologies can be understood as conceptual (aka canonical) models, used as such for business analysis and extended with purposes for systems analysis and design.

In addition to the integration with enterprise architectures, ontologies’ benefits for business intelligence would be twofold.

On one side, and whatever their use, ontologies could be aligned with the nature of contexts and their impact on business and enterprise governance e.g:

  • Institutional: mandatory semantics sanctioned by regulatory authority, steady, changes subject to established procedures.
  • Professional: agreed upon semantics between parties, steady, changes subject to established procedures.
  • Corporate: enterprise defined semantics, changes subject to internal decision-making.
  • Social: pragmatic semantics, no authority, volatile, continuous and informal changes.
  • Personal: customary semantics defined by named individuals.
Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

On the other side ontologies could be defined according to the nature of targeted items, namely terms, documents, symbolic representations, or actual objects and phenomena. That would outline four basic concerns that may or may not be combined:

  • Thesaurus: ontologies covering terms and concepts.
  • Document Management: ontologies covering documents with regard to topics.
  • Organization and Business: ontologies pertaining to enterprise organization, objects and activities.
  • Engineering: ontologies pertaining to the symbolic representation of products and services.
Ontologies: Purposes & Targets

On a broader perspective, ontologies could be influential in spanning the gap between explicit and implicit knowledge. Within a few years, practically unlimited access to raw data and the exponential growth in computing power have opened the door to massive sources of unexplored knowledge which is paradoxically both directly relevant yet devoid of immediate meaning:

  • Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
  • Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.

Assuming that deep learning can transmute raw base metals into knowledge gold, ontologies would be decisive in framing the understanding, assessment, and improvement of the processes.

Operational Loop: Business Intelligence & Decision-making

Once carried out separately and periodically, decision-making is now to be carried out iteratively at operational, tactical, and strategic levels; while each level is to be set along its own time-frames, all are to rely on data-mining, with cycles following the same pattern:

  1. Observation: understanding of changes in business opportunities.
  2. Orientation: assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations.
  3. Decision: weighting of options with regard to enterprise capabilities and broader objectives.
  4. Action: carrying out of decisions within the relevant time-frame.
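
A minimal Python sketch of one such cycle, with hypothetical signals and thresholds; tactical and strategic loops would follow the same pattern on longer time-frames:

    def observation() -> dict:
        """1. Understanding of changes in business opportunities."""
        return {"demand_shift": 0.07}

    def orientation(obs: dict) -> dict:
        """2. Assessment of reliability and shelf-life of pertaining information."""
        return {"signal": obs["demand_shift"], "confidence": 0.8, "ttl_s": 3600}

    def decision(oriented: dict) -> str:
        """3. Weighting of options against capabilities and broader objectives."""
        return "rebalance_stock" if oriented["signal"] * oriented["confidence"] > 0.05 else "hold"

    def action(choice: str) -> None:
        """4. Carrying out of the decision within the relevant time-frame."""
        print("executing:", choice)

    action(decision(orientation(observation())))  # one operational cycle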

Given the new business playground, decision-making processes have to weave together material and digitized flows, actual contexts (aka territories) and symbolic descriptions (maps), and overlapping time-frames (operational, tactical, strategic). That operational loop could then be coupled with the broader one of business intelligence:

Integration of operational analytics, business intelligence, and decision-making.

Enterprise Governance & Knowledge

Knowledgeable Processes

While turf wars may play a part, the debate about Enterprise and Systems governance is rooted in a more serious argument, namely, how the divide between enterprise and systems architectures may affect decision-making.

Informed Decision-making (Eleanor Antin)

The answer to that question can be boiled down to four guidelines respectively for capabilities, functionalities, visibility, and uncertainty.

Architecture Capabilities

From an architecture perspective, enterprises are made of human agents, devices, and symbolic (aka information) systems. From a business perspective, processes combine three kinds of tasks:

  • Authority: deciding how to perform processes and make commitments in the name of the enterprise. That can only be done by human agents, individually or collectively.
  • Execution: processing physical or symbolic flows between the enterprise and its context. Any of those can be done by human agents, individually or collectively, or devices and software systems subject to compatibility qualifications.
  • Control: recording and checking the actual effects of processes execution. Any of those can be done by human agents, individually or collectively, some by software systems subject to qualifications, and none by devices.

Hence, and whatever the solutions, the divide between enterprise and systems will have to be aligned on those constraints:

  • Platforms and technology affect operational concerns, i.e physical accesses to systems and the where and when of processes execution.
  • Enterprise organization determines the way business is conducted: who is authorized to what (business objects), and how (business logic).
  • System functionalities set the part played by systems in support of business processes.
Enterprise Architecture Capabilities

That gives the first guideline of systems governance:

Guideline #1 (capabilities): Objectives and roles must be set at enterprise level, technical constraints about deployment and access must be defined at platform level, and the functional architecture must be designed so as to get the maximum of the former subject to the latter’s constraints.

Informed Decisions: The Will to Know

At its core, enterprise governance is about decision-making, and on that basis the purpose of systems is to feed processes with the relevant information so that agents can put it to use as knowledge.

Those flows can be neatly described by crossing the origin of data (enterprise, systems, platforms) with the processes using the information (business, software engineering, services management):

  • Information processing begins with data, which is no more than registered facts: texts, numbers, sounds, visuals, etc. Those facts are collected by systems through the execution of business, engineering, and servicing processes; they reflect the state of business contexts, enterprise, and platforms.
  • Data becomes information when comprehensively and consistently anchored to identified constituents (objects, activities, events,…) of contexts, organization, and resources.
  • Information becomes knowledge when put to use by agents with regard to their purpose: business, engineering, services.
Information processing: from data to knowledge and back

Along that perspective, capabilities can be further refined with regard to decision-making.

  • Starting with business logic one should factor out decision points and associated information. That will determine the structure of symbolic representations and functional units.
  • Then, one may derive decision-making roles, together with implicit authorizations and access constraints. That will determine the structure of I/O flows and the logic of interactions.
  • Finally, the functional architecture will have to take into account synchronization and deployment constraints on events notification to and from processes.
Who should know What, Where, and When

That can be condensed into the second guideline of system governance:

Guideline #2 (functionalities): With regard to enterprise governance, the role of systems is to collect data and process it into information organized along enterprise concerns and objectives, enabling decision makers to select and pull relevant information and translate it into knowledge.

Qualified Information: The Veils of Ignorance

Ideally, decision-making should be neatly organized with regard to contexts and concerns:

  • Contexts are set by architecture layers: enterprise organization, system functionalities, platforms technology.
  • Concerns are best identified through processes: business, engineering, or supporting services.
Qualified Information Flows across Architectures and Processes

Actually, decisions scopes overlap and their outcomes are interwoven.

While distinctions with regard to contexts are supposedly built-in at the source (enterprise, systems, platforms), that’s not the case for concerns whose distinction usually calls for reasoned choices supported by different layers of governance:

  • Assets: shared decisions whose outcome bears upon several business domains and cycles. Those decisions may affect all architecture layers: enterprise (organization), systems (services), or platforms (purchased software packages).
  • Users’ Value: streamlined decisions governed by well identified business units providing for straight dependencies from enterprise (business requirements), to systems (functional requirements) and platforms (users’ entry points).
  • Non functional: shared decisions about scale and performances affecting users’ experience (organization), engineering (technical requirements), or resources (operational requirements).
Qualified Information and Decision Making

As epitomized by non functional requirements, those layers of governance don’t necessarily coincide with the distinction between business, engineering, and servicing concerns. Yet, one should expect the nature of decisions to be set prior to the actual decision-making, and decision makers to be presented with only the relevant information; for instance:

  • Functional requirements should be decided given business requirements and services architecture.
  • Scalability (operational requirements) should be decided with regard to enterprise’s objectives and organization.

Hence the third guideline of system governance:

Guideline #3 (visibility): Systems must feed processes with qualified information according to contexts (business, organization, platforms) and governance level (assets, user’s value, operations) of decision makers.

Risks & Opportunities: Mining Beyond Visibility

Long secure behind organizational and technical fences, enterprises must now navigate through open digitized business environments and markets. For business processes it means a seamless integration with supporting applications; for corporate governance it means keeping track of risks and opportunities in changing business contexts while assessing the capability of organizations and systems to cope, adjust, and improve.

On one hand risks and opportunities take root beyond the horizon and are not supposed to square with established information models; but on the other hand deep learning technologies are revolutionizing data analytics. Ontologies could be used to bring architectures and environments modeling into a common paradigm.

Hence the fourth guideline of system governance:

Guideline #4 (uncertainty): assuming that business edge is built on unqualified information about risks or opportunities, explicit and implicit knowledge should be processed within shared conceptual frames; that can be achieved with ontologies.

Architectures of Knowledge

“Nothing that is worth knowing can be taught” 

Oscar Wilde

Objective

As illustrated by the Symbolic Systems Program (SSP) at Stanford University, advances in computing and communication technologies bring information and knowledge systems under a single functional roof, namely the processing of symbolic representations.

Information and Knowledge:  Acquisition, Use and Reuse (R. Doisneau)

Within that understanding one will expect Knowledge Management to shadow systems architectures and concerns: business contexts and objectives, enterprise organization and operations, systems functionalities and technologies. On the other hand, knowledge being by nature a shared resource of reusable assets, its organization should support the needs of its different users independently of the origin and nature of information. Knowledge Management should therefore bind knowledge of architectures with architecture of knowledge.

Knowledge Representation

In their pivotal article, Davis, Shrobe, and Szolovits set five principles for knowledge representation:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or what can be done with them.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one of KR’s prerequisites is to improve the communication between specific domain experts on one hand, generic knowledge managers on the other hand.
Surrogates without Ontological Commitment

The only difference is about coupling: contrary to knowledge systems, information and control ones play a role in their context, and operations on surrogates are not neutral.

Knowledge Archeology

Knowledge constructs are empty boxes that must be properly filled with facts. But facts are not given: they must be observed, which necessarily entails some observer, set on task if not with vested interests, and some apparatus, natural or made on purpose. And if they are to be recorded, even “pure” facts observed through the naked eyes of innocent children will have to be translated into some symbolic representation. Taking wind as an example, wind socks support immediate observation of facts, free of any symbolic meaning. In order to make sense of wind behaviors, vanes and anemometers are necessary, respectively for azimuth and speed; but that also requires symbolic frameworks for directions and metrics. Finally, knowledge about the risks of strong winds can be added when such risks must be considered.

Fact, Information, Knowledge

As far as enterprises are concerned, knowledge boxes are to be filled with facts about their business context and processes, organization and applications, and technical platforms. Some of them will be produced internally, others obtained from external sources, but all should be managed independently of specific purposes. Whatever their nature (business, organization or systems), information produced by the enterprises themselves is, from inception, ready to use, i.e organized around identified objects or processes, with defined structures and semantics. That’s not necessarily the case with data reflecting external contexts (markets, regulations, technology, etc) which must be mapped to enterprise concerns and objectives before being of any use. That translation of data into information may be done immediately by mapping data semantics to identified objects and processes; it may also be delayed, with rough data managed as such until being used at a later stage to build information.

From Data to Knowledge

From Data to Information

Information is meaningful, data is not. Even “facts” are not manna from heaven but have to be shaped from phenomena into data and then information, as epitomized by binary, fragmented, or “big” data.

  • Binary data are direct recording of physical phenomena, e.g sounds or images; even when indexed with key words they remain useless until associated, as non symbolic features, to identified objects or activities.
  • Contrary to binary data, fragmented data comes in symbolic guise, but as floating nuggets with sub-level granularity; and like their binary cousin, those fine-grained descriptions are meaningless until attached to identified objects or activities.
  • “Big” data is usually understood in terms of scalability, as it refers to lumps too large to be processed individually. It can also be defined as a generalization of fragmented data, with identified targets regrouped into more meaningful aggregates, moving the targeted granularity up the scale to some “overwhelming” level.

Since knowledge can only be built from symbolic descriptions, data must be first translated into information made of identified and structured units with associated semantics. Faced with “rough” (aka unprocessed) data, knowledge managers can choose between two policies: information can be “mined” from data using statistical means, or the information stage simply bypassed and data directly used (aka interpreted) by “knowledgeable” agents according to their context and concerns.

Non symbolic data can be interpreted by “knowledgeable” agents according to their concerns.

As a matter of fact, both policies rely on knowledgeable agents, the question being who are the “miners” and what they should know. Theoretically, miners could be fully automated tools able to extract patterns of relevant information from rough data without any prior information; practically, such tools will have to be fed with some prior “intelligence” regarding what should be looked for, e.g samples for neuronal networks, or variables for statistical regression. Hence the need of some kind of formats, blueprints or templates that will help to frame rough data into information.

Information Properties

Knowledge must be built from accurate and up-to-date information regarding external and internal state of affairs, and for that purpose information items must be managed according to their source, nature, life-cycle, and relevancy:

  • Source: Government and administrations, NGO, corporate media, social media, enterprises, systems, etc.
  • Nature: events, decisions, data, opinions, assessments, etc.
  • Type of anchor: individual, institution, time, space, etc.
  • Life-cycle: instant, time-related, final.
  • Relevancy: traceability with regards to business objectives, business operations, organization and systems management.
Information must be timely, understandable, and relevant
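
Those management criteria can be sketched as a simple Python structure; the fields and the shelf-life check are illustrative, not a prescribed schema:

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class InformationItem:
        content: str
        source: str                      # administration, media, enterprise, system, ...
        nature: str                      # event, decision, data, opinion, assessment, ...
        anchor: str                      # individual, institution, time, space, ...
        obtained_at: datetime
        shelf_life: Optional[timedelta]  # None for 'final' (non-expiring) items
        relevancy: list                  # traced objectives or operations

        def is_current(self, now: datetime) -> bool:
            """Life-cycle check: instant and time-related items expire, final ones don't."""
            return self.shelf_life is None or now <= self.obtained_at + self.shelf_life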

On that basis, knowledge management will have to map knowledge to its information footprint in terms of reliability (source, accuracy, consistency, obsolescence, etc) and risks.

From Information to Knowledge

Information is meaningful, knowledge is also useful. As with information models, knowledge representations must first be anchored to persistency and execution units in order to support the consistency and continuity of surrogates’ identities (principle #1). Those anchors are to be assigned to domains managed by single organizational units in charge of ontological commitments, and enriched with structures, features, and associations (principle #2). Depending on their scope, structure or feature, semantics are to be managed respectively by persistent or application domains. Likewise, ontologies may target objects or aspects, the former being associated with structural sub-types, the latter with functional ones.

Differences between information models and knowledge representations appear with rules and constraints. While the objective of information and control systems is to manage business objects and activities, the purpose of knowledge systems is to manage symbolic contents independently of their actual counterparts (principle #3). Standard rules used in system modelling describe allowed operations on objects, activities and associated information; they can be expressed forward or backward:

  • Forward (aka push) rules are conditions on when and how operations are to be performed.
  • Backward (aka pull) rules are constraints on the consistency of symbolic representations or on the execution of operations.
Standard Rules
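
A minimal Python sketch of the two kinds of rules, using a hypothetical car-rental example:

    inventory = {"car#1": "available"}

    # Forward (push) rule: a condition on when and how an operation is performed.
    def on_reservation(car_id: str) -> None:
        if inventory.get(car_id) == "available":  # when the condition holds...
            inventory[car_id] = "reserved"        # ...the operation is carried out

    # Backward (pull) rule: a constraint on the consistency of the symbolic
    # representation, checked before the operation may proceed.
    def check_handover(car_id: str) -> None:
        assert inventory.get(car_id) == "reserved", "no consistent reservation"

    on_reservation("car#1")
    check_handover("car#1")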

Assuming a continuity between information and knowledge representations, the inflection point would be marked by the introduction of modalities used to qualify truth values, e.g according to temporal and fuzzy logic:

  • Temporal extensions will put time stamps on truth values of information.
  • Fuzzy logic puts confidence levels on truth values of information.
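
A minimal Python sketch of a truth value qualified by both modalities; the structure is illustrative:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class QualifiedStatement:
        claim: str
        holds: bool
        as_of: datetime    # temporal modality: time stamp on the truth value
        confidence: float  # fuzzy modality: confidence level in [0, 1]

    s = QualifiedStatement("car#1 is available", True, datetime(2024, 5, 1), 0.9)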

That is where knowledge systems depart from information and control ones as they introduce a new theory of intelligent reasoning, one based upon the fluidity and volatility of knowledge.

Meanings are in the Hands of Beholders

Seen in a corporate context, knowledge can be understood as information framed by contexts and driven by purposes: how to run a business, how to develop applications, how to manage systems. Hence the dual perspective: on one hand information is governed by enterprise concerns, systems functionalities, and platforms technology; on the other hand knowledge is driven by business processes, systems engineering, and services management.

Knowledge of Architectures, Architecture of Knowledge.

That provides a clear and comprehensive taxonomy of artifacts, to be used to build knowledge from lower layers of information and data:

  • Business analysts have to know about business domains and activities, organization and applications, and quality of service.
  • System engineers have to know about projects, systems functionalities and platform implementations.
  • System managers have to know about locations and operations, services, and platform deployments.

The dual perspective also points to the dynamics of knowledge, with information being pushed by its sources, and knowledge being pulled by its users.

A Time for Every Purpose

As understood by Cybernetics, enterprises are viable systems whose success depends on their capacity to countermand entropy, i.e. the progressive downgrading of the information used to govern interactions, both within the organization itself and with its environment. Compared to architecture knowledge, which is organized according to information contents, knowledge architecture is organized according to functional concerns and information lifespan, and its objective is to keep internal and external information in sync:

  • Planning of business objectives and requirements (internal) relative to market evolutions and opportunities (external).
  • Assessment of organizational units and procedures (internal) in line with regulatory and contractual environments (external).
  • Monitoring of operations and projects (internal) together with sales and supply chains (external).
Knowa_DecisLevels2
Knowledge Architecture and Shearing Layers: strategy at leisure, time for plans, real-time operations.

That puts meanings (that would be knowledge) in the hands of decision-makers, respectively for corporate strategy, organization, and operations. Moreover, enterprises being living entities, lifespan and functional sustainability are meant to coalesce into consistent and homogeneous layers (sketched after the list):

  • Enterprise (aka business, aka strategic) time-scales are defined by environments, objectives, and investment decisions.
  • Organization (aka functional) time-scales are set by the availability, versatility, and adaptability of resources.
  • Operational time-scales are determined by process features and constraints.
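
By way of illustration, the sketch below (Python; the cadences are made-up assumptions) assigns each layer its own refresh policy, echoing the caption above, strategy at leisure, time for plans, real-time operations:

    from datetime import timedelta
    from enum import Enum

    class Layer(Enum):
        ENTERPRISE = "strategic"      # set by environments and investments
        ORGANIZATION = "functional"   # set by availability of resources
        OPERATIONS = "operational"    # set by process features and constraints

    # Hypothetical refresh cadences per shearing layer.
    REFRESH_POLICY = {
        Layer.ENTERPRISE: timedelta(days=365),
        Layer.ORGANIZATION: timedelta(days=30),
        Layer.OPERATIONS: timedelta(seconds=1),
    }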

Such a congruence of time-scales, architectures, and purposes into Shearing Layers is arguably a key success factor of knowledge management.

Search and Stretch

As already noted, knowledge is driven by purposes, and purposes, not being confined to domains or preserves, are bound to stretch knowledge across business contexts and organizational boundaries. That can be achieved through search, logic, and classification.

  • Searches collect the information relevant to users' concerns (1). That may satisfy all the knowledge needs, or provide a backbone for further extension.
  • Searches can be combined with ontologies (aka classifications) that put the same information under new lights (1b).
  • Truth-preserving operations using mathematics or formal languages can be applied to produce derived information (2).
  • Finally, new information with reduced confidence levels can be produced through statistical processing (3,4).

For instance, observed traffic at toll roads (1) is used for accounting purposes (2), to forecast traffic evolution (3), to analyze seasonal trends (1b), and to simulate seasonal and variable tolls (4).

Knowa_Stretch
Observed facts (1), deductions (2), projections (3), transposition (1b) and hypothesis (4).
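
In knowledge management terms, the toll-road example can be sketched as follows (Python; the discount factors are illustrative assumptions): deductions (2) preserve confidence levels, whereas projections (3) and hypotheses (4) discount them:

    def deduce(confidence: float) -> float:
        # (2) Truth-preserving operations leave confidence untouched.
        return confidence

    def project(confidence: float, discount: float = 0.8) -> float:
        # (3, 4) Statistical processing yields derived information
        # with a reduced confidence level (the discount is made up).
        return confidence * discount

    observed = 1.0                        # (1) observed toll traffic
    accounting = deduce(observed)         # (2) deduction: still 1.0
    forecast = project(observed)          # (3) projection: 0.8
    simulation = project(forecast, 0.75)  # (4) hypothesis: 0.6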

Those operations entail clear consequences for knowledge management. As long as computational distances don't affect confidence levels, truth-preserving operations are neutral with regard to KM. Classifications are symbolic tools designed on purpose; as a consequence, all knowledge associated with a classification should remain under the responsibility of its designer. Challenges arise when confidence levels are affected, either directly or through obsolescence. And since decision-making is essentially about risk management, dealing with partial or unreliable information cannot be avoided. Hence the importance of managing knowledge along shearing layers, each with its own information life-cycle, confidence requirements, and decision-making rules.
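
As an illustration of obsolescence (a sketch; the half-lives are arbitrary assumptions), confidence could be made to decay at a rate specific to each shearing layer:

    import math
    from datetime import timedelta

    # Hypothetical half-lives per shearing layer.
    HALF_LIFE = {
        "enterprise": timedelta(days=1825),  # strategic knowledge ages slowly
        "organization": timedelta(days=180),
        "operations": timedelta(hours=1),    # operational data ages fast
    }

    def aged_confidence(confidence: float, age: timedelta, layer: str) -> float:
        """Exponential decay of confidence with information age."""
        return confidence * math.exp(-math.log(2) * (age / HALF_LIFE[layer]))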

From Knowledge Architecture to Architecture Capability

Knowledge architecture is the corporate central nervous system, and as such it plays a primary role in the support of operational and managerial processes. That point is partially addressed by frameworks like Zachman's, whose matrix organizes Information System Architecture (ISA) along capabilities and design levels. Yet, as illustrated by the design levels, the focus remains on information technology, without explicitly addressing the distinction between enterprise, systems, and platforms.

Capabilities can be defined across architecture layers with regard to business, engineering, and operational processes

That distinction is pivotal because it governs the distinction between the corresponding processes, namely business processes, systems engineering, and services management. And once the distinction is properly established, knowledge architecture can be aligned with processes assessment.

Yet that will not be enough now that digital environments are invading enterprise systems, blurring the distinction between managed information assets and the continuous flows of big data.

DataMining_Capabs
The outer range anchors enterprise architecture and objectives to business environments

A way has to be found to bridge the gap between big data and enterprise information models.

Knowledge Representation & Profiled Ontologies

Faced with digital business environments, enterprises must sort relevant and accurate information out of continuous and massive inflows of data. As modeling methods cannot cope with the open range of contexts, concerns, semantics, and formats, looser schemes are needed; that's precisely what ontologies are meant to do:

  • Thesaurus: ontologies covering terms and concepts.
  • Documents: ontologies covering documents with regard to topics.
  • Business: ontologies of relevant enterprise organization and business objects and activities.
  • Engineering: symbolic representation of organization and business objects and activities.

Ontologies: Purposes & Targets

Profiled ontologies can then be designed by combining that taxonomy of concerns with contexts (a sketch follows the list), e.g.:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accords.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).
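
A sketch of such profiles (Python; the names, pairings, and closing example are illustrative assumptions):

    from dataclasses import dataclass
    from enum import Enum

    class Target(Enum):
        THESAURUS = "terms and concepts"
        DOCUMENTS = "documents by topics"
        BUSINESS = "enterprise organization, objects, and activities"
        ENGINEERING = "symbolic representations thereof"

    class Context(Enum):
        INSTITUTIONAL = "changes subject to established procedures"
        PROFESSIONAL = "changes subject to accords between parties"
        CORPORATE = "changes subject to internal decision-making"
        SOCIAL = "continuous and informal changes"
        PERSONAL = "customary, defined by named individuals"

    @dataclass
    class ProfiledOntology:
        """An ontology profiled by target concern and governing context."""
        name: str
        target: Target
        context: Context

    # E.g. a regulatory ontology, set externally:
    gdpr = ProfiledOntology("GDPR", Target.BUSINESS, Context.INSTITUTIONAL)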

Last but not least, external (regulatory, businesses, …) and internal (i.e. enterprise architecture) ontologies could be integrated, for instance with the Zachman framework:

Ontologies, capabilities (Who,What,How, Where, When), and architectures (enterprise, systems, platforms).

Using profiled ontologies to manage enterprise architecture and corporate knowledge will help to align knowledge management with EA governance by setting apart ontologies defined externally (e.g. regulations) from the ones set through decision-making, whether strategic (e.g. platforms) or tactical (e.g. partnerships).

An ontological kernel has been developed as a Proof of Concept using Protégé/OWL 2; a beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe).

From Data Analysis to Deep Learning

Set between the all-inclusive onslaught of data on one side and pervasive smart bots on the other, information systems could lose their identity and purpose. And there is a good reason for that, namely the confusion between data, information, and knowledge.

Knowledge is the ability to make differences

As it happens, ontologies were explicitly thought up aeons ago to deal with that issue.
