Preamble
Learning is what happens when humans try to understand who they are, their environment, and their agency.
Cognitive Capabilities
Learning starts by naming things and proceeds along two paths: on the direct one, people communicate, reflect on conversations, and use reasoning to extend their understanding; on the mediated one, they try to classify facts. Both paths converge through shared symbolic representations, which can be used to design and build social or physical artifacts.
Enterprises as Learning Machines
Learning means pushing the edges of knowledge. For enterprises it entails a central nervous system weaving together:
- Value chains: integration of business processes and business intelligence
- Knowledge infrastructure: integration of systems and human reasoning capabilities
- Learning and decision-making: integration of knowledge motifs and motives
That is best achieved when architecture layers are integrated with the learning processes of observation, reasoning, and judgment, aligning the physical, logical, and business layers with the processing of data (observation), information (reasoning), and knowledge (judgment).
The benefits of that three-fold alignment appear clearly when it is matched with key complementary dimensions:
- Operational: data for observations (environment), information for managed data (systems), knowledge for information put to use (people).
- Philosophical (e.g., Spinoza): data is for senses and beliefs, information for reason, knowledge for judgment.
- Practical (AI and Machine learning): data for deep learning, information for models, knowledge for graphs.
- Statutory: data for privacy, information for traceability, knowledge for accountability.
Knowledge Representation
From a functional perspective, knowledge can be defined in terms of representation and communication:
- With regard to representation, three basic options can be identified: thesauruses, models, and ontologies.
- With regard to communication, the linguistic distinction is about lexicons, syntax, semantics, and pragmatics.
Not by chance, these dimensions can be neatly combined:
- Thesauruses define the meaning of terms, categories, and concepts.
- Models add syntax to build representations of contexts and concerns.
- Ontologies add pragmatics to bridge the semantic gaps between contexts and concerns.
Interoperability can then be achieved along both dimensions using:
- A core of unambiguous concepts providing a set of semantic primitives.
- Established network representation schemes, such as the Resource Description Framework (RDF), ensuring a seamless integration of representations and meanings.
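As a minimal sketch of how such a core could be set down in practice, the Python snippet below uses the rdflib package to layer thesaurus, model, and ontology statements on a single graph; the EX namespace and all term names are illustrative assumptions, not part of any standard vocabulary.

```python
# Minimal sketch: semantic primitives layered on a single RDF graph.
# Assumes the rdflib package; the EX namespace and all terms are illustrative.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/core#")

g = Graph()
g.bind("ex", EX)

# Thesaurus tier: terms and their meanings (lexicon and semantics).
g.add((EX.Beverage, RDF.type, RDFS.Class))
g.add((EX.Beverage, RDFS.label, Literal("beverage")))

# Model tier: syntax added through typed relationships within a context.
g.add((EX.Wine, RDFS.subClassOf, EX.Beverage))

# Ontology tier: pragmatics bridging contexts and concerns.
g.add((EX.Wine, EX.regulatedBy, EX.LicensingScheme))

print(g.serialize(format="turtle"))
```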
Knowledge Value Chains
Value chains are meant to chart the contributions of supporting activities to the delivery of a valuable product or service to market. Reset in digital business environments driven by data mining, value chains must take into account the way data is processed into information and put to use as knowledge.
Knowl’Edges
A primary (direct and essential) consequence of the digital revolution is to redefine the edges of value chains, all the more so for knowledge-driven ones, which are defined by:
- The nature of flows: digital (physical environment) or symbolic (business environment)
- Privacy regulations: personal data vs managed information
- Access to resources: open-source (free to use), market priced (e.g., from data factories), proprietary (traded or priced internally)
The primary objective of a digital strategy should therefore be to ensure that these edges are fully and consistently identified and managed by systems; that can be achieved with a built-in distinction between data, information, and knowledge:
- Data, from direct observations or as a product of data factories, can be neatly separated from information managed by systems — a necessary condition for compliance with privacy regulations. Data is used to establish facts.
- Information, for the subset of data that fits within the categories pertaining to business objectives (e.g., customers, accounts), can be continuously and consistently managed as systems surrogates.
- Knowledge, for individual or collective skills and expertise, can be explicitly represented in conceptual models, knowledge graphs, or ontologies.
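As an illustration of that built-in distinction, the sketch below separates the three tiers into distinct types; all class and field names are assumptions made for the example, not part of any framework.

```python
# Minimal sketch of the data / information / knowledge distinction.
# All class and field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:      # data: facts observed in environments or bought from data factories
    source: str
    payload: dict
    observed_at: datetime

@dataclass
class Customer:         # information: a system surrogate for a managed business category
    customer_id: str
    name: str
    accounts: list = field(default_factory=list)

@dataclass
class ConceptLink:      # knowledge: an explicit statement in a conceptual model or graph
    subject: str
    predicate: str
    object: str

# Only observations that fit managed categories are promoted to information;
# knowledge statements relate the categories themselves.
obs = Observation(source="web", payload={"name": "J. Doe"}, observed_at=datetime.now())
cust = Customer(customer_id="C-001", name=obs.payload["name"])
link = ConceptLink("Customer", "holds", "Account")
```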
Using ontological prisms, that value chain can support seven basic use cases:
- Requirements (actual facts)
- Data analytics (virtual facts)
- Business analysis (facts/concepts)
- Business intelligence (concepts)
- Strategic planning (concepts/categories)
- Systems engineering (categories)
- Systems modeling (facts/categories)

Knowledge Flows
To enhance enterprises’ learning capabilities, value chains should sustain both top-down and bottom-up momentum:
- Top-down (counterclockwise): planned and knowledge-based, fed by anticipations, organization, and business objectives.
- Bottom-up (clockwise): emerging and data-driven, fed by observations and experience.
That double alignment of value chains with intelligence (through data, information, and knowledge) and enterprise architecture (through organization, systems, and platform layers) brings clear and direct benefits for the governance of enterprises immersed in digital environments, as it ensures the convergence of business and systems perspectives:

- Business/symbolic level: anchoring models of business environments and objectives to enterprise architecture
- Logical/functional level: aligning information systems and business categories
- Physical/digital level: merging observations from environments (data mining) with operational data (process mining)
With intelligent value chains anchored to data and operations, information and systems, and knowledge and organization, the transformation of enterprises into learning machines will depend on collective knowledge and decision-making.
Knowledge Infrastructure
Enterprise knowledge management (KM) faces an intrinsic challenge: while rooted by nature in the specifics of business domains, it can only deliver sustainable benefits when applied across the whole of an enterprise’s undertakings. Hence the importance of ontologies in federating enterprises’ knowledge engineering and management.
The Fabric of Knowledge
Nature of contents
Taking advantage of the congruence between representation (thesauruses, models, ontologies) and linguistic tiers (lexicons, syntax, semantics, pragmatics), reasoning can be tailored to the nature of contents:
- Machine learning and statistics deal with data from physical or digital environments. They can be used to build information models (syntax, semantics) and/or knowledge graphs (semantics, pragmatics).
- Truth-preserving operations apply logic to models independently of domain semantics.
- Domain-specific operations take into account contexts’ semantics and pragmatics.
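A sketch of how that tailoring could be wired is given below, routing contents to the kind of reasoning they can support; the handler names and content tags are illustrative assumptions rather than an established API.

```python
# Minimal sketch: routing contents to the kind of reasoning they can support.
# Handler names and content tags are illustrative assumptions.

def statistical_inference(content):    # data: statistics and machine learning
    return f"pattern mined from {content!r}"

def logical_inference(content):        # information: truth-preserving operations on models
    return f"consequence derived from {content!r}"

def domain_inference(content):         # knowledge: domain-specific semantics and pragmatics
    return f"interpretation of {content!r} in context"

DISPATCH = {
    "data": statistical_inference,
    "information": logical_inference,
    "knowledge": domain_inference,
}

def process(content, kind):
    return DISPATCH[kind](content)

print(process("click stream", "data"))
print(process("customer model", "information"))
```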
Epistemic levels
Knowledge is by nature a work in progress, with edges continuously moved by learning. From a formal perspective, the issue can be framed by the closed- and open-world assumptions: the closed-world assumption (CWA) states that whatever is true is also known to be true; in other words, what is not known is false. Conversely, the open-world assumption (OWA) states that nothing is false until proven so: what is not known remains unknown.
Taking a simple Parent/Children example, information models can deal with the fact that persons have parents, and represent the corresponding individuals and relationships, e.g.:
- Entity Person_ with instances Mike and Jerry,
- Role Parent() with standard connector (connectRef#_)
- Connectors’ cardinalities for integrity constraints
But unidentified instances and associated relationships cannot be explicitly represented by models: if Mike and Jerry are siblings, they are supposed to share a parent, but that parent (JDoe) is not necessarily identified as an instance in the database. Hence the benefit of a built-in ontological distinction between identified and induced occurrences of Person_.
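The contrast between the two assumptions for that example can be sketched as follows; the helper functions and the use of None for "unknown" are illustrative assumptions.

```python
# Minimal sketch of the closed- vs open-world readings of the Parent/Children example.
# Individual names follow the text; the query helpers are illustrative assumptions.
known_persons = {"Mike", "Jerry"}
known_parent_of = set()          # no (parent, child) pair recorded in the database

def has_parent_cwa(person):
    """Closed world: what is not recorded is false."""
    return any(child == person for (parent, child) in known_parent_of)

def has_parent_owa(person):
    """Open world: what is not recorded is simply unknown (None)."""
    recorded = any(child == person for (parent, child) in known_parent_of)
    return True if recorded else None

print(has_parent_cwa("Mike"))    # False: the model cannot see the unidentified JDoe
print(has_parent_owa("Mike"))    # None: a parent may exist as an induced occurrence
```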
Ontologies go further and use epistemic levels to take into account the open-ended modalities of truth and represent temporal or alternative realms together with built-in modal reasoning capabilities.
Reasoning
Reasoning primitives can be obtained by pairing consistently defined operations with matching inputs; such primitives can then be used to build reasoning processes across domains, organization, and systems.
A principled learning architecture could thus be organized at the enterprise level in line with fundamental modi operandi:
- Observation or experience, through data and process mining
- Reasoning, through truth-preserving or domain-specific operations
- Judgment, carried out individually or collectively
Last but not least, learning processes set across organizational units and business domains must ensure full transparency and traceability. For observation and reasoning, that can be achieved at the system level; for judgment, it also entails accountability at the organization level. In any case, a more specific account of reasoning operations is necessary.
Threads & Motifs
Learning at the enterprise level weaves truth-preserving operations with statistical inference, heuristics, and judgments. It follows that traceability and accountability cannot be achieved without sorting out their respective contributions to value chains; that can be done by mapping reasoning patterns to the nature of contents.
Partitions & Logic Quantifiers
Intelligence can be summarily defined as the ability to make distinctions, hence the importance of partitions and powertypes, which are used to define subsets and subtypes. Concomitantly, powertypes can also be defined through assertions using predicates and quantifiers:
- Universal quantifiers (kmUQuantifier_) mean that what is said applies to all the individuals considered
- Existential quantifiers (kmEQuantifier_) mean that what is said applies to a subset of the individuals considered
Taking licensing regulations for example, a distinction is made between beverages with regard to their processing, represented by a powertype (Beverages2Process) realized by three instances (Fermented, Distilled, Industrial).
Quantifiers can be used in two ways:
- Between categories (Wine, Beer) and subsets (beverage2Fermented)
- Between subsets (beverage2Fermented, beverage2Distilled, beverage2Industrial) and expressions (isIntoxicating())
Powertypes and quantifiers are used to define reasoning patterns.
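A sketch of the Beverages2Process example and the two uses of quantifiers is given below; the subset contents and quantifier helpers are illustrative assumptions aligned with the kmUQuantifier_ and kmEQuantifier_ notions above.

```python
# Minimal sketch of the Beverages2Process powertype and the two uses of quantifiers.
# Subset contents and helper names are illustrative assumptions.

# Powertype: a partition of beverages according to their processing.
beverage2Fermented = {"wine", "beer"}
beverage2Distilled = {"whisky"}
beverage2Industrial = {"soda"}
Beverages2Process = {
    "Fermented": beverage2Fermented,
    "Distilled": beverage2Distilled,
    "Industrial": beverage2Industrial,
}

intoxicating = {"wine", "beer", "whisky"}

def km_u_quantifier(subset, predicate):    # universal: holds for every member
    return all(predicate(x) for x in subset)

def km_e_quantifier(subset, predicate):    # existential: holds for at least one member
    return any(predicate(x) for x in subset)

# Between categories and subsets: every wine belongs to the fermented subset.
print({"wine"} <= beverage2Fermented)                                     # True
# Between subsets and expressions: all fermented beverages are intoxicating.
print(km_u_quantifier(beverage2Fermented, lambda b: b in intoxicating))   # True
# Existential check on the industrial subset.
print(km_e_quantifier(beverage2Industrial, lambda b: b in intoxicating))  # False
```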
Reasoning Patterns
Logic distinguishes between three basic forms of reasoning: deductive, inductive, and abductive. These forms, complemented here with intuition, can be expressed in terms of data, information, and knowledge:
- Deduction: matching observations (data) with models to produce new information, i.e. data with structure and semantics
- Induction: making hypotheses (knowledge) about the scope of models in order to make deductions
- Abduction: assessing hypotheses (knowledge) supporting inductions
- Intuition: knowledge attained directly from observation and/or experience without supporting models (information)
Reasoning patterns can thus be built from predicates and clauses derived from models (powertypes, subtypes, etc.).
Deductive Reasoning
Deductive reasoning is truth-preserving, based on predicates and clauses derived from subsets, powertypes, and universal quantifiers, e.g.:
- isFermented(wine) (derived from universal quantifier)
- isFermented(X) kmUQuantifier_ > isIntoxicating(X)
- wine is intoxicating
Since deductive reasoning is truth-preserving, it doesn’t entail judgment, and traceability only has to be managed for changes in time-related data. As a corollary, it can be fully carried out at the system level.
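That deduction can be sketched as a simple forward-chaining pass over facts and rules; the rule encoding is an illustrative assumption.

```python
# Minimal sketch of the deduction: isFermented(wine) and the universally quantified
# rule isFermented(X) -> isIntoxicating(X) yield isIntoxicating(wine).
facts = {("isFermented", "wine")}
rules = [("isFermented", "isIntoxicating")]    # premise predicate -> conclusion predicate

def deduce(facts, rules):
    """Apply rules to facts until no new fact is produced (truth-preserving)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("isIntoxicating", "wine") in deduce(facts, rules))    # True
```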
Inductive Reasoning
Inductive reasoning relies on inferences to improve the explanatory power of models. Like deductive reasoning, it makes use of predicates and clauses derived from subsets, powertypes, and quantifiers. For instance, given deductions about wine and beer being intoxicating:
- subsetOf(wine, beverage) & subsetOf(beer, beverage) (from model)
- isFermented(wine) & isFermented(beer) (derived from universal quantifiers)
- fermented beverages are intoxicating
Inductive reasoning makes assumptions meant to be backed by some knowledge. Yet, when options can be set explicitly and assessed objectively (i.e., without explicit or implicit decisions), inductive reasoning can be carried out at the system level with full traceability. Otherwise (i.e., if some judgments are involved), inductive reasoning calls for accountability, and consequently for an organizational context.
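The generalisation step can be sketched as follows; the acceptance criterion (every observed case must fit) is an illustrative assumption, since real inductions would call for broader evidence or judgment.

```python
# Minimal sketch of the induction: every fermented beverage observed so far is
# intoxicating, so "fermented beverages are intoxicating" is proposed as a rule.
subset_of = {"wine": "beverage", "beer": "beverage"}     # from the model
is_fermented = {"wine", "beer"}                          # from universal quantifiers
is_intoxicating = {"wine", "beer"}                       # established by prior deductions

def induce_rule(fermented, intoxicating):
    """Propose a universal rule if every observed fermented beverage has the property."""
    observed = [b for b in fermented if subset_of.get(b) == "beverage"]
    if observed and all(b in intoxicating for b in observed):
        return "isFermented(X) -> isIntoxicating(X)"     # hypothesis, not a proven fact
    return None

print(induce_rule(is_fermented, is_intoxicating))
```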
Abductive Reasoning
Abductive reasoning can be seen as recursive inductive reasoning: whereas induction looks for explanations, abduction makes assumptions (or inferences) about explanations; e.g.:
- The licensing of Red Bull could be borne out because all licensed beverages are intoxicating, some energy drinks are intoxicating, and Red Bull contains caffeine, which is intoxicating.
- Alternatively, the argument would lose strength were the licensing of wine and beer not justified by their intoxicating effect but by their degree of alcohol (ABV > 5).
Abductive reasoning is at the core of the knowledgeable organization, as it weaves observations (data), models (information), and causal chains (knowledge) into a single knowledge fabric. That seamless integration can only be achieved using ontologies to combine models with deep learning and knowledge graph technologies.
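The Red Bull argument can be sketched as a comparison of candidate explanations; the ABV figures and the support measure are illustrative assumptions, not actual regulatory data.

```python
# Minimal sketch of abduction: ranking candidate explanations for the licensing of
# wine and beer, then testing their predictions on the Red Bull case.
# ABV figures and the support measure are illustrative assumptions.
observations = {
    "wine":     {"licensed": True, "intoxicating": True, "abv": 12.0},
    "beer":     {"licensed": True, "intoxicating": True, "abv": 5.5},
    "red_bull": {"licensed": None, "intoxicating": True, "abv": 0.0},
}

hypotheses = {
    "licensed because intoxicating": lambda b: b["intoxicating"],
    "licensed because ABV > 5":      lambda b: b["abv"] > 5,
}

def support(hypothesis):
    """Share of licensed beverages the hypothesis explains."""
    licensed = [b for b in observations.values() if b["licensed"]]
    return sum(1 for b in licensed if hypothesis(b)) / len(licensed)

for name, h in hypotheses.items():
    prediction = h(observations["red_bull"])
    print(f"{name}: support={support(h):.0%}, predicts Red Bull licensing={prediction}")
```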
Intuition
Intuition can be understood as both a limit and a transition:
- As a limit, intuition is defined by the absence of reasoning, a pure judgment worked out without the backing of any kind of model or hypothesis
- As a transition, intuition can also serve as a springboard from implicit to explicit knowledge
The duality of intuition, as both a border of and a leap to reason, is of special significance for organizations as it puts intuition at the nexus of their learning processes, where individual creativity can be turned into collective innovation. What may look like a puzzling alchemy depends on collaboration mechanisms gearing intuition, by nature individual, to reasoning, set in individual as well as collective dimensions. To that end, organizations have to focus on two aims:
- To harness intuition, collaborations must be rooted in actual observations and experience
- To translate into collective learning, collaborations must be driven by collective purposes
Both objectives can be best achieved by combining learning and decision-making processes.
Learning & Decision-making
Being set on the dynamic edge of enterprises, decision-making should be the focus of collective knowledge and learning. That implies the transparency of means and ends, and the traceability and accountability of decision-making processes.
Knowledge Motifs & Motives
Decision-making processes are best understood through John Boyd’s Observation/Orientation/Decision/Action (OODA) template:
- Observation (data): monitoring and analysis of environments and systems at the digital (e.g., data- and process mining) and enterprise (e.g., business intelligence) levels.
- Orientation (information): assessment of enterprise commitments, objectives, and anticipations based on relevant and up-to-date models and assumptions.
- Decision (knowledge): judgments made based on commitments and alternative courses of action.
- Action (data again): decisions carried out in a timely and consistent way, with their implementation monitored.
These aspects of decision-making can be associated with specific issues and corresponding specific knowledge: uncertainty (observations), causal chains (orientation), risks (decision), and efficiency (action).
Basic paths of the OODA loop can thus be defined depending on the kind of knowledge involved:
- Instant: decisions/judgments are made and carried out directly from observations without further orientation/reasoning
- Conditional: decisions/judgments are conditioned by the outcome and/or the quality of observations and the reliability of the orientation/reasoning
- Unconditional: decisions/judgments are tailored according to the outcome and/or the quality of observations and the reliability of the orientation/reasoning
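A sketch of the loop and its instant and conditional paths is given below; the thresholds, reliability scores, and function contents are illustrative assumptions.

```python
# Minimal sketch of the OODA loop with instant and conditional paths.
# Thresholds, scores, and function contents are illustrative assumptions.
def observe():
    return {"signal": "demand spike", "quality": 0.8}

def orient(observation):
    return {"cause": "seasonal pattern", "reliability": 0.7}

def decide(observation, orientation=None):
    if orientation is None:                    # instant path: judgment from observation alone
        return "act directly on observation"
    if orientation["reliability"] < 0.5:       # conditional path: reasoning deemed unreliable
        return "defer and gather more data"
    return "act on the causal assessment"

def act(decision):
    print(f"carrying out: {decision}")

obs = observe()
act(decide(obs))                    # instant path
act(decide(obs, orient(obs)))       # path conditioned on orientation reliability
```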
The mapping of decisions’ motives to knowledge motifs can significantly enhance transparency:
- Traceability, for decisions taken on trustworthy observations, reliable causal chains, and objective assessments (What, How, When)
- Accountability, for decisions that also involve judgments and/or commitments (Why and Who)
That logical and organizational transparency, combined with Artificial intelligence (AI) and Machine learning (ML) technologies, is the key to the morphing of individual learning into collective knowledge.
From Individual Learning to Collective Knowledge
When competitive edge is sharpened by innovation, enterprises’ success depends on their ability to fuse individual creativity into collective knowledge. Such organizational learning capability can be represented by a quadrant crossing the kind of agent (people or systems) with the kind of knowledge (implicit or explicit).
Transitions from implicit to explicit knowledge can be achieved at individual and collective levels:
- Between people and organizations, it’s typically done through a mix of experience and collaboration (a)
- Between systems and representations, it’s the nuts and bolts of Machine learning and Knowledge graphs technologies (b)
- Between people and systems, learning relies on the experience feedback achieved through the integration of ML into the OODA loop (c)
- Between organization and systems, learning relies on the functional distinction between judgment, to be carried out at the organizational level, and observation and reasoning, supported by systems (d)
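The quadrant and its four transitions can be summarised as a simple mapping; the mechanism labels are shorthand for the points (a) to (d) above.

```python
# Minimal sketch of the learning quadrant: transitions from implicit to explicit
# knowledge between kinds of agents; labels follow points (a) to (d) above.
transitions = {
    ("people", "organizations"):    "experience and collaboration (a)",
    ("systems", "representations"): "machine learning and knowledge graphs (b)",
    ("people", "systems"):          "ML feedback through the OODA loop (c)",
    ("organization", "systems"):    "judgment vs observation/reasoning split (d)",
}

for (source, target), mechanism in transitions.items():
    print(f"{source} -> {target}: {mechanism}")
```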
Such organizational integration will ensure:
- Knowledge-based collaboration
- Incremental and smooth learning curves
- Learning driven by people’s experience and feedback from systems
The integration of learning at the enterprise level finds its raison d’être with the accountability (organization) and traceability (systems) of decision-making and reasoning processes.
FURTHER READING
- Caminao Framework Overview
- Modeling Paradigm
- Knowledge Architecture
- EA in bOwls
- Models & Meta-models
- Ontologies & Enterprise Architecture
- Ontologies as Productive Assets
EXTERNAL LINKS
- Stanford Encyclopedia of Philosophy: Abduction
- Stanford Encyclopedia of Philosophy: Deontic Logic
- Stanford Encyclopedia of Philosophy: Combining Logics
- J. Sowa, “Signs and Reality”, Applied Ontology, 10(3-4):273-284, December 2015
- Judea Pearl, Dana Mackenzie, “The Book of Why: The New Science of Cause and Effect”, Basic Books (2018)