An often overlooked benefit of artificial intelligence has been a renewed interest in seminal philosophical and cognitive topics, with ontologies at the top of the list.
Yet that interest has often been led astray by misguided perspectives, in particular:
- Universality: one-size-fits-all approaches are pointless, if not self-defeating, considering that ontologies are meant to target specific domains of concern.
- Implementation: the focus is usually put on representation schemes (e.g. the Resource Description Framework, RDF), instead of the nature of the targeted knowledge and the associated cognitive capabilities.
Those misconceptions, often combined, may explain the limited practical inroads of ontologies. Conversely, they also point to ontologies’ wherewithal for enterprises immersed in boundless and fluctuating knowledge-driven business environments.
Ontologies as Assets
Whatever the name given to the matter (data, information, or knowledge), there isn’t much argument about its primacy for business competitiveness: insofar as enterprises are concerned, knowledge is recognized as a key asset, as valuable as financial ones if not more, and should be managed accordingly. Pushing the comparison further, data could be likened to liquidity, information to fixed-income investments, and knowledge to venture capital. To summarize: assets, whatever their nature, lose value when left asleep and bear fruit when kept awake; that’s doubly the case for data and information:
- Digitized business flows accelerate data obsolescence and make it continuous.
- Shifting and porous enterprise boundaries and market segments call for constant updates and adjustments of enterprise information models.
But assessing the business value of knowledge has always been a matter of intuition rather than accounting, even when it can be patented; and most knowledge takes shape well beyond regulatory reach. Nonetheless, knowledge is not manna from heaven but the outcome of information processing, so assessing the capabilities of those processes could help.
Admittedly, traditional modeling methods are too stringent for that purpose, and looser schemes are needed to accommodate the open range of business contexts and concerns; as already expounded, that’s precisely what ontologies are meant to do, e.g.:
- Systems modeling, with a focus on integration, e.g. the Zachman Framework.
- Classifications, with a focus on range, e.g. the Dewey Decimal System.
- Conceptual models, with a focus on understanding, e.g. legislation.
- Knowledge management, with a focus on reasoning, e.g. the semantic web.
And ontologies can do more than bring under a single roof the whole of enterprise knowledge representations: they can also be used to nurture and crossbreed symbolic assets and to develop innovative ones.
Knowledge is best understood as information put to use; accounting rules may be disputed, but there is no argument about the benefits of a canny combination of information, circumstances, and purpose. Nonetheless, assessing knowledge returns is hampered by a lack of traceability: if part of knowledge is explicit and subject to symbolic representation, another part is implicit and manifests itself only through actual behaviors. At the philosophical level, it’s the line drawn by Wittgenstein: “The limits of my language mean the limits of my world”; at the technical level, it’s AI’s two-lane approach: symbolic rule-based engines vs. non-symbolic neural networks; at the corporate level, implicit knowledge is seen as an unaccounted-for aspect of intangible assets, when not simply blended into corporate culture. With knowledge becoming a primary success factor, a more reasoned approach to its processing is clearly needed.
To begin with, symbolic knowledge can be plied by logic, which, quoting Wittgenstein again, “takes care of itself; all we have to do is to look and see how it does it.” That would be true on two conditions:
- Domains are to be well circumscribed.
- A watertight partition must be secured between the logic of representations and the semantics of domains.
That could be achieved with modular and specific ontologies built on a clear distinction between a common representation syntax and domain-specific semantics.
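The partition between a common representation syntax and domain-specific semantics can be sketched in code. In the illustrative Python below (all class and vocabulary names are hypothetical, not part of any cited framework), the triple syntax is generic and truth-preserving, while each modular ontology polices its own vocabulary:

```python
from dataclasses import dataclass

# Common representation syntax: subject-predicate-object triples,
# with no domain assumptions baked in.
@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

# Domain-specific semantics: each modular ontology declares its own
# vocabulary; the names used below are illustrative only.
class DomainOntology:
    def __init__(self, name, predicates):
        self.name = name
        self.predicates = set(predicates)
        self.facts = set()

    def assert_fact(self, triple):
        # The watertight partition: well-formedness is generic,
        # vocabulary is checked locally by each domain.
        if triple.predicate not in self.predicates:
            raise ValueError(f"'{triple.predicate}' is not in the {self.name} vocabulary")
        self.facts.add(triple)

hr = DomainOntology("HR", {"manages", "reports_to"})
hr.assert_fact(Triple("alice", "manages", "bob"))   # accepted: predicate in vocabulary
# hr.assert_fact(Triple("alice", "owns", "bob"))    # would be rejected: wrong domain
```

The point of the design is that domains stay well circumscribed: the logic layer (triples) never needs to change when a new domain ontology is added.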
As for non-symbolic knowledge, its processing has long been overshadowed by the preeminence of symbolic rule-based schemes, that is, until neural networks got the edge and deep learning overturned the playground. In a few years’ time, practically unlimited access to raw data and the exponential growth of computing power have opened the door to massive sources of unexplored knowledge, which is paradoxically both directly relevant and devoid of immediate meaning:
- Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
- Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.
Assuming that deep learning can transmute raw base metals into knowledge gold, enterprises would need to understand, assess, and improve the refining machinery. That could be done with ontological frames.
A Proof of Concept
Compared to tangible assets, knowledge may appear very elusive; yet, contrary to intangible ones, knowledge is best understood as the outcome of processes that can be properly designed, assessed, and improved. And that can be achieved with profiled ontologies.
As a Proof of Concept, an ontological kernel has been developed along two principles:
- A clear-cut distinction between truth-preserving representation and domain-specific semantics.
- Profiled ontologies designed according to the nature of contents (concepts, documents, or artifacts), layers (environment, enterprise, systems, platforms), and contexts (institutional, professional, corporate, social).
That provides for a seamless integration of information processing, from data mining to knowledge management and decision making:
- Data is first captured through aspects.
- Categories are used to process data into information on the one hand, and to design production systems on the other.
- Concepts serve as bridges to knowledgeable information.
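The profiling scheme above can be made concrete with a minimal sketch. The dimension values (contents, layers, contexts) come from the kernel description; the class layout, entry names, and the toy `categorize` step are illustrative assumptions, not the actual CaKe implementation:

```python
from dataclasses import dataclass, field

# Profiling dimensions, as listed in the kernel's design principles.
CONTENTS = {"concept", "document", "artifact"}
LAYERS = {"environment", "enterprise", "systems", "platforms"}
CONTEXTS = {"institutional", "professional", "corporate", "social"}

@dataclass
class OntologyEntry:
    name: str
    content: str   # concept, document, or artifact
    layer: str     # environment, enterprise, systems, or platforms
    context: str   # institutional, professional, corporate, or social
    aspects: dict = field(default_factory=dict)  # raw data captured through aspects

    def __post_init__(self):
        # Entries must be fully profiled along the three dimensions.
        for value, allowed in ((self.content, CONTENTS),
                               (self.layer, LAYERS),
                               (self.context, CONTEXTS)):
            if value not in allowed:
                raise ValueError(f"{value!r} is not a valid profile value")

def categorize(entries, layer):
    """Toy aspects-to-categories step: group a layer's entries by content type."""
    groups = {}
    for entry in entries:
        if entry.layer == layer:
            groups.setdefault(entry.content, []).append(entry.name)
    return groups

# Hypothetical entries for illustration.
invoice = OntologyEntry("Invoice", "document", "enterprise", "corporate",
                        aspects={"amount": "decimal", "due": "date"})
customer = OntologyEntry("Customer", "concept", "enterprise", "corporate")
print(categorize([invoice, customer], "enterprise"))
# {'document': ['Invoice'], 'concept': ['Customer']}
```

Under these assumptions, the same entries can feed both information processing (grouping by content) and systems design (filtering by layer), which is what the seamless integration claim amounts to.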
A beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe).
Further Reading
- Knowledge Architecture
- Conceptual Models & Abstraction Scales
- Models & Meta-models
- Ontologies & Models
- Ontologies & EA
- Open Concepts
- Open Concepts Will Make You Free
- Conceptual Thesaurus: Overview
- Conceptual Thesaurus: Typical Use Cases
- AlphaGo & Non-Zero-Sum Contests
- AlphaGo: From Intuitive Learning to Holistic Knowledge
- 2018: Clones vs Octopuses