Excerpt from Enterprise Architecture Fundamentals:
In some ways, artificial brains have taken longer to mature than their human counterparts, lingering in infancy for almost half a century before their abrupt and sweeping development. With hindsight, it’s easy to understand why computers have been late bloomers: they could be taught but they couldn’t learn. Hindsight also provides the twofold explanation of the growth spurt: a direct and wide-ranging exposure to environments combined with enough neurons to cope with massive amounts of rough data.
Two technologies are behind the AI breakthrough: Knowledge graphs (KG) and Deep learning (DL).
Behind the variety of names for Knowledge graphs (e.g., conceptual graphs, semantic networks) and uses for them lies an all-purpose, graph-based representation built from nodes, properties, and connectors (cf. chapter 8). Yet, the expansion of KG is less the result of technological advances than of a combination of:
- Graphs’ proven track record for the actionable representation of complex issues
- KG’s natural implementation as neural networks
- KG’s universality in representing almost any kind of knowledge
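The node/property/connector triad behind these representations can be sketched as a minimal in-memory graph. This is an illustrative sketch only: the class names, the `is_a` relation, and the toy concepts are assumptions for the example, not constructs from the book.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node carries a name and a dictionary of properties."""
    name: str
    properties: dict = field(default_factory=dict)

class KnowledgeGraph:
    """Minimal sketch: nodes plus labeled connectors (directed edges)."""
    def __init__(self):
        self.nodes = {}       # name -> Node
        self.connectors = []  # (source, label, target) triples

    def add_node(self, name, **properties):
        self.nodes[name] = Node(name, properties)

    def connect(self, source, label, target):
        self.connectors.append((source, label, target))

    def neighbors(self, name, label=None):
        # Follow outgoing connectors, optionally filtered by label
        return [t for (s, l, t) in self.connectors
                if s == name and (label is None or l == label)]

# Toy usage: a concept hierarchy expressed as nodes and connectors
kg = KnowledgeGraph()
kg.add_node("Dog", kind="concept")
kg.add_node("Mammal", kind="concept")
kg.connect("Dog", "is_a", "Mammal")
print(kg.neighbors("Dog", "is_a"))  # ['Mammal']
```

The same three primitives suffice whether the labels denote taxonomy, causality, or association, which is what gives graphs their universality as a knowledge representation.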
By contrast, the upsurge in Deep-learning applications is mainly due to technological leaps, which can be summarily described in three steps:
- Traditional Machine learning (ML): symbolic representations and rules reproduce human expertise
- Supervised Deep learning (DL): neural networks reproduce cognitive processes and are trained with profiled data samples
- Reinforcement learning (RL): extensive raw inputs and statistical inference replace profiled samples for the training of neural networks
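The third step can be illustrated with tabular Q-learning, one of the simplest Reinforcement-learning algorithms: an agent improves a value table from raw trial-and-error rewards rather than from profiled (labeled) samples. The toy environment, the reward scheme, and all hyperparameters below are assumptions chosen for the sketch.

```python
import random

# Toy environment: a 5-cell chain; reward 1.0 for reaching the rightmost cell.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for episode in range(300):
    s = 0
    while s != GOAL:
        # Epsilon-greedy choice: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)  # clamp to the chain
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best next value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy recovered from the table: +1 (step right) in every state
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

No state was ever labeled with its correct action; the rightward policy emerges purely from statistical inference over raw reward signals, which is the contrast with the supervised step above.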
Successes are compounded by the synergies engendered by interoperability (both KG and DL technologies are driven by neural networks) and complementarity (the former handling explicit knowledge, the latter implicit knowledge).
In-depth synergies between KG and DL technologies can be demonstrated by their use in natural-language interfaces (figure 16-1).
Figure 16-1. Language & Knowledge
On one side, natural languages use pragmatics to weave together syntax, lexicon, and semantics (cf. chapter 9). On the other, Knowledge graphs produce knowledge from models and profiles, and from Deep learning. In between, pragmatics materializes the overlap of language and knowledge, and the flux between implicit and explicit contents.