Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.
Deep Learning & the Depths of Intelligence
Deep learning may not have been discovered in 2016, but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something comparable to the discovery of a spherical Earth.
As should be expected of machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge, to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.
Like digging a tunnel from both ends, it took some time to bring together the top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that the effort has come to fruition, the alignment of perspectives casts new light on the cognitive and social dimensions of intelligence.
Intelligence as a Cognitive Capability
Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:
- Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
- Implicit: intuitive processing of factual (non-symbolic) observations of objects and phenomena.
That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. On that understanding, it would be safe to assume that systems with enough computing power will sooner or later outdo the best of animal species, in particular when inputs are imperfect.
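The explicit/implicit distinction can be made concrete with a toy sketch. The task, data, and thresholds below are entirely hypothetical: the same classification problem is solved once top-down, through a hand-written symbolic rule, and once bottom-up, by a learner that mines the boundary from raw labelled observations without any stated rule.

```python
# Explicit knowledge: a hand-written symbolic rule (hypothetical thresholds).
def classify_explicit(weight, diameter):
    """Rule-based: 'heavy and large means melon, otherwise apple'."""
    return "melon" if weight > 1000 and diameter > 15 else "apple"

# Implicit knowledge: labelled observations with no rule attached.
examples = [((1500, 20), "melon"), ((1200, 18), "melon"),
            ((150, 7), "apple"), ((120, 6), "apple")]

def classify_implicit(weight, diameter):
    """1-nearest-neighbour: answer with the label of the closest example."""
    def dist(example):
        (w, d), _label = example
        return (w - weight) ** 2 + (d - diameter) ** 2
    return min(examples, key=dist)[1]

print(classify_explicit(1400, 19))  # melon, by rule
print(classify_implicit(1400, 19))  # melon, by proximity to observations
```

Both answers coincide here, but only the explicit version can say why; the implicit one merely generalizes from what it has seen, which is the trade-off the article describes.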
Intelligence as a Social Capability
Alongside the type of input, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built for a purpose, a typology of AI outcomes should focus on relationships between agents, human or otherwise:
- Self-contained: problem-solving situations without opponent.
- Competitive: zero-sum adversarial activities involving one or more intelligent opponents.
- Collaborative: non-zero-sum activities involving one or more intelligent agents.
That classification coincides with two basic divides regarding communication and social behaviors:
- To begin with, human behavior differs critically depending on whether the interaction is with living beings (humans or animals) or with machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
- Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.
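The competitive/collaborative divide above can be made precise with payoff matrices: a contest is zero-sum exactly when the agents’ payoffs always cancel out. The two matrices below are illustrative inventions, not taken from any actual game model.

```python
def is_zero_sum(payoffs):
    """payoffs maps each (move_a, move_b) pair to (payoff_a, payoff_b)."""
    return all(a + b == 0 for a, b in payoffs.values())

# Competitive (zero-sum): one player's gain is the other's loss, as in Go.
go_like = {("win", "lose"): (1, -1),
           ("lose", "win"): (-1, 1)}

# Collaborative (non-zero-sum): both agents can gain from cooperation,
# as in a negotiation (hypothetical payoffs).
negotiation = {("cooperate", "cooperate"): (3, 3),
               ("cooperate", "defect"): (0, 5),
               ("defect", "defect"): (1, 1)}

print(is_zero_sum(go_like))      # True
print(is_zero_sum(negotiation))  # False
```

The check is deliberately minimal; the point is only that “competitive” and “collaborative” are properties of the payoff structure, not of the agents themselves.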
A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.
Between Intuition and Reason
AlphaGo’s astonishing performance has been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non-symbolic knowledge to the benefit of a renewed rationality.
Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo’s decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.
Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.
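A drastically simplified sketch of that two-way traffic (not DeepMind’s actual code): explicit rules prune the space of candidates, while a learned prior — stubbed here as a lookup table standing in for a policy network — orders the survivors by intuition. All names and scores are hypothetical.

```python
# Symbolic side: explicit rules state which moves are legal at all.
def legal_moves(position):
    return [m for m in ("a", "b", "c", "d") if m not in position]

# Non-symbolic side: a stand-in for a policy network's learned intuition,
# scoring moves without any stated rule (scores are made up).
learned_prior = {"a": 0.10, "b": 0.60, "c": 0.25, "d": 0.05}

def choose_move(position):
    """Filter by the symbolic rules, then rank by the learned prior."""
    candidates = legal_moves(position)
    return max(candidates, key=lambda m: learned_prior[m])

print(choose_move(""))   # b: the best-scored legal move
print(choose_move("b"))  # c: 'b' is pruned by the rules, the prior picks next
```

The design point is the division of labour: the rules guarantee only valid options are ever considered, while the learned scores decide which valid option is worth exploring first — each side steering the other.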
From consumer behavior to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.
Further Readings
- AlphaGo: From Intuitive Learning to Holistic Knowledge
- AlphaGo & Non-Zero-Sum Contests
- Agile Collaboration & Social Creativity
One thought on “2017: What Did You Learn Last Year ?”
Hi, I’ve enjoyed following your blog for a while now. I think you are very much on track with your observations in this post. My opinion is that there is a newer revolution in process that will be the basis for a new way of approaching this subject, but it needs a distinction between AI as it has come to be known and discussed (Artificial Intelligence, where the machine does the “thinking”) and what I think is the real realm of AI: Augmented Intelligence.
I hope to be able to have a more intelligent discussion about that in the new year: symbiotic intelligence between humans and machines, built on the joint “sense-making” required for effective utilization and smart applications. (Or so I think.) The Borg is us, and they’re coming soon to a space near everyone.
A little dangerous, but a lot useful. Something to think about in this realm.