Brawn & Brain
The recent success of Google’s AlphaGo against Europe’s top player at the game of Go is widely recognized as a major breakthrough for Artificial Intelligence (AI), both because of the undertaking (Go is exponentially more complex than chess) and the timing (it occurred much sooner than expected). As it happened, the leap can be credited as much to brawn as to brain: the former through a massive increase in computing power, the latter through an innovative combination of established algorithms.
That breakthrough and the way it was achieved may suggest two opposite perspectives on the future of AI: either the current conceptual framework is the best option, with brawny machines becoming brainier until, sooner or later, they leap over the qualitative gap with their human makers; or it is a quantitative delusion that could drive brawnier machines and helpless humans down into that very same hole.
Could AlphaGo and its DeepMind makers point to a holistic bypass around that dilemma?
Taxonomy of Sources
Taking a leaf from Spinoza, one could begin by considering the categories of knowledge with regard to sources:
- The first category is achieved through our senses (sights, sounds, smells, touches) or beliefs (as nurtured by our common “sense”). This category is by nature prone to circumstances and prejudices.
- The second is built through reasoning, i.e., the mental processing of symbolic representations. It is meant to be universal and open to analysis, but it offers no guarantee of congruence with actual reality.
- The third is attained through philosophy which is by essence meant to bring together perceptions, intuitions, and symbolic representations.
Whereas there can’t be much controversy about the first two, the third category leaves room for a wide range of philosophical tenets, from religion to science, collective ideologies, or spiritual transcendence. With today’s knowledge spread across smart devices and driven by the wisdom of crowds, philosophy seems to look more at big data than at big brother.
Despite (or because of) its focus on the second category, AlphaGo and its architectural feat may still carry some lessons for the whole endeavor.
Taxonomy of Representations
As already noted, the effectiveness of AI’s supporting paradigms has been bolstered by the exponential increase in available data and in the processing power to deal with it. Not surprisingly, those paradigms are associated with two basic forms of representation aligned with the sources of knowledge: implicit for senses, and explicit for reasoning:
- Designs based on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as knowledge governing behaviors.
- Designs based on neural networks are characterized by implicit information processing: data is “compiled” into neural connections whose weights (weighting knowledge) are tuned iteratively on the basis of behavioral feedback.
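The implicit, feedback-driven tuning described above can be sketched with a single artificial neuron; the toy task, learning rate, and data below are illustrative assumptions, not AlphaGo’s actual design:

```python
# Minimal sketch of implicit information processing: a single neuron
# whose weights are tuned iteratively from behavioral feedback
# (prediction error) instead of explicit rules. Task and parameters
# are illustrative only.

def train_neuron(samples, labels, lr=0.1, epochs=100):
    """Tune weights and bias so outputs track the labels (LMS rule)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            out = sum(wi * xi for wi, xi in zip(w, x)) + b
            error = y - out                      # behavioral feedback
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# The AND mapping "compiled" into weights, from examples alone
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [0, 0, 0, 1]
w, b = train_neuron(xs, ys)
```

No rule for AND is ever stated; the mapping exists only implicitly, as weighted connections shaped by feedback.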
Since that duality mirrors human cognitive capabilities, brainy machines built on those designs are meant to combine rationality with effectiveness:
- Symbolic representations support the transparency of ends and the traceability of means, allowing for hierarchies of purposes, actual or social.
- Neural networks, helped by their learning kernels operating directly on data, speed up the realization of concrete purposes based on the supporting knowledge implicitly embodied as weighted connections.
The potential of such approaches has been illustrated by internet-based language processing: pragmatic associations “observed” across billions of discourses are progressively complementing and even superseding syntactic and semantic rules in web-based parsers.
On that point too AlphaGo has focused ambitions, since it only deals with non-symbolic inputs, namely a collection of Go moves (about 30 million in total) from expert players. But that limit can be turned into a benefit, as it brings homogeneity and transparency, and therefore a more effective combination of algorithms: brawny ones for actual moves and intuitive knowledge from the best players, brainy ones for putative moves, planning, and policies.
Teaching them how to work together is arguably a key factor of the breakthrough.
Taxonomy of Learning
As should be expected from intelligent machines, their impressive track record fully depends on their learning capabilities. Whereas those capabilities are typically applied separately to implicit (or non-symbolic) and explicit (or symbolic) contents, bringing them under the control of the same cognitive engine, as human brains routinely do, has long been recognized as a primary objective for AI.
Practically, that has been achieved with neural networks by combining supervised and unsupervised learning: human experts help systems sort the wheat from the chaff, and then let them improve their expertise through millions of games of self-play.
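The two phases can be caricatured on a toy contest; the game, reward sizes, and update rule below are invented for illustration and bear no relation to AlphaGo’s networks:

```python
# Illustrative sketch of supervised learning followed by self-play
# reinforcement, on a trivial game: each player picks a number 0-2,
# and the higher number wins the round. All values are assumptions.
import random

random.seed(0)

class Policy:
    """A move distribution stored as unnormalized preference weights."""
    def __init__(self):
        self.weights = [1.0, 1.0, 1.0]   # one weight per move 0, 1, 2

    def move(self):
        r = random.random() * sum(self.weights)
        for m, w in enumerate(self.weights):
            r -= w
            if r <= 0:
                return m
        return len(self.weights) - 1

policy = Policy()

# 1. Supervised phase: mimic expert demonstrations (experts play 2)
for expert_move in [2] * 50:
    policy.weights[expert_move] += 0.5

# 2. Self-play phase: reinforce whichever move won each round
for _ in range(1000):
    a, b = policy.move(), policy.move()
    if a != b:
        policy.weights[max(a, b)] += 0.1   # higher number wins

best = max(range(3), key=lambda m: policy.weights[m])
```

The supervised phase sorts the wheat from the chaff; the self-play phase then sharpens the policy without any further human input.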
Yet, the achievements of leading AI players have marked out the limits of these solutions, namely the qualitative gap between playing as well as the best human players and beating them. While the former outcome can be achieved through likelihood-based decision-making, the latter requires the development of original schemes, and that brings quantitative and qualitative obstacles:
- Contrary to actual moves, possible ones have no limit, hence the exponential increase in search trees.
- Original schemes are to be devised with regard to values and policies.
Overcoming both challenges with a single scheme may be seen as the critical achievement of DeepMind engineers.
Mastering the Breadth & Depth of Search Trees
Using neural networks for the evaluation of actual states as well as for the sampling of policies comes with exponential increases in the breadth and depth of search trees. Whereas Monte Carlo Tree Search (MCTS) algorithms are meant to deal with the problem, the limited capacity to scale up processing power nonetheless led to shallow trees; until DeepMind engineers succeeded in unlocking the depth barrier by applying MCTS to layered value and policy networks.
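A bare-bones UCT variant of MCTS can be sketched on a toy game (a pile of stones; players alternate taking one or two; whoever takes the last stone wins). The game and constants are assumptions for illustration, and where AlphaGo would consult policy and value networks to prune breadth and depth, this sketch expands randomly and evaluates leaves with random rollouts:

```python
# Minimal UCT-style Monte Carlo Tree Search on a toy subtraction game.
# Illustrative only: AlphaGo replaces random expansion and rollouts
# with policy and value networks to tame breadth and depth.
import math
import random

random.seed(1)

def moves(n):
    """Legal moves: take 1 or 2 stones (while available)."""
    return [m for m in (1, 2) if m <= n]

def rollout(n):
    """Random playout; +1 if the player to move at pile n wins."""
    player = 1
    while True:
        n -= random.choice(moves(n))
        if n == 0:
            return player          # this player took the last stone
        player = -player

class Node:
    def __init__(self, n):
        self.n = n                 # stones left for the player to move
        self.visits = 0
        self.value = 0.0           # total value for the player to move
        self.children = {}         # move -> Node

def mcts(root, iters=2000, c=1.4):
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend through fully expanded nodes (UCT rule);
        # a child's value is the opponent's viewpoint, hence the minus
        while node.n > 0 and len(node.children) == len(moves(node.n)):
            node = max(node.children.values(),
                       key=lambda ch: -ch.value / (ch.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits)
                                       / (ch.visits + 1e-9)))
            path.append(node)
        # Expansion: add one untried move
        if node.n > 0:
            m = random.choice([m for m in moves(node.n)
                               if m not in node.children])
            child = Node(node.n - m)
            node.children[m] = child
            path.append(child)
            node = child
        # Evaluation: random rollout (AlphaGo: a value network);
        # an empty pile means the player to move has already lost
        result = rollout(node.n) if node.n > 0 else -1
        # Backpropagation: flip the sign at each level up the path
        for depth, nd in enumerate(reversed(path)):
            nd.visits += 1
            nd.value += result if depth % 2 == 0 else -result

root = Node(4)                     # start with 4 stones
mcts(root)
best = max(root.children, key=lambda m: root.children[m].visits)
```

From a pile of four, taking one stone leaves the opponent with a losing position; the search concentrates its visits on that move without ever being told the rule.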
AlphaGo’s seamless use of layered networks (aka deep convolutional neural networks) for intuitive learning, reinforcement, values, and policies was made possible by the homogeneity of Go’s playground and rules (no differentiated moves or search traps as in the game of chess).
From Intuition to Knowledge
Humans are the only species that combines intuitive (implicit) and symbolic (explicit) knowledge, with the dual capacity to transform the former into the latter and, in reverse, to improve the former with the latter’s feedback.
Applied to machine learning, that would require some continuity between supervised and unsupervised learning, which could be achieved by using neural networks for symbolic representations as well as for raw data:
- From explicit to implicit: symbolic descriptions built for specific contexts and purposes would be engineered into neural networks to be tried and improved by running them on data from targeted environments.
- From implicit to explicit: once designs have been tested and reinforced through millions of runs in relevant targets, it would be possible to re-engineer the results into improved symbolic descriptions.
Whereas the unsupervised learning of deep symbolic knowledge remains beyond the reach of intelligent machines, significant results can be achieved for “flat” semantic playgrounds, i.e., if the same semantics can be used to evaluate states and policies across networks:
- Supervised learning of the intuitive part of the game as observed in millions of moves by human experts.
- Unsupervised reinforcement learning from games of self-play.
- Planning and decision-making using MCTS methods to build, assess, and refine its own strategies.
Such deep and seamless integration would not be possible without the holistic nature of the game of Go.
Aesthetics Assessment & Holistic Knowledge
The specificity of the game of Go is twofold: complexity on the quantitative side, simplicity on the qualitative side, the former being the price of the latter.
As compared to chess, Go’s actual positions and prospective moves can only be assessed on the whole of the board, using a criterion that is best described as aesthetic, as it cannot be reduced to any metrics or handcrafted expert rules. Players do not make moves after a detailed analysis of local positions and an assessment of alternative scenarios, but follow their intuitive perception of the board.
As a consequence, the behavior of AlphaGo can be neatly and fully bound to the second category of knowledge defined above:
- As a game player it can be detached from actual reality concerns.
- As a Go player it doesn’t have to tackle any semantic complexity.
Harnessed to adequate computing power, the primary challenge for DeepMind engineers is to teach AlphaGo to transform its aesthetic intuitions into holistic knowledge without having to define their substance.
Further Readings

- AlphaGo & Non-zero-sum Contests
- AI & Embedded Insanity
- Detour from Turing Game
- Knowledge Architecture
- Sifting through a web of things
- Semantic Web: from Things to Memes
- Reinventing the wheel