The recent and decisive wins of Google’s AlphaGo over the world’s best Go players have been hailed as a milestone on the path to general artificial intelligence, one that would be endowed with the same sort of capabilities as its human model. Yet such an assessment may reflect a somewhat mechanical understanding of human intelligence.

What Machines Can Know
As previously noted, human intelligence relies on three categories of knowledge:
- Knowledge acquired through the senses (sights, sounds, smells, touch) or through beliefs (as nurtured by our common “sense”); it is by nature prone to circumstances and prejudice.
- Knowledge built through reasoning, i.e. the mental processing of symbolic representations; it is meant to be universal and open to analysis, but it offers no guarantee of congruence with actual reality.
- Knowledge attained through judgment, which brings together perceptions, intuitions, and symbolic representations.
Given the exponential growth of their processing power, artificial contraptions are rapidly overtaking human beings in perception and reasoning capabilities. Moreover, as demonstrated by the stunning success of AlphaGo, they may, sooner rather than later, take the upper hand for judgments based on fixed sets (including empty ones) of symbolic representations. Would that mean game over for humans?
Maybe not, as suggested by the protracted progress of IBM’s Watson for Oncology, which may stand as a marker of AI’s limits where non-zero-sum games are concerned. And there is good reason for that: human intelligence has evolved against survival stakes, not for games’ sake, and its innate purpose is to make fateful decisions when faced with unpredictable prospects: while machines look for pointless wins, humans aim for meaningful victories.
What Animals Can Win
Left to their own devices, games are meant to be pointless: winning or losing does not affect players in their otherwise worldly affairs. As a corollary, game intelligence can be disembodied, i.e. detached from murky perceptions and freed from fuzzy down-to-earth rules. That’s not the case for real-life contests, especially the ones that drove the development of animal brains aeons ago; then, the constitutive and formative origins of intelligence had to rely on senses without sensors, reason without logic, and judgment without philosophy. The difference with gaming machines is therefore not so much about stakes as about the nature of built-in capabilities: animal intelligence has emerged from the need to focus on actual situations and immediate decision-making, without the paraphernalia of science and technology. And since survival is by nature individual, the exercise of animal intelligence is intrinsically singular; otherwise (i.e. had the outcomes been uniform) there could have been no selection. As far as animal intelligence is concerned, opponents can only be enemies and winners are guaranteed to take all the spoils: no universal reason should be expected.
So animal intelligence adds survival considerations to artificial intelligence, but it lacks the symbolic and cooperative dimensions.
How Humans Can Win
Given its unique symbolic capability, the human species has been granted a decisive edge in the evolutionary race. By using symbolic representations to broaden the stakes, take externalities into account, and build strategies for a wider range of possibilities, human intelligence clearly marks the evolutionary boundary between humans and other species. The combined capability to process non-symbolic (aka implicit) knowledge and symbolic representations may therefore define the playground for human and artificial intelligence. But that still leaves out the cooperative dimension.
As it happens, the ability to process symbolic representations has a compound impact on human intelligence, bringing about a qualitative leap not only with regard to knowledge but, perhaps even more critically, with regard to cooperation. Taking a leaf from R. Wright and G. Lakoff, such a breakthrough would not be about problem solving but about social destiny: what characterizes human intelligence would be the ability to assess collective aims and consequently to build non-zero-sum strategies that bring shared benefits.
Back to the general artificial intelligence project, the real challenge would be to generalize deep learning to non-zero-sum competition and its corollary, namely the combination and valuation of heterogeneous yet shared actual stakes.
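The zero-sum / non-zero-sum distinction at the heart of this argument can be made concrete with payoff matrices. Below is a minimal sketch in Python (the games and payoff values are standard textbook illustrations, not drawn from the text): in a zero-sum game every outcome’s payoffs cancel out, so one player’s win is exactly the other’s loss, whereas a non-zero-sum game admits outcomes with shared gains or shared losses.

```python
def is_zero_sum(payoffs):
    """Return True if every joint outcome's payoffs sum to zero.

    payoffs: dict mapping (action_a, action_b) -> (payoff_a, payoff_b)
    """
    return all(a + b == 0 for a, b in payoffs.values())

# Matching pennies: strictly competitive, like Go -- the setting
# where AlphaGo-style self-play training excels.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# Prisoner's dilemma: a non-zero-sum game where mutual cooperation
# yields shared benefits that no purely adversarial play can reach.
prisoners_dilemma = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"): (-3, 0),
    ("defect", "cooperate"): (0, -3),
    ("defect", "defect"): (-2, -2),
}

print(is_zero_sum(matching_pennies))   # True
print(is_zero_sum(prisoners_dilemma))  # False
```

The point of the sketch is that self-play optimization has a well-defined objective only in the first case; in the second, “winning” requires valuing heterogeneous, shared stakes, which is precisely the generalization challenge named above.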
However, as pointed out by Lee Sedol, “when it comes to human beings, there is a psychological aspect that one has to also think about.” In other words, as noted above, human intelligence has a native and inherent emotional dimension, which may be an asset (e.g. as a source of creativity) as well as a liability (when it blurs some hazards).
Further Readings
- Projects as non-zero sum games
- AI & Embedded Insanity
- Detour from Turing Game
- Knowledge Architecture
- AlphaGo: From Intuitive Learning to Holistic Knowledge
- Semantic Web: from Things to Memes
External Links
- R. Wright, “Nonzero: The Logic of Human Destiny”
- G. Lakoff, “The Political Mind”
- “In a Huge Breakthrough, Google’s AI Beats a Top Player at the Game of Go”, Wired
- “Showdown”, The Economist
- “Mastering the Game of Go …”, Nature
- Ariana Eunjung Cha, “Building an Artificial Brain”, Washington Post
- Jerry Kaplan, “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence”, Yale University Press
- Casey Ross, Ike Swetlitz, “The Limits of Watson for Oncology”, STAT News