Content discovery and the game of Go can be used to illustrate the strengths and limits of artificial intelligence.

Game of Go: Closed Ground, Non-Semantic Charts
The conclusive successes of Google’s AlphaGo against the world’s best players are best understood in relation to the characteristics of the game of Go:
- Contrary to real-life competitions, games are set on closed and standalone playgrounds detached from actual concerns. As a consequence, players (human or artificial) can factor emotions out of cognitive behaviors.
- Contrary to games like chess, Go’s playground is uniform and can be mapped without semantic distinctions between situations or moves. Whereas symbolic knowledge, explicit or otherwise, is still required for good performance, excellence can only be achieved through holistic assessments based on intuition and implicit knowledge.
Both characteristics play fully to the strengths of AI, in particular computing power (to explore the playground and alternative strategies) and detachment (when decisions have to be made).
Content Discovery: Open Grounds, Semantic Charts
Content discovery platforms like Outbrain or Taboola are meant to suggest further (commercial) bearings to online users. Compared to the game of Go, that mission clearly goes in the opposite direction:
- Channels may be virtual but users are human, with real emotions and concerns. And they are offered proxy grounds not so much to be explored as to be endlessly redefined and made more alluring.
- Online strolls may be aimless and discoveries fortuitous, but if content discovery devices are to underwrite themselves, they must bring potential customers along monetized paths. Hence the hitch: artificial brains need some cues about what readers have in mind.
That makes content discovery a challenging task for artificial coaches, as they have to usher wanderers with idiosyncratic but unknown motivations through boundless expanses of symbolic shopping fields.
What Would Eliza Say
When AI was still about human thinking, Alan Turing devised a test to check the ability of a machine to exhibit intelligent behavior. Since the computing power available then was several orders of magnitude below today’s capacities, the test was not about intelligence itself, but about the ability to conduct text-based dialogues equivalent to, or indistinguishable from, those of a human. That approach was famously illustrated by Eliza, a program able to beguile humans in conversations without any understanding of their meaning.
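Eliza’s trick can be sketched in a few lines: surface patterns are matched against the user’s words and reflected back through canned templates, with no model of meaning anywhere. The rules below are illustrative assumptions, not Weizenbaum’s original script:

```python
import re

# A minimal Eliza-style responder. The rules are hypothetical examples;
# the point is that matching and reflecting surface patterns suffices
# to sustain a dialogue without any understanding of its meaning.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned reflection of the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

For instance, `respond("I am tired")` yields “Why do you say you are tired?”, while anything unmatched falls back to “Please go on.” The program never represents what tiredness is; it only rearranges the words it was given.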
More than half a century later, here are some suggestions of leading content discovery engines:
- After reading about the Ecuador quake or Syrian rebels, one is supposed to be interested in 8 tips for keeping one’s liver healthy, or 20 reasons why attempts at losing weight fail.
- After reading about growing coffee in Ethiopia, one is supposed to be interested in the mansions of world billionaires, or a shepherd pup surviving after being lost at sea for a month.
It’s safe to assume that both would have flunked the Turing Test.
Further Reading
- Sifting through a web of things
- Semantic Web: from Things to Memes
- Stories In The Logosphere
- AI & Embedded Insanity
- Detour from Turing Game
- Selfies & augmented Identities
- AlphaGo: From Intuitive Learning to Holistic Knowledge
- AlphaGo & Non-Zero-Sum Contests