2018: Clones vs Octopuses

In the footsteps of robots replacing workmen, deep learning bots look to boot out knowledge workers overwhelmed by muddy data.

Cloning Knowledge (Tadeusz Kantor, from “The Dead Class”)

Faced with that prospect, should humans try to learn deeper and faster than clones, or should they learn from octopuses and their smart hands?

Machine Learning & The Economics of Clones

As illustrated by scan-reading AI machines, the spread of learning AI technology into every nook and cranny introduces something like an exponential multiplier: compared to the power looms of the Industrial Revolution, which substituted machines for workers, deep learning substitutes replicators for machines; and contrary to power looms, there is no physical limit on the number of smart clones that can be deployed. So, however fast and deep humans can learn, clones are far more prolific: it’s a no-win situation. To get out of that conundrum humans have to secure a competitive edge, e.g some kind of knowledge that cannot be cloned.

Knowledge & Competition

To appraise humans’ learning sway over machines, one can borrow Spinoza’s categories of knowledge with regard to sources:

  1. Senses (views, sounds, smells, touches) or beliefs (as nurtured by the supposed common “sense”). Artificial sensors can compete with human ones, and smart machines fare even better once prejudiced beliefs are put into the equation.
  2. Reasoning, i.e the mental processing of symbolic representations. As demonstrated by AlphaGo, machines are bound to rapidly extend their competitive edge.
  3. Philosophy, which is by essence meant to bring together perceptions, intuitions, and symbolic representations. That’s where human intelligence could beat its artificial cousin, which is clueless when purposes are needed.

That assessment is borne out by evolution: the absolute dominance established by humans over other animal species comes from their use of knowledge, which can be summarized as:

  1. Use of symbolic representations.
  2. Ability to formulate and exchange representations of contexts, concerns, and policies.
  3. Ability to agree on stakes and cooperate on policies.

On that basis, the third dimension, i.e the use of symbolic knowledge to cooperate on non-zero-sum endeavors, can be used to draw the demarcation line between human and artificial intelligence:

  • Paths and paces of pursuits are part and parcel of the knowledge itself; the fact that both are mostly obviated by search engines gives humans some edge.
  • Operational knowledge is best understood as information put to use, and must include concerns and decision-making. But smart bots’ ubiquity and capabilities often sap information traceability and decision transparency, which makes room for humans to prevail.

So humans can find a clear competitive edge in this knowledge dimension because it relies on a combination of experience and thinking and is therefore hard to clone. Organizations should make sure that’s where smart systems take a back seat and humans take the lead.

Organization & Innovation

Innovation being at the root of competitive edge, understanding the role played by smart systems is a key success factor; and that role is largely determined by organization.

As epitomized by Henry Ford, industrial-era thinking associated innovation with top-down management and the specialization of execution:

  • At execution level manual tasks were to be fragmented and specialized.
  • At management level analysis and decision-making were to be centralized and abstracted.

That organizational paradigm puts a double restraint on innovation:

  • On the execution side the fragmentation of manual tasks prevents workers from effectively assessing and improving their performance.
  • On the management side knowledge is kept in conceptual boxes and bereft of feedback from actual uses.

That railing between smart brains and dumb hands may have worked well enough for manufacturing processes limited to material flows and subject to circumscribed and predictable technological changes. It didn’t last.

First, as such hierarchies necessarily grow with process complexity, overheads and rigidity force repeated pruning. Then, flat hierarchies are of limited use when information flows are to be combined with material ones, so enterprises have had to resort to matrix organizations. Finally, with the seamless integration of digital and material flows, perpetuating the traditional line between management and execution is bound to hamstring innovation:

  • Smart tools may be able to perform a wide range of physical tasks without human supervision, but the core of innovation as well as its front lines are where humans and machines collaborate in processing a mix of material and information flows, both learning from the experience.
  • Hierarchies and centralized decision-making are cut off from their information feeds when set in networked business environments colonized by smart bots on both sides of corporate boundaries.

Not surprisingly, these innovation trends seem to tally with the social dimension of knowledge.

Learning from the Octopus

The AI revolution has already broken all historical records of footprint (everything is affected) and speed (a matter of years). Given the length of human education cycles, appraising the consequences comes with some urgency, beginning with the disposal of two entrenched beliefs, one at the individual level, the other at the enterprise one:

At the individual level the new paradigm could be compared to the nervous system of octopuses: each arm gets its own brain and neurons, and so its own touch of knowledge and taste of decision-making.

From a broader (i.e enterprise) perspective, knowledge should be supported by two organizational layers: one direct and innovation-driven, between trusted co-workers; the other networked and knowledge-driven, between remote workers, trusted or otherwise.

Postscript


Transcription & Deep Learning

Humans looking for reassurance against the encroachment of artificial brains should try YouTube subtitles: whatever Google’s track record in natural language processing, the way its automated scribe writes down what is said in the movies is essentially useless.

A blank sheet of paper was copied on a Xerox machine.
This copy was used to make a second copy.
The second to make a third one, and so on…
Each copy as it came out of the machine was re-used to make the next.
This was continued for one hundred times, producing a book of one hundred pages. (Ian Burn)

Experience directly points to the probable cause of failure: the usefulness of real-time transcriptions is not a linear function of accuracy because every slip can be fatal, without backup or second chance. It’s like walking a line: for all practical purposes a single misunderstanding can throw away the thread of understanding, without a chance of retrieve or reprieve.
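A back-of-the-envelope calculation makes the nonlinearity plain (the accuracy figures below are assumed for illustration, not measured):

```python
# If any single slip can break the thread, the odds of an intact passage
# are per-word accuracy raised to the power of the word count.

def thread_survival(word_accuracy, n_words):
    """Chance that a passage of n_words is transcribed without a slip."""
    return word_accuracy ** n_words

for accuracy in (0.99, 0.95, 0.90):
    print(f"{accuracy:.0%} per word -> "
          f"{thread_survival(accuracy, 100):.2%} chance of an intact 100-word passage")
# 99% -> 36.60%, 95% -> 0.59%, 90% -> 0.00%
```

A scribe that is 95% right word by word thus loses the thread of virtually every hundred-word passage, which is why near-perfect accuracy is the only useful kind.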

Contrary to Turing machines, listeners have no finite states; and contrary to the sequence of symbols on tapes, tales are told by weaving together semantic threads. It ensues that stories are works in progress: readers can pause to review and consolidate meanings, but listeners have no other choice than punting on what comes to their mind, hoping that the fabric of the story will carry them through.

So, whereas automated scribes can deep-learn from written texts and recorded conversations, there is no way to do the same from what listeners understand. That’s the beauty of storytelling: words may be written but meanings are renewed each time the words are heard.


2017: What Did You Learn Last Year?

Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.

Deep Mind Learning (J. Bosch)

Deep Learning & the Depths of Intelligence

Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.

As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.

Like digging a tunnel from both ends, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that the effort comes to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.

Intelligence as a Cognitive Capability

Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:

  • Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
  • Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.

That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. On that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.

Intelligence as a Social Capability

Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:

  • Self-contained: problem-solving situations without opponent.
  • Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
  • Collaborative: non-zero-sum activities involving one or more intelligent agents.

That classification coincides with two basic divides regarding communication and social behaviors:

  1. To begin with, human behavior is critically different when interacting with living species (humans or animals) as opposed to machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
  2. Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.

A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.

Between Intuition and Reason

The astonishing performances of Google’s AlphaGo have been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non-symbolic knowledge to the benefit of a renewed rationality.

Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo’s decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.

Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.

From consumer behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.


AlphaGo & Non-Zero-Sum Contests

The recent and decisive wins of Google’s AlphaGo over the world’s best Go player have been hailed as a milestone on the path to general artificial intelligence, one that would be endowed with the same sort of capabilities as its human model. Yet such an assessment may reflect a somewhat mechanical understanding of human intelligence.

Zero-sum contest (Uccello)

What Machines Can Know

As previously noted, human intelligence relies on three categories of knowledge:

  1. Acquired through senses (views, sounds, smells, touches) or beliefs (as nurtured by our common “sense”). That is by nature prone to circumstances and prejudices.
  2. Built through reasoning, i.e the mental processing of symbolic representations. It is meant to be universal and open to analysis, but it offers no guarantee for congruence with actual reality.
  3. Attained through judgment bringing together perceptions, intuitions, and symbolic representations.

Given the exponential growth of their processing power, artificial contraptions are rapidly overtaking human beings with regard to perception and reasoning capabilities. Moreover, as demonstrated by the stunning success of AlphaGo, they may, sooner rather than later, take the upper hand for judgments based on fixed sets (including empty ones) of symbolic representations. Would that mean game over for humans?

Maybe not, as suggested by the protracted progress of IBM’s Watson for Oncology, which may come as a marker of AI limits where non-zero-sum games are concerned. And there is good reason for that: human intelligence has evolved against survival stakes, not for games’ sake, and its innate purpose is to make fateful decisions when faced with unpredictable prospects: while machines look for pointless wins, humans aim for meaningful victories.

What Animals Can Win

Left to themselves, games are meant to be pointless: winning or losing does not affect players in their otherwise worldly affairs. As a corollary, games intelligence can be disembodied, i.e detached from murky perceptions and freed from fuzzy down-to-earth rules. That’s not the case for real-life contests, especially the ones that drove the development of animal brains aeons ago; then, the constitutive and formative origins of intelligence had to rely on senses without sensors, reason without logic, and judgment without philosophy. The difference with gaming machines is therefore not so much about stakes as about the nature of built-in capabilities: animal intelligence has emerged from the need to focus on actual situations and immediate decision-making without the paraphernalia of science and technology. And since survival is by nature individual, the exercise of animal intelligence is intrinsically singular, otherwise (i.e had the outcomes been uniform) there could have been no selection. As far as animal intelligence is concerned opponents can only be enemies and winners are guaranteed to take all the spoils: no universal reason should be expected.

So animal intelligence adds survival stakes to the artificial variety, but it lacks the symbolic and cooperative dimensions.

How Humans Can Win

Given its unique symbolic capability, the human species has been granted a decisive superiority in the evolution race. Using symbolic representations to broaden the stakes, take externalities into account, and build strategies for a wider range of possibilities, human intelligence clearly marks the evolutionary edge between humans and other species. The combined capabilities to process non-symbolic (aka implicit) knowledge and symbolic representations may therefore define the playground for human and artificial intelligence. But that still leaves the cooperative dimension out of account.

As it happens, the ability to process symbolic representations has a compound impact on human intelligence by bringing about a qualitative leap not only with regard to knowledge but, perhaps even more critically, with regard to cooperation. Taking leaves from R. Wright and G. Lakoff, such a breakthrough would not be about problem solving but about social destiny: what characterizes human intelligence would be the ability to assess collective aims and consequently to build non-zero-sum strategies bringing shared benefits.
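The zero-sum vs non-zero-sum divide can be made concrete with a pair of toy payoff matrices (a minimal sketch; the games and payoff values are illustrative, not drawn from Wright or Lakoff):

```python
# Two toy two-player games; payoffs are (row player, column player).

# Zero-sum: one player's gain is exactly the other's loss.
zero_sum = {
    ("attack", "attack"): (0, 0),
    ("attack", "defend"): (1, -1),
    ("defend", "attack"): (-1, 1),
    ("defend", "defend"): (0, 0),
}

# Non-zero-sum (stag-hunt flavor): cooperation creates value that
# neither player could secure alone.
non_zero_sum = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 2),
    ("defect", "cooperate"): (2, 0),
    ("defect", "defect"): (1, 1),
}

def joint_payoff(game):
    """Total welfare per outcome; constant if and only if the game is zero-sum."""
    return {moves: a + b for moves, (a, b) in game.items()}

print(joint_payoff(zero_sum))      # every outcome sums to 0: no shared stakes
print(joint_payoff(non_zero_sum))  # mutual cooperation grows the pie to 6
```

In the first game intelligence can be reduced to finding winning moves; in the second it must also value the shared outcome, which is precisely where deep learning has yet to follow.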

Back to the general artificial intelligence project, the real challenge would be to generalize deep learning to non-zero-sum competition and its corollary, namely the combination and valuation of heterogeneous yet shared actual stakes.

However, as pointed out by Lee Sedol, “when it comes to human beings, there is a psychological aspect that one has to also think about.” In other words, as noted above, human intelligence has a native and inherent emotional dimension which may be an asset (e.g as a source of creativity) as well as a liability (when it blurs some hazards).


Agile Business Analysis: From Wonders to Logic

Time and again new recruits will ask about the role of business analysts. Considering that such a question is seldom heard from software engineers, are BAs more curious about their job, or are they standing on more tentative grounds? If that’s the case, agility would help them flip-flop between business quicksands and systems’ hard rocks.

How to make sense of business wonders (Hieronymus Bosch)

Holding the fort vs scouting outskirts

Systems architects and software engineers may have to meet esoteric business requirements, but their responsibility is first and foremost to guarantee the functional and economic sustainability of systems. On that account they are given licence to build solid walls and secure gateways, and to enforce their own languages and rules upon well vetted parties.

Business analysts don’t get such a free hand: while being straitened by software engineers’ constructs and constraints, their primary undertaking is to explore business wilds, reconnoitre competitors, trace new tracks, and learn the dialects of any nicknamed natives ready to trade.

No wonder the qualms of new business analysts.

Great businesses make their own rules

The best rules in business are the ones still unknown, as success is most often brought by disruptive initiatives taking advantage of previously undiscovered opportunities. It ensues that, at its core, the BA’s job description is to relentlessly look across the frontier for still uncharted businesses, and bring them back to the digitized world of shipshape business domains and processes.

For that purpose BAs will have to juggle with the fuzzy idiosyncrasies of new business openings until they can be aligned with the functionalities of “legacy” systems.

BA’s Agility

While usually presented as a software engineering hallmark, agility may be equally useful for business analysts as they have to balance two crossing perspectives:

  • Analysis: sorting detailed activities into business processes.
  • Synthesis: factoring out business functions and mapping them to systems capabilities.

That could be a challenging achievement if carried out sequentially: crossing back and forth between changing scope and steady capabilities could generate unsettling alternatives and unbounded complexity.

The agile development model is meant to tackle the difficulties through iterations and collaboration without being too specific about the kind of agility required from business analysts and software engineers.

Yet the apparent symmetry between the parties may be misleading: whereas software engineers don’t have (and shouldn’t even try) to second guess business analysts, business analysts shouldn’t forget that at the end of the day business expectations, however exotic or esoteric, will have to feed very conformist logical beasts.


AlphaGo: From Intuitive Learning to Holistic Knowledge

Brawn & Brain

The recent success of Google’s AlphaGo against Europe’s top player at the game of Go is widely recognized as a major breakthrough for Artificial Intelligence (AI), both because of the undertaking (Go is exponentially more complex than Chess) and the timing (it occurred much sooner than expected). As it happened, the leap can be credited as much to brawn as to brain: the former with a massive increase in computing power, the latter with an innovative combination of established algorithms.

Brawny Contest around Aesthetic Game (Kunisada)

That breakthrough and the way it has been achieved may seem to draw opposite perspectives about the future of AI: either the current conceptual framework is the best option, with brawny machines becoming brainier and, sooner or later, leaping over the qualitative gap separating them from their human makers; or it’s a quantitative delusion bound to drive brawnier machines and helpless humans down into that very same hole.

Could AlphaGo and its DeepMind makers point to a holistic bypass around that dilemma?

Taxonomy of Sources

Taking a leaf from Spinoza, one could begin by considering the categories of knowledge with regard to sources:

  1. The first category is achieved through our senses (views, sounds, smells, touches) or beliefs (as nurtured by our common “sense”). This category is by nature prone to circumstances and prejudices.
  2. The second is built through reasoning, i.e the mental processing of symbolic representations. It is meant to be universal and open to analysis, but it offers no guarantee for congruence with actual reality.
  3. The third is attained through philosophy which is by essence meant to bring together perceptions, intuitions, and symbolic representations.

Whereas there can’t be much controversy about the first two categories, the third leaves room for a wide range of philosophical tenets, from religion to science, collective ideologies, or spiritual transcendence. With today’s knowledge spread across smart devices and driven by the wisdom of crowds, philosophy seems to look more at big data than at big brother.

Despite (or because of) its focus on the second category, AlphaGo and its architectural feat may still carry some lessons for the whole endeavor.

Taxonomy of Representations

As already noted, the effectiveness of AI’s supporting paradigms has been bolstered by the exponential increase in available data and the processing power to deal with it. Not surprisingly, those paradigms are associated with two basic forms of representations aligned with the sources of knowledge, implicit for senses, and explicit for reasoning:

  • Designs based on symbolic representations allow for explicit information processing: data is “interpreted” into information which is then put to use as knowledge governing behaviors.
  • Designs based on neural networks are characterized by implicit information processing: data is “compiled” into neural connections whose weights (embodying knowledge) are tuned iteratively on the basis of behavioral feedback.
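The duality can be illustrated with a deliberately minimal sketch (a toy decision in Python, bearing no relation to AlphaGo’s actual designs): the same knowledge can live as an explicit, traceable rule, or be compiled into the weights of a bare-bones perceptron:

```python
import random

random.seed(1)

# Toy decision: does a point (x, y) lie above the line y = x ?

# Explicit design: a symbolic rule, directly readable and traceable.
def rule_based(x, y):
    return y > x

# Implicit design: the same knowledge "compiled" into weights,
# tuned iteratively from behavioral feedback.
w = [0.0, 0.0, 0.0]  # weights for x, y, and bias
for _ in range(10000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    target = 1 if y > x else -1
    output = 1 if w[0] * x + w[1] * y + w[2] > 0 else -1
    if output != target:  # classic perceptron rule: learn from mistakes only
        w[0] += target * x
        w[1] += target * y
        w[2] += target

def network_based(x, y):
    return w[0] * x + w[1] * y + w[2] > 0

print(rule_based(0.2, 0.5), network_based(0.2, 0.5))  # both should print True
print(w)  # the knowledge is in there, but only implicitly
```

Everything the rule states in one readable line, the network spreads over numbers that cannot be read directly; hence the trade-off between traceability and effectiveness.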

Since that duality mirrors human cognitive capabilities, brainy machines built on those designs are meant to combine rationality with effectiveness:

  • Symbolic representations support the transparency of ends and the traceability of means, allowing for hierarchies of purposes, actual or social.
  • Neural networks, helped by their learning kernels operating directly on data, speed up the realization of concrete purposes based on the supporting knowledge implicitly embodied as weighted connections.

The potential of such approaches has been illustrated by internet-based language processing: pragmatic associations “observed” in billions of discourses are progressively complementing and even superseding syntactic and semantic rules in web-based parsers.

On that point too AlphaGo has focused ambitions, since it only deals with non-symbolic inputs, namely a collection of Go moves (about 30 million in total) from expert players. But that limit can be turned into a benefit, as it brings homogeneity and transparency, and therefore a more effective combination of algorithms: brawny ones for actual moves and intuitive knowledge from the best players, brainy ones for putative moves, planning, and policies.

Teaching them how to work together is arguably a key factor of the breakthrough.

Taxonomy of Learning

As should be expected from intelligent machines, their impressive track record fully depends on their learning capabilities. Whereas those capabilities are typically applied separately to implicit (or non-symbolic) and explicit (or symbolic) contents, bringing them under the control of the same cognitive engine, as human brains routinely do, has long been recognized as a primary objective for AI.

Practically, that has been achieved with neural networks by combining supervised and unsupervised learning: human experts help systems to sort the wheat from the chaff, before letting them improve their expertise through millions of self-play games.
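The recipe can be caricatured in a few lines of Python (a toy stand-in only: one-pile Nim instead of Go, a lookup table instead of a neural network, and a crude reinforcement rule; nothing here reflects DeepMind’s actual algorithms):

```python
import random

random.seed(0)

# One-pile Nim: take 1-3 stones, taking the last one wins.
N = 21
ACTIONS = (1, 2, 3)
# Action weights per pile size -- a lookup-table stand-in for a policy network.
weights = {pile: {a: 0.0 for a in ACTIONS if a <= pile} for pile in range(1, N + 1)}

def pick(pile, greedy=False):
    acts = list(weights[pile])
    if greedy:
        return max(acts, key=lambda a: weights[pile][a])
    lo = min(weights[pile].values())  # shift so sampling weights stay positive
    return random.choices(acts, weights=[weights[pile][a] - lo + 0.1 for a in acts])[0]

# Stage 1 -- supervised learning: imitate an "expert" who leaves a
# multiple of 4 stones whenever possible.
for pile in weights:
    expert = pile % 4 if pile % 4 in weights[pile] else random.choice(list(weights[pile]))
    weights[pile][expert] += 1.0

# Stage 2 -- reinforcement through self-play: reward the eventual
# winner's moves, penalize the loser's.
for _ in range(20000):
    pile, player, trajectory = N, 0, []
    while pile > 0:
        a = pick(pile)
        trajectory.append((player, pile, a))
        pile -= a
        player = 1 - player
    winner = 1 - player  # whoever took the last stone
    for p, s, a in trajectory:
        weights[s][a] += 0.1 if p == winner else -0.1

print([pick(p, greedy=True) for p in range(1, 9)])
# expect p % 4 for winnable piles; piles 4 and 8 are lost whatever the move
```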

Yet, the achievements of leading AI players have marked out the limits of these solutions, namely the qualitative gap between playing as well as the best human players and beating them. While the former outcome can be achieved through likelihood-based decision-making, the latter requires the development of original schemes, and that brings quantitative and qualitative obstacles:

  • Contrary to actual moves, possible ones have no limit, hence the exponential increase in search trees.
  • Original schemes are to be devised with regard to values and policies.

Overcoming both challenges with a single scheme may be seen as the critical achievement of DeepMind engineers.

Mastering the Breadth & Depth of Search Trees

Using neural networks for the evaluation of actual states as well as the sampling of policies comes with exponential increases in the breadth and depth of search trees. Monte Carlo Tree Search (MCTS) algorithms are meant to deal with the problem, but the limited capacity to scale up processing power would nonetheless have led to shallow trees, until DeepMind engineers unlocked the depth barrier by applying MCTS to layered value and policy networks.
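How MCTS threads its way between breadth and depth can be sketched on the same toy game (again an assumed, minimal setup: a uniform prior stands in for the policy network and a random playout for the value network; AlphaGo’s real networks are learned, layered, and vastly larger):

```python
import math, random

random.seed(1)

ACTIONS = (1, 2, 3)  # one-pile Nim again: take 1-3, last stone wins

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def rollout(pile):
    """Stand-in for a value network: evaluate a position by random play.
    Returns +1 if the player to move from `pile` eventually wins, else -1."""
    player = 0
    while pile > 0:
        pile -= random.choice(legal(pile))
        player = 1 - player
    return 1.0 if player == 1 else -1.0  # the last mover won

class Node:
    def __init__(self, pile):
        self.pile, self.children = pile, {}
        self.N = {a: 0 for a in legal(pile)}    # visit counts
        self.W = {a: 0.0 for a in legal(pile)}  # cumulative values

def select(node, c=1.4):
    """PUCT-style choice: exploit high mean value, explore low visit counts."""
    total = sum(node.N.values()) + 1
    prior = 1.0 / len(node.N)  # stand-in for a policy network: uniform prior
    def score(a):
        q = node.W[a] / node.N[a] if node.N[a] else 0.0
        return q + c * prior * math.sqrt(total) / (1 + node.N[a])
    return max(node.N, key=score)

def simulate(node):
    """One simulation; returns the value seen by the player to move at `node`."""
    if node.pile == 0:
        return -1.0  # the previous player took the last stone: we lost
    a = select(node)
    if a not in node.children:  # expand and evaluate the new leaf
        node.children[a] = Node(node.pile - a)
        v = -rollout(node.pile - a)
    else:                       # descend; negate the opponent's value
        v = -simulate(node.children[a])
    node.N[a] += 1
    node.W[a] += v
    return v

def best_move(pile, n_simulations=3000):
    root = Node(pile)
    for _ in range(n_simulations):
        simulate(root)
    return max(root.N, key=root.N.get)  # play the most visited move

print([best_move(p) for p in range(1, 8)])
# expect p % 4 for winnable piles (pile 4 is lost whatever the move)
```

Replacing both stand-ins with learned, layered value and policy networks is what lets the tree stay deep without drowning in breadth.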

AlphaGo’s seamless use of layered networks (aka Deep Convolutional Neural Networks) for intuitive learning, reinforcement, values, and policies was made possible by the homogeneity of Go’s playground and rules (no differentiated moves or search traps as in the game of Chess).

From Intuition to Knowledge

Humans are the only species that combines intuitive (implicit) and symbolic (explicit) knowledge, with the dual capacity to transform the former into the latter and in reverse to improve the former with the latter’s feedback.

Applied to machine learning, that would require some continuity between supervised and unsupervised learning, which would be achieved with neural networks being used for symbolic representations as well as for raw data:

  • From explicit to implicit: symbolic descriptions built for specific contexts and purposes would be engineered into neural networks to be tried and improved by running them on data from targeted environments.
  • From implicit to explicit: once designs have been tested and reinforced through millions of runs in relevant targets, it would be possible to re-engineer the results into improved symbolic descriptions.
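As a thought experiment (a hypothetical scheme, not an established algorithm), the round trip can be miniaturized to a single threshold rule and a one-neuron “network”:

```python
import math, random

random.seed(2)

# Explicit knowledge: initial rule "flag if temperature > 30 C",
# encoded with temperature scaled to [0, 1] over a 0-50 C range.
w, b = 10.0, -6.0  # sign(10*x - 6) == sign(t - 30)

# The environment the data reflects: the real threshold is 25 C.
samples = [random.uniform(0, 50) for _ in range(2000)]
data = [(t / 50.0, 1.0 if t > 25 else 0.0) for t in samples]

# From explicit to implicit: tune the seeded weights on observations.
for _ in range(200):  # epochs of plain stochastic gradient descent
    for x, label in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # logistic output
        g = p - label                          # gradient of the log-loss
        w -= 0.05 * g * x
        b -= 0.05 * g

# From implicit to explicit: re-engineer weights into an updated rule.
threshold = -b / w * 50  # decision boundary, back in degrees Celsius
print(f"revised rule: flag if temperature > {threshold:.1f} C")  # should be near 25
```

The seeded rule is improved by the data, and the improvement can be read back in symbolic form; scaling that loop to deep symbolic knowledge is precisely what remains out of reach.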

Whereas unsupervised learning of deep symbolic knowledge remains beyond the reach of intelligent machines, significant results can be achieved for “flat” semantic playgrounds, i.e if the same semantics can be used to evaluate states and policies across networks:

  1. Supervised learning of the intuitive part of the game as observed in millions of moves by human experts.
  2. Unsupervised reinforcement learning from games of self-play.
  3. Planning and decision-making using Monte Carlo Tree Search (MCTS) methods to build, assess, and refine its own strategies.

Such deep and seamless integration would not be possible without the holistic nature of the game of Go.

Aesthetics Assessment & Holistic Knowledge

The specificity of the game of Go is twofold: complexity on the quantitative side, simplicity on the qualitative side, the former being the price of the latter.

As compared to Chess, Go’s actual positions and prospective moves can only be assessed on the whole of the board, using a criterion that is best defined as aesthetic as it cannot be reduced to any metrics or handcrafted expert rules. Players will not make moves after a detailed analysis of local positions and assessment of alternative scenarii, but will follow their intuitive perception of the board.

As a consequence, the behavior of AlphaGo can be neatly and fully bound with the second level of knowledge defined above:

  • As a game player it can be detached from actual reality concerns.
  • As a Go player it doesn’t have to tackle any semantic complexity.

Given a fitted harness of adequate computing power, the primary challenge of DeepMind engineers is to teach AlphaGo to transform its aesthetic intuitions into holistic knowledge without having to define their substance.
