2018: Clones vs Octopuses

In the footsteps of robots replacing workmen, deep learning bots look to boot out knowledge workers overwhelmed by muddy data.

Cloning Knowledge (Tadeusz Kantor, from “The Dead Class”)

Faced with that prospect, should humans try to learn deeper and faster than clones, or should they learn from octopuses and their smart hands ?

Machine Learning & The Economics of Clones

As illustrated by scan-reading AI machines, the spreading of learning AI technology into every nook and cranny introduces something like an exponential multiplier: compared to the power looms of the Industrial Revolution, which substituted machines for workers, deep learning substitutes replicators for machines; and contrary to power looms, there is no physical limitation on the number of smart clones that can be deployed. So, however fast and deep humans can learn, clones are much too prolific: it’s a no-win situation. To get out of that conundrum humans have to lay their hands on a competitive edge, e.g some kind of knowledge that cannot be cloned.

Knowledge & Competition

Appraising humans’ learning sway over machines, one can draw from Spinoza’s categories of knowledge with regard to sources:

  1. Senses (sights, sounds, smells, touches) or beliefs (as nurtured by the supposed common “sense”). Artificial sensors can compete with human ones, and smart machines fare much better once prejudiced beliefs are put into the equation.
  2. Reasoning, i.e the mental processing of symbolic representations. As demonstrated by AlphaGo, machines are bound to rapidly extend their competitive edge.
  3. Philosophy, which is by essence meant to bring together perceptions, intuitions, and symbolic representations. That’s where human intelligence could beat its artificial cousin, which is clueless when purposes are needed.

That assessment is borne out by evolution: the absolute dominance established by humans over other animal species comes from their use of knowledge, which can be summarized as:

  1. Use of symbolic representations.
  2. Ability to formulate and exchange representations of contexts, concerns, and policies.
  3. Ability to agree on stakes and cooperate on policies.

On that basis, the third dimension, i.e the use of symbolic knowledge to cooperate on non-zero-sum endeavors, can be used to draw the demarcation line between human and artificial intelligence:

  • Paths and paces of pursuits as part and parcel of the knowledge itself. The fact that both are mostly obviated by search engines gives humans some edge.
  • Operational knowledge is best understood as information put to use, and must include concerns and decision-making. But smart bots’ ubiquity and capabilities often sap information traceability and decision transparency, which makes room for humans to prevail.

So humans can find a clear competitive edge in this knowledge dimension because it relies on a combination of experience and thinking and is therefore hard to clone. Organizations should make sure that’s where smart systems stand back and humans step up.

Organization & Innovation

Innovation being at the root of competitive edge, understanding the role played by smart systems is a key success factor, one that is to be defined by organization.

As epitomized by Henry Ford, industrial-era thinking associated innovation with top-down management and the specialization of execution:

  • At execution level manual tasks were to be fragmented and specialized.
  • At management level analysis and decision-making were to be centralized and abstracted.

That organizational paradigm puts a double restraint on innovation:

  • On the execution side, the fragmentation of manual tasks prevents workers from effectively assessing and improving their performance.
  • On the management side, knowledge is kept in conceptual boxes and bereft of feedback from actual uses.

That railing between smart brains and dumb hands may have worked well enough for manufacturing processes limited to material flows and subject to circumscribed and predictable technological changes. It didn’t last.

First, as such hierarchies necessarily grow with process complexity, overheads and rigidity force repeated pruning. Then, flat hierarchies are of limited use when information flows are to be combined with material ones, so enterprises have had to resort to matrix organizations. Finally, with the seamless integration of digital and material flows, perpetuating the traditional line between management and execution is bound to hamstring innovation:

  • Smart tools may be able to perform a wide range of physical tasks without human supervision, but the core of innovation as well as its front lines are where humans and machines collaborate in processing a mix of material and information flows, both learning from the experience.
  • Hierarchies and centralized decision-making are cut off from their feeders when set in networked business environments colonized by smart bots on both sides of corporate boundaries.

Not surprisingly, these innovation trends seem to tally with the social dimension of knowledge.

Learning from the Octopus

The AI revolution has already broken all historical records in footprint (everything is affected) and speed (a matter of years). Given the length of human education cycles, appraising the consequences comes with some urgency, beginning with the disposal of entrenched beliefs about how learning works at individual and collective levels.

At individual level the new paradigm could be compared to the nervous system of octopuses: each arm gets its own brain and neurons, and so its own touch of knowledge and taste for decision-making.

On a broader (i.e enterprise) perspective, knowledge should be supported by two organizational layers, one direct and innovation-driven between trusted co-workers, the other networked and knowledge-driven between remote workers, trusted or otherwise.


Beans Must Be Counted, One Way and the Other

Preamble

Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition impels enterprises to scrutinize the resources they employ and the outcomes of their products, pushing for the assessment of internal activities based on some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting aside creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.

To begin with rationality: unlike traditional products, there is no market pricing mechanism that could match software development costs with customers’ value. As a consequence business stakeholders and systems engineers prefer to play safe and keep their respective assessments on opposite banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.

Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics uses prices for two key purposes, external and internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performance; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of the software footprint over both sides of the crumbling fences between enterprise systems and business environments make software economics a necessary component of enterprise governance, so a tally of software beans should not be an option.


2017: What Did You Learn Last Year ?

Sometimes the future is best seen through rear-view mirrors; given the advances of artificial intelligence (AI) in 2016, hindsight may help for the year to come.

Deep Mind Learning (J. Bosch)

Deep Learning & the Depths of Intelligence

Deep learning may not have been discovered in 2016 but Google’s AlphaGo has arguably brought a new dimension to artificial intelligence, something to be compared to unearthing the spherical Earth.

As should be expected for machine capabilities, artificial intelligence has long been fettered by technological handcuffs; so much so that expert systems were initially confined to a flat earth of knowledge to be explored through cumbersome sets of explicit rules. But the exponential increase in computing power has allowed neural networks to take a bottom-up perspective, mining for implicit knowledge hidden in large amounts of raw data.

Like digging tunnels from both extremities, it took some time to bring together top-down and bottom-up schemes, namely explicit (rule-based) and implicit (neural network-based) knowledge processing. But now that it has come to fruition, the alignment of perspectives puts a new light on the cognitive and social dimensions of intelligence.

Intelligence as a Cognitive Capability

Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:

  • Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
  • Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.

That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. Along that understanding, it would be safe to assume that systems with enough computing power will sooner or later be able to better the best of animal species, in particular in the case of imperfect inputs.

Intelligence as a Social Capability

Alongside the type of inputs, the second criterion to be considered is obviously the type of output (aka solution). And since classifications are meant to be built on purpose, a typology of AI outcomes should focus on relationships between agents, humans or otherwise:

  • Self-contained: problem-solving situations without opponent.
  • Competitive: zero-sum conflictual activities involving one or more intelligent opponents.
  • Collaborative: non-zero-sum activities involving one or more intelligent agents.

That classification coincides with two basic divides regarding communication and social behaviors:

  1. To begin with, human behavior is critically different when interacting with living species (humans or animals) than with machines (dumb or smart). In that case the primary factor governing intelligence is the presence, real or supposed, of beings with intentions.
  2. Then, and only then, communication may take different forms depending on languages. In that case the primary factor governing intelligence is the ability to share symbolic representations.

A taxonomy of intelligence with regard to cognitive (reason vs intuition) and social (symbolic vs non-symbolic) capabilities may help to clarify the role of AI and the importance of deep learning.

Between Intuition and Reason

The astonishing performance of Google’s AlphaGo has been rightly explained by a qualitative breakthrough in learning capabilities, itself enabled by the two quantitative factors of big data and computing power. But beyond that success, DeepMind (AlphaGo’s maker) may have pioneered a new approach to intelligence by harnessing both symbolic and non symbolic knowledge to the benefit of a renewed rationality.

Perhaps surprisingly, intelligence (a capability) and reason (a tool) may turn into uneasy bedfellows when the former is meant to include intuition while the latter is identified with logic. As it happens, merging intuitive and reasoned knowledge can be seen as the nexus of AlphaGo’s decisive breakthrough, as it replaces abrasive interfaces with smart full-duplex neural networks.

Intelligent devices can now process knowledge seamlessly back and forth, left and right: borne by DeepMind’s smooth cognitive cogwheels, learning from factual observations can suggest or reinforce the symbolic representation of emerging structures and behaviors, and in return symbolic representations can be used to guide big data mining.

From consumers behaviors to social networks to business marketing to supporting systems, the benefits of bridging the gap between observed phenomena and explicit causalities appear to be boundless.


Business Agility vs Systems Entropy

Synopsis

As already noted, the seamless integration of business processes and IT systems may bring new relevancy to the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets.

Agility & Orientation (László Moholy-Nagy)

Of particular interest for today’s business operational decision-making is the orientation step, i.e the actual positioning of actors and the associated cognitive representations; the point being to use AI deep learning capabilities to surmise opponents’ plans and misdirect their anticipations. That new dimension and its focus on information bring back cybernetics as a tool for enterprise governance.

In the Loop: OODA & Information Processing

Whatever the topic (engineering, business, or architecture), the concept of agility cannot be understood without defining some supporting context. For OODA that would include: territories (markets) for observations (data); maps for orientation (analytics); business objectives for decisions; and supporting systems for action.

OODA loop and its actual (red) and symbolic (blue) contexts.

One step further, contexts may be readily matched with systems descriptions, as sketched below:

  • Business contexts (territories) for observations.
  • Models of business objects (maps) for orientation.
  • Business logic (objectives) for decisions.
  • Business processes (supporting systems) for action.

The OODA loop and System Perspectives
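
As a minimal illustration of that matching, consider the following sketch; the class and method names are placeholders, not an actual framework:

```python
# Minimal sketch of the OODA loop matched with systems descriptions;
# all names are illustrative placeholders, not an actual framework.

class OODALoop:
    def __init__(self, territory, maps, objectives, processes):
        self.territory = territory    # business contexts: source of observations
        self.maps = maps              # models of business objects: orientation
        self.objectives = objectives  # business logic: decisions
        self.processes = processes    # business processes: supporting actions

    def cycle(self):
        data = self.territory.observe()              # Observation
        position = self.maps.orient(data)            # Orientation
        decision = self.objectives.decide(position)  # Decision
        return self.processes.act(decision)          # Action
```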

That provides a unified description of the different aspects of business agility, from the OODA loop and operations to architectures and engineering.

Architectures & Business Agility

Once the contexts are identified, agility in the OODA loop will depend on architecture consistency, plasticity, and versatility.

Architecture consistency (left) is supposed to be achieved by systems engineering outside of the OODA loop:

  • Technical architecture: alignment of actual systems and territories (red) so that actions and observations can be kept congruent.
  • Software architecture: alignment of symbolic maps and objectives (blue) so that orientation and decisions can be continuously adjusted.

Functional architecture (right) is to bridge the gap between technical and software architectures and provide for operational coupling.

Business Agility: systems architectures and business operations

Operational coupling depends on functional architecture and is carried on within the OODA loop. The challenge is to change tack on-the-fly with minimum friction between actual and symbolic contexts, i.e:

  • Discrepancies between business objects (maps and orientation) and business contexts (territories and observation).
  • Departures between business logic (objectives and decisions) and business processes (systems and actions).

When positive, operational coupling associates business agility with its architecture counterpart, namely plasticity and versatility; when negative, it suffers from frictions, or what cybernetics calls entropy.

Systems & Entropy

Taking a leaf from thermodynamics, cybernetics defines entropy as a measure of the (supposedly negative) variation in the value of the information supporting the control of viable systems.
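
For reference, the underlying measure comes from Shannon’s information theory; a minimal formal sketch, assuming a discrete source X with outcome probabilities p(x):

\[ H(X) = -\sum_{x} p(x)\,\log_2 p(x) \]

Entropy is maximal when outcomes are equally likely (maximum confusion) and null when one outcome is certain; the cybernetic reading tracks its variation in the information supporting control.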

With regard to corporate governance and operational decision-making, entropy arises from faults between environments and symbolic surrogates, either for objects (misleading orientations from actual observations) or activities (unforeseen consequences of decisions when carried out as actions).

So long as architectures and operations were set along different time-frames (e.g strategic and tactical), cybernetics was of limited relevancy. But the seamless integration of data analytics, operational decision-making, and IT supporting systems puts a new light on the role of entropy, as illustrated by Boyd’s OODA and its orientation component.

Orientation & Agility

While much has been written about how data analytics and operational decision-making can be neatly and easily fitted into the OODA paradigm, particular attention is to be paid to orientation.

As noted before, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: the positioning of an agent with regard to external (e.g spatial) coordinates, possibly qualified with the agent’s abilities to observe, move, or act.
  • Symbolic: the positioning of an agent with regard to his own internal (e.g beliefs or aims) references, possibly mixed with the known or presumed orientation of other agents, opponents or associates.

That dual understanding underlines the importance of symbolic representations in getting competitive edges, either directly through accurate and up-to-date orientation, or indirectly by inducing opponents’ disorientation.

Agility vs Entropy

Competition in networked digital markets is carried out at enterprise gates, which puts the OODA loop at the nexus of information flows. As a corollary, what is at stake is not limited to immediate business gains but extends to corporate knowledge and enterprise governance; translated into cybernetics parlance, a competitive edge would depend on enterprise ability to export entropy, that is to decrease confusion and disorder inside, and increase it outside.

Working on that assumption, one should first characterize the flows of information to be considered:

  • Territories and observations: identification of business objects and events, collection and analysis of associated data.
  • Maps and orientations: structured and consistent description of business domains.
  • Objectives and decisions: structured and consistent description of business activities and rules.
  • Systems and actions: business processes and capabilities of supporting systems.

Static assessment of technical and software architectures for respectively observation and decision

Then, a static assessment of information flows would start with the standing of technical and software architecture with regard to competition:

  • Technical architecture: how the alignment of operations and resources facilitate actions and observations.
  • Software architecture: how the combined descriptions of business objects and logic facilitate orientation and decision.

A dynamic assessment would be carried out within the OODA loop and deal with the role of functional architecture in support of operational coupling:

  • How the mapping of territories’ identities and features help observation and orientation.
  • How decision-making and the realization of business objectives are supported by processes’ designs.

Dynamic assessment of decision-making and the realization of business objectives as supported by processes’ designs.

Assuming a corporate cousin of Maxwell’s demon with deep learning capabilities standing at the gates of its OODA loop, its job would be to analyze the flows and discover ways to decrease internal complexity (i.e enterprise representations) and increase external complexity (i.e competitors’ representations).

That is to be achieved with the integration of operational analytics, business intelligence, and decision-making.

Seamless integration of operational analytics, business intelligence, and decision-making.


Business Agility & the OODA Loop

Preamble

The OODA (Observation, Orientation, Decision, Action) loop is a real-time decision-making paradigm developed in the sixties by Colonel John Boyd from his experience as a fighter pilot and military strategist.

How to get inside the opponent’s loop (László Moholy-Nagy)

The relevancy of OODA for today’s operational decision-making comes from the seamless integration of IT systems with business operations and the resulting merits of agile development processes.

Business: End of Discrete Time-Frames

Business governance used to be phased: analyze the market, select opportunities, build capabilities, launch operations. No more. With the melting of the fences between actual and symbolic realms, periodic transitional events have lost most of their relevancy. Deprived of discrete and robust time-frames, the weaving of observed facts with business plans has to be managed on the fly. Success now comes from continuous readiness, quicker tempo, and the ability to operate inside adversaries’ time-scales, for defense (forcing competitors out of favorable positions) as well as offense (getting a competitive edge). Hence the reference to dogfights.

Dogfights & Agile Primacy

John Boyd’s train of thought started with the observation that, despite the apparent superiority of the Soviet MiG-15 over the US F-86 during the Korean War, US fighters stood their ground. From that factual observation it took Boyd’s comprehensive engineering work to demonstrate that, as far as dogfights were concerned, fast transients between maneuvers (aka agility) mattered more than technical capabilities. Pushed up the Pentagon’s reluctant ladders by Boyd’s sturdy determination, that conclusion has had wide-ranging consequences in the design of USAF fighters and the training of pilots for the following generations. Its influence also spread to management, even if theories’ turnover is much faster there, and shelf-life much shorter.

Nowadays, with the accelerated integration of business processes with IT systems, agility is making a comeback from the software engineering corner. Reflecting business and IT convergence, principles like iterative development, just-in-time delivery, and lean processes, all epitomized by the agile software development model, are progressively mingling into business practices with strong resemblances to dogfights; and the resemblances are not only symbolic.

IT Systems & Business Competition

While some similarities between dogfights and business competition may seem metaphorical, one critical aspect is all too real, namely the increasing importance of supporting machines, IT systems or fighter jets.

Basically, IT systems, like fighters’ electronics, are tasked to observe environments, analyse changes in relation to position and objectives, and support decision-making. But today’s systems go further with two qualitative leaps:

  • The seamless integration of physical and symbolic flows lets systems manage some overlapping between supporting decisions and carrying out actions.
  • Due to their artificial intelligence capabilities, systems can learn on the job and improve their performance in real-time feedback loops.

When combined, these two trends have a drastic impact on the way machines can support human activities in real-time competitive situations. More to the point, they put a new light on business agility.

Business Agility

As illustrated by the radical transformation of fighter cockpits, the merging of analog and digital flows leaves little room for human mediation: data must be processed into information and presented instantly along two critical dimensions, one for decision-making, the other for information life-cycle:

  • Man/Machine interfaces have to materialize the merging of actual and symbolic realms so as to support just-in-time decision-making.
  • The replacement of phased selected updates of environment data by continuous changes in raw and massive data means that the status of information has to be incorporated with the information itself, yet without impairing decision-making.

Beyond obvious differences between dogfights and business competition, that twofold exigency is what characterizes business agility:

  1. Observation: understanding the nature, origin, and time-frame of changes in business environments (aka territories).
  2. Orientation: assessment of the reliability and shelf-life of pertaining information (aka maps) with regard to stakes and current positions and operations.
  3. Decision: weighting of options with regard to enterprise stakes and capabilities.
  4. Action: carrying out of decisions according to stakes and time-frames.

That understanding of business agility is to be compared with its development and architecture cousins. Yet it doesn’t seem to add much to data analytics and operational decision-making. That is until the concepts of observation and orientation are reassessed with regard to EA maps and territories.

Using the OODA blueprint to integrate business intelligence and operational decision-making into enterprise architecture.

Such integration is to become a primary success factor for enterprises immersed in digital environment.

Agility & Orientation: Task vs Tack

To begin with basics, the concept of Orientation comes with a twofold meaning, actual and symbolic:

  • Actual: a position with regard to external (e.g spatial) coordinates, possibly qualified with abilities to observe, move, or act.
  • Symbolic: a position with regard to internal (e.g beliefs or aims) references, possibly mixed with known or presumed orientation of other agents, opponents or associates.

When business is considered, data analytics is supposed to deal comprehensively and accurately with markets’ actual orientations. But the symbolic facet is left largely unexplored.

Boyd’s contribution is to bring together both aspects and combine them into actual practice, namely how to foretell the tack of your opponents from their actual tracks as well as their surmised plans, while fooling them about your own moves, actual or planned.

Such ambitions, once out of reach, can now be fulfilled due to the combination of big data, artificial intelligence, and the exponential growth of computing power.


Models as Parachutes

Preamble

The recent paralysis of British Airways’ worldwide operations (due to a power failure, if officials are to be believed), following the crash of Delta Airlines’ reservation system and a number of similar incidents, once again puts the spotlight on the reliability of large and critical IT systems.

Models as Parachutes (László Moholy-Nagy)

Particularly at risk are airline and banking systems whose seasoned infrastructures, at the cutting edge when introduced half a century ago, have been strained to their limits by waves of extensive networked new functionalities. Confronted with the magnitude and complexity of overall modernization, most enterprises have preferred piecemeal updates to architectural leaps. Such policies may bring some respite, but they may also turn into aggravating factors, increasing stakes and urgency as well as shortening odds.

Assuming some consensus about stakes, hazards, and options, the priority should be to overcome jumping fears by charting a reassuring perspective in continuity with the current situation. For that purpose models may provide heartening parachutes.

Models: Intents & Doubts

Models can serve two kinds of purposes:

  • Describe business contexts according to enterprise objectives, foretell evolution, and simulate policies.
  • Prescribe the architecture of supporting systems and the design of software components.

Business analysts figure maps from territories; software architects create territories from maps
Models Purposes: Describe contexts & concerns, Design supporting systems

Frameworks were supposed to combine the two perspectives, providing a comprehensive and robust basis for systems governance. But if prescriptive models do play a significant role in engineering processes, in particular for code generation, they are seldom fed by their descriptive counterparts.

Broadly speaking, the noncommittal attitude toward descriptive models comes from a rooted mistrust of non-executable models: as far as business analysts and software engineers are concerned, such models can only serve as documentary evidence. And since prescriptive models are by nature grounded in systems’ inner making, there is no secure conceptual apparatus linking systemic changes with their technical consequences. Hence the jumping frights.

Overcoming those frights could be achieved by showing the benefits of secure and soft landings.

Models for Secure Landings

As with any tools, models must be assessed with regard to their purpose: prescriptive ones with regard to the feasibility and reliability of architectures and design, descriptive ones with regard to correctness and consistency. As already noted, compared to what has been achieved for the former, nothing much has been done about the validity of the latter.

Yet, contrary to customary beliefs, the rigorous verification of descriptive (aka extensional) models is not a dead end. Of course these models can never be proven true because there is no finite scope against which they could be checked; but that doesn’t mean that nothing can be done to improve their reliability:

Models must be assessed with regard to their purpose
How to Check for secure landings

  • Correctness: How to verify that all the relevant individuals and features are taken into account. That can only be achieved empirically by building models open to falsification.
  • Consistency: How to verify that the symbolic descriptions (categories and connectors) are complete, coherent and non redundant across models and abstraction levels. That can be formally verified, as sketched after this list.
  • Alignment: How to verify that current and required business processes are to be seamlessly and effectively supported by systems architectures. That can be managed by introducing a level of indirection, as illustrated by MDA with platform independent models (PIMs) set between computation independent (CIMs) and platform specific (PSMs) ones.
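
By way of illustration only, a formal consistency check between abstraction levels could look like the following sketch; the model structures and the refinement relation are hypothetical:

```python
# Illustrative sketch of a consistency check between model layers
# (e.g. PIM elements refining CIM ones); structures are hypothetical.

def check_consistency(upper_model, lower_model, refinements):
    """Verify that every element of the lower model refines exactly one
    element of the upper model (no gaps, no redundancy)."""
    issues = []
    for element in sorted(lower_model):
        sources = [u for u in sorted(upper_model) if (u, element) in refinements]
        if not sources:
            issues.append(f"{element}: no counterpart in upper model")
        elif len(sources) > 1:
            issues.append(f"{element}: redundant mappings {sources}")
    return issues

cim = {"Customer", "Order"}                         # computation independent
pim = {"CustomerRecord", "OrderEntity", "Invoice"}  # platform independent
refines = {("Customer", "CustomerRecord"), ("Order", "OrderEntity")}
print(check_consistency(cim, pim, refines))  # flags 'Invoice' as unanchored
```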

Once established on secure grounds, models can be used to ensure soft landings.

Models for Soft Landings

Set within model based system engineering frameworks, models will help to replace piecemeal applications updates by seamless architectures modernization:

  • Systems: using models shift the focus of change from hardware to software.
  • Enterprise: models help to factor out the role of organization and regulations.
  • Project management: models provide the necessary hinge between agile and phased projects, the former for business-driven applications, the latter for architecture-oriented ones. Combining both approaches will ensure that lean and just-in-time processes are not sacrificed to systems modernization.

Seamless architectures modernization (a) vs Piecemeal applications updates (b).

More generally, and more importantly, models are the option of choice (if not the only one) for enterprise knowledge management:

  • Business: Computation independent models (CIMs), employed to trace, justify and rationalize business strategies and processes portfolios.
  • Systems: Platform specific models (PSMs), employed to trace, justify and rationalize technical alternatives and decisions.
  • Decision-making and learning: Platform independent models (PIMs), employed to align business and systems and support enterprise architecture governance.

And knowledge management is arguably the primary factor for successful comprehensive modernization.

Strategic Decision-making: Cash or Crash

Governance is all about risks and decision-making, but investing in truly fail-safe systems for airlines or air traffic control can be likened to a short bet on Armageddon, and that cannot be easily framed in a neat cost-benefit analysis. But that may be the very nature of strategic decision-making: not amenable to ROI but aiming at risk assessment and the development of policies apt to contain and manage those risks. That would be impossible without models.


Agile Collaboration & Enterprise Creativity

Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.


Trust & Communication (Juan Muñoz)

Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.

Collaboration & Thinking Flows

Collaboration is a means to an end. To be of any use, exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not, they may burn themselves out with hollow considerations blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.

Taking example from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.

Dunbar Numbers & Collaboration Basins

Studying the grooming habits of social primates, psychologist Robin Dunbar came to the conclusion that the size of the social circles that individuals of a species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.

Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:

  • On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
  • On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.

Knowledge Workflows

The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:

  • Data and information management: build the symbolic descriptions of contexts, concerns, and means.
  • Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
  • Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
  • Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …

Taking into account constraints and dependencies between the tasks, the aim would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g unfocused meetings).

With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.

With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.

At first sight some lessons could be drawn from lean manufacturing; yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.

Iterative Knowledge Processing

A simple preliminary step is to check the applicability of agile principles by replacing “software” with “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing:

  • Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
  • Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
  • Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.
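
A schematic rendering of that scheme could look like the following sketch; the process and decide hooks are hypothetical placeholders:

```python
# Purely illustrative sketch of an iterative knowledge-processing cycle;
# the 'process' and 'decide' hooks are hypothetical placeholders.

def knowledge_cycle(descriptions, data_stream, process, decide):
    """Iterate on a set of symbolic descriptions (cycle invariants),
    processing data into information until a decision commits changes."""
    state = dict(descriptions)          # contexts, concerns, and means
    for data in data_stream:            # iteration: data -> information
        information = process(data, state)
        decision = decide(information)  # knowledge workers in driving seats
        if decision:                    # exit condition: a commitment
            state.update(decision)      # changes entail adjusted descriptions
            return state
    return state
```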

That scheme meets three of the basic tenets of the agile paradigm, i.e open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in driving seats (through exit conditions). Yet it still doesn’t deal with creativity and the benefits of collaboration for knowledge workers.

Thinking Space & Pace

The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e the possibility to insert knowledge during the processing: external for material flows (e.g in manufacturing), internal for symbolic flows (e.g in software engineering and knowledge processing).

Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on-the-fly, they don’t grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep them open to new understandings and opportunities. For the former creativity is a means to an end; for the latter it’s the end in itself, with collaboration as the means.

Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms, backlogs and time-boxing:

  • Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
  • Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the contents and stride of their thinking streams.

It ensues that when creativity is the primary success factor, standard agile collaboration mechanisms fall short and intelligent collaboration schemes are to be introduced.

Creativity & Collaboration Tiers

The synchronization of creative activities has to deal with conflicting objectives:

  • On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
  • On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.

When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.

Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.

On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.

The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.

From Personal to Collective Thinking

The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers, e.g:

  • How to keep tabs on topics and contents to be safeguarded.
  • How to mediate (i.e filter and time) the solicitations and contributions issued by the social tier.
  • How to assess the solicitations and contributions issued by individuals.
  • How to assess and manage knowledge deemed to remain proprietary.
  • How to identify and manage knowledge workers’ personal and social circles.
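
One way to picture the mediation between tiers is a sketch along the following lines; the states and delivery rules are hypothetical:

```python
# Hypothetical sketch of mediation between collaboration tiers: social-tier
# solicitations are deferred or delivered according to the worker's state of mind.

from collections import deque

class TierMediator:
    def __init__(self):
        self.pending = deque()   # solicitations queued from the social tier
        self.state = "thinking"  # worker-controlled: "thinking" or "open"

    def solicit(self, message):
        if self.state == "open":
            return self.deliver(message)  # personal tier: immediate delivery
        self.pending.append(message)      # otherwise defer, don't interrupt

    def set_state(self, state):
        self.state = state
        while self.state == "open" and self.pending:
            self.deliver(self.pending.popleft())

    def deliver(self, message):
        print(f"delivered: {message}")

mediator = TierMediator()
mediator.solicit("review of shared model")  # deferred while thinking
mediator.set_state("open")                  # flushes pending solicitations
```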

Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc), taken as a whole they bring up the question of the relationship between personal and collective thinking, and as a corollary, the role of organization in nurturing corporate innovation.

Conclusion: Collaboration Spaces vs Panopticon

As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital realms into overlapping layers of collaboration spaces, using artificial intelligence to harness cooperation.

Yet, lest uniform and comprehensive transparency bring the worrying shadow of a panopticon within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.

That could be achieved with a layered transparency set along the nature of collaboration:

  • Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used indifferently by team members.
  • Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geolocation; spaces are hinged on domains and focused on shared knowledge.
  • On-line and networked: digital spaces merging physical spaces and organizational structures.

That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, locations, and organization.


Operational Intelligence & Decision Making

Preamble

According to a leading tools provider, operational intelligence (OI) is the ability to “discover and analyze relationships between business events and corresponding IT events”.

Operational Decision-making (Sigmar Polke)

From a marketing perspective, the moniker suggests some kind of cross-breeding between operational research, artificial intelligence, and real-time analytics. Yet, behind the vendor dressing, problems and policies remain the ones traditionally dealt with by decision-making and knowledge management; and as far as marketing is concerned, pitches will hardly affect the assessment of field professionals.

Nevertheless, functional pitches may have a deeper influence if they try to outline the aims of operational intelligence to the people directly involved, affecting the way problems are understood and dealt with. That may be the case if business and system events seem to be put on a par: overlooking the directed dependency between actual events and their system counterparts can critically hamper systems’ decision-making capabilities.

Facts, Data, & Information

The new connected world of human brains and smart things has scaled down space and time by orders of magnitude, up to the point that events seem to come out as soon as they happen, wherever that may be. Facts and updates that once came in as discrete and manageable batches of information are now bursting continuously and massively as seamless streams of data that have to be processed on-the-fly into information lest they be cannibalized by ambient noise. That new configuration blurs the distinction between operational data (pushed, shallow, transient) and underlying information (pulled, deep, persistent), making it unworkable, if not meaningless altogether.

Taking inventory decisions as an example, traditional schemes rely on periodic readings of actual inventories and sales crossed with market foresight. Now, with on-line sales and the internet of things, real-time data can be used to build on-the-fly indicators whose biases and inaccuracies are dynamically readjusted on the basis of information built on hindsight. At any given time (t), decision-makers will be presented with actual observations (a), initial estimations of previous observations (b1, b2), and revised estimations of previous observations (c).

At any given time (t), decision-makers are presented with actual observations (a), initial estimations of previous observations (b1, b2), and revised estimations of previous observations (c).
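
A toy rendering of that scheme (the figures and the revision rule below are made up for illustration):

```python
# Toy illustration: at time t decision-makers see the latest raw observation (a),
# on-the-fly initial estimates for recent periods (b1, b2), and estimates
# revised with hindsight (c). Figures and the revision rule are made up.

actuals = [100, 120, 90, 110]                # consolidated values, known with hindsight
initial = [x * 0.9 for x in actuals]         # on-the-fly estimates with a known 10% bias
revised = []
for t, estimate in enumerate(initial[:-1]):  # only past periods can be consolidated
    # with hindsight, blend the initial estimate with the consolidated value
    revised.append(0.5 * estimate + 0.5 * actuals[t])

print(actuals[-1], initial[-3:-1], revised)  # (a), (b1, b2), (c)
```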

Set along this framework, the debate about big data can be misleading as it puts the focus on the quantity of data feeding the processes, overlooking the process itself and the distinction between data, information, and knowledge.

Information, Knowledge, & Decision-making

Generally speaking, the distinction between data and information can be set with reference to time and context, data being instant and standalone, and information associated with a shelf life and domain. With regard to decision-making, it would mean that data can be directly used within the context of current activities and circumstances; e.g whereas on-line sales data may (or may not) be directly (i.e despite inaccuracies and biases) used to allocate inventories across depots, it has to be “mined” into consolidated information before being used in the broader perspective of inventory planning.

Compared to the transition between data and information, which is carried out by adding time and context, the one between information and knowledge is best understood in terms of decision-making.

Information is obtained by anchoring data to time-frames and contexts, knowledge is acquired by putting information to use.

Decisions are best defined as commitments set against some unknown circumstances: somebody, somewhere, or sometime. First, it ensues that decision-making calls for specific and timed information that has to be maintained up-to-date until decisions are taken. Then, taking decisions introduces some irreversible change in the state of affairs or expectations, potentially making all relevant information obsolete. So it may be argued that decisions are what transform information into knowledge.

Operational Intelligence: Objectives & Tools

Assuming decisions mark the nexus between information and knowledge, operational intelligence could be defined as the ability to put information to use, that ability being supported by the analysis of the relationships between business events and corresponding IT events.

Far from being academic, that distinction is essentially pragmatic as it marks the boundary between OI objectives and tools capabilities:

  • The aim of OI is to make sense (and profit) from the dynamic relationship between business (aka external) events on one hand, business objectives and enterprise capabilities on the other hand.
  • The role of supporting tools is to define and manage IT (aka internal) events used to reflect external ones and analyze them.

Whereas business events (red) represent change in the state of affairs, IT events (blue) only represent changes in associated information.

Since IT events are artifacts built on purpose, there isn’t much to discover or analyze about them; not to mention the fact that confusing business events with their IT shadows is bound to undermine the whole decision-making process. So what is at stake for OI is how to design IT events so as to timely and accurately trail the relevant business events.
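
One way to make that design concern concrete is a bitemporal record keeping the business event’s time apart from its IT registration time; a minimal sketch, not any specific product’s API:

```python
# Sketch of an IT event designed to trail its business counterpart:
# the business (external) time is recorded separately from the IT
# (internal) registration time, so lag and staleness can be assessed.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TrailingEvent:
    business_time: datetime   # when the change in the state of affairs occurred
    recorded_time: datetime   # when the system registered it
    payload: dict

    @property
    def lag(self):
        """Delay between the business event and its IT shadow."""
        return self.recorded_time - self.business_time

event = TrailingEvent(
    business_time=datetime(2017, 5, 27, 9, 30, tzinfo=timezone.utc),
    recorded_time=datetime(2017, 5, 27, 9, 42, tzinfo=timezone.utc),
    payload={"depot": "LHR", "delta": -12},
)
print(event.lag)  # 0:12:00
```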

Operational Intelligence & Actual Knowledge

As already noted, operational intelligence (OI) is about decision-making, which entails changing the state of objects, processes, or expectations. Compared to knowledge management (KM), which may or may not be time-related, OI is inherently bound to the actual state of affairs: on one hand it relies on specific and timed information, on the other hand it renders that information obsolete when it triggers decisions.

At the risk of oversimplification, operational intelligence can first be understood as a combination of traditional disciplines:

  • Data-mining filters facts and events, captures data, and analyzes it into information.
  • Knowledge management charts information with regard to business objectives and enterprise capabilities.
  • Decision-making manages time-stamps and plans commitments subject to accuracy and likelihood.

But the specificity of operational intelligence is to be found in the way these functions are intertwined and cross-fed by operational concerns.

To begin with, data mining can be dynamically adjusted depending on what is needed for decision-making, and when. As a corollary, with the benefits of data so cooked in advance, some decisions can be taken directly, bypassing the mediation (and delays) of information processing. From a cognitive point of view that would be the equivalent of non symbolic (aka implicit) knowledge processed by neural networks.

Decision-making and differentiated knowledge management

Conversely, information processing could benefit from operational feedback so that knowledge management would be driven by business value, and the supporting information weighted by timing and shelf-life considerations. Whereas part of it could be done through implicit connections, it would be more comprehensively and explicitly achieved through symbolic representations.

Operational Intelligence: Signals vs Symbols

Assuming that intelligence is the ability to figure out situations and solve problems, one may conclude that it is inherently operational. Along the same reasoning, if knowledge is information put to use, it may be implicit as well as explicit.

Nonetheless, the merit of operational intelligence is to bring symbolic and non symbolic knowledge under a single functional roof: the former explicit, using the mediation of semantic constructs, weighting information and supporting managed decisions; the latter implicit, using direct associations between actual objects or phenomena, and supporting automated decisions.


New Year: Regrets or Expectations ?

Missed or Expected ? (Zineb Sedira)

Do Calendars Still Matter ?

Whereas financial results are established on an annual basis, events nowadays unfold within a seamless space/time dimension:

  • Social networks put long-planned strategies at the mercy of consumers’ weekly whims, often unconcerned by borders or product intents.
  • Mining of big data may curtail innovative head-starts to a matter of weeks if not days.
  • On-line (not to mention high frequency) trading puts market sanctions at investors’ fingertips, and enterprises’ stocks at predators’ claws.

So why should enterprises bother with yearly schedules ?

Evolutionary Arms Race Doesn’t Wait for Saint Sylvester

Business is governed by the same rule as nature, namely the survival of the fittest, and as with biological ecosystems, enterprises’ individual fitness depends on their relationships with others. Hence, as suggested by Lewis Carroll’s Red Queen, the survival of enterprises in their evolutionary arms race doesn’t depend on their absolute speed set against some time-frame, but on the relative one set with regard to competitors. In that case Saint Sylvester could be ignored. Or could it ? Because seasons do play their part in biological races.

From Race to Game

As it happens, business races are becoming more complicated as extensive and ubiquitous information and communication systems redefine the traditional predator-prey casting. With time and space cut down to symbolic dimensions, collaboration and competition can no longer be safely allocated to time-spans and market segments, which entails that the roles of predator and prey can be upended on the spur of the moment and the turn of a switch.

Introducing this kind of option transforms tracks into playgrounds and races into games, because there is no point in running without knowing who to run against and who to run with. Adding to the challenge, these playgrounds are moving ones, and so are the rules of the game: one week a non-zero-sum game, the next a winner-takes-all. That is when calendars make a comeback.

Games come with Boards and Time Scales

Contrary to races, games have comprehensive and detailed rules, or regulations in business parlance. As soon as enterprises start to play with roles they have to meet conditions, follow procedures, and take into account specific constraints; all defined with regard to institutional spaces and calendar time, e.g:

  • Customers’ behavior is often seasonal.
  • Regulatory bodies rule within geographical borders and their decisions stand for calendar periods.
  • M&A have to align with stock markets schedules

So, assuming that institutional time-frames cannot be avoided when strategies are set, Saint Sylvester may provide a practical meter for yearly moves.

When to Board a Shuttle

Enterprises therefore have to set their decision-making with regard to the accelerating pulse of business events on one hand, and institutional time-boxes on the other hand. That corresponds to a typical shuttle scheme, with taking a decision being associated with boarding a shuttle.

Along that understanding, and assuming that taking a decision means closing alternative options, boarding a shuttle excludes some of the next one(s), and missing it may be beneficial as well as detrimental. That’s the reasoning behind the Last Responsible Moment principle which, when put to use for periodic decision-making, suggests that there can be as much risk in boarding a shuttle as in missing it.
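
As a back-of-the-envelope illustration (all figures below are made up), the Last Responsible Moment can be framed as the latest shuttle one can board while still leaving enough lead time before an institutional deadline:

```python
# Back-of-the-envelope: the Last Responsible Moment (LRM) is the latest
# shuttle one can board while still meeting the institutional deadline.
# All figures are made up for illustration.

deadline = 52                     # institutional time-box, in weeks
lead_time = 10                    # weeks needed to carry out the decision
shuttles = [8, 20, 32, 44, 50]    # periodic decision points, in weeks

feasible = [t for t in shuttles if t + lead_time <= deadline]
last_responsible = max(feasible)
print(last_responsible)  # 32: boarding at 44 or 50 would miss the deadline
```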


Detour from Turing Game

Summary

Considering Alan Turing’s question, “Can machines think ?”, could the distinction between communication and knowledge representation capabilities help to decide between human and machine ?

Alan Turing at 4

What happens when people interact ?

Conversations between people are meant to convey concrete, partial, and specific expectations. Assuming the use of a natural language, messages have to be mapped to the relevant symbolic representations of the respective cognitive contexts and intentions.

Conveying intentions

Assuming a difference in the way this is carried on by people and machines, could that difference be observed at message level ?

Communication vs Representation Semantics

To begin with, languages serve two different purposes: to exchange messages between agents, and to convey informational contents. As illustrated by the difference between humans and other primates, communication (e.g alarm calls directly and immediately bound to an imminent menace) can be carried out independently of knowledge representation (e.g information related to a danger not directly observable); in other words, linguistic capabilities for communication and symbolic representation can be set apart. That distinction may help to differentiate people from machines.

Communication Capabilities

Exchanging messages makes use of five categories of information:

  • Identification of participants (Who): can be set independently of their actual identity or type (human or machine).
  • Nature of message (What): the contents exchanged (object, information, request, …) are not contingent on participants’ type.
  • Life-span of message (When): the life-cycle (instant, limited, unlimited, …) is not contingent on participants’ type.
  • Location of participants (Where): the type of address space (physical, virtual, organizational, …) is not contingent on participants’ type.
  • Communication channels (How): except for direct (unmediated) human conversations, the use of channels for non-direct (distant, physical or otherwise) communication is not contingent on participants’ type.
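
These categories can be made tangible with a message envelope whose fields say nothing about the participants being human or artificial; an illustrative structure, not a standard:

```python
# Illustrative message envelope: none of the five categories of
# information discloses whether participants are humans or machines.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str        # Who: identification, independent of actual type
    receiver: str      # Who: same
    contents: object   # What: object, information, request, ...
    life_span: str     # When: instant, limited, unlimited, ...
    address: str       # Where: physical, virtual, organizational space
    channel: str       # How: mediated channel, human or machine alike

msg = Message("agent-a", "agent-b", {"request": "quote"},
              life_span="limited", address="virtual", channel="messaging")
```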

Setting apart the trivial case of direct human conversation, it ensues that communication capabilities are not enough to discriminate between human and artificial participants.

Knowledge Representation Capabilities

Taking a leaf from Davis, Shrobe, and Szolovits, knowledge representation can be characterized by five capabilities:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or what can be done with them.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one of KR’s prerequisites is to improve the communication between specific domain experts on one hand, and generic knowledge managers on the other hand.

On that basis knowledge representation capabilities cannot be used to discriminate between human and artificial participants.

Returning to Turing Test

Even if neither communication nor knowledge representation capabilities, on their own, suffice to decide between human and machine, their combination may do the trick. That could be achieved with questions like:

  • Who do you know: machines can only know previous participants.
  • What do you know: machines can only know what they have been told, directly or indirectly (learning).
  • When did/will you know: machines can only use their own clock or refer to time-spans set by past or planned transactional events.
  • Where did/will you know: machines can only know of locations identified by past or planned communications.
  • How do you know: contrary to humans, intelligent machines are, at least theoretically, able to trace back their learning process.

Hence, given adequately scripted scenarios, it would be possible to build decision models able to provide unambiguous answers.
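
By way of illustration, such a decision model could be sketched as a simple rule set; the predicates and answers below are hypothetical, for illustration only:

```python
# Hypothetical sketch of a decision model built on the five questions;
# each check matches an answer against the machine-only limitations above.

def looks_like_machine(answers):
    """Return True if every answer fits the machine-only profile."""
    checks = [
        answers["who"] == "previous participants only",
        answers["what"] == "told or learned only",
        answers["when"] == "own clock or transactional events",
        answers["where"] == "known communication locations",
        answers["how"] == "traceable learning process",
    ]
    return all(checks)

print(looks_like_machine({
    "who": "previous participants only",
    "what": "told or learned only",
    "when": "own clock or transactional events",
    "where": "known communication locations",
    "how": "traceable learning process",
}))  # True
```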

Reference Readings

A. M. Turing, “Computing Machinery and Intelligence”

Davis R., Shrobe H., Szolovits P., “What is a Knowledge Representation?”
