Repeated announcements of a looming software apocalypse may take some edge off vigilance, but repeated systems failures should be taken seriously, if only because they appear to be rooted in a wide array of causes, from wrongly valued single parameters (e.g. the 911 threshold or Apple’s free pass for “root” users) to architecture obsolescence (e.g. reservation systems).
Yet, if alarms are not to be ignored, prognoses should go beyond symptoms and remedies beyond sticking plasters: contrary to what is suggested by The Atlantic’s article, systems are much more than piles of code, and programming is probably where quality has been taken the most seriously.
Programs vs Systems
Whatever programmers’ creativity and expertise, they cannot tackle complexity across space, time, and languages: today’s systems are made of distributed interacting components, specified and coded in different languages, and deployed and modified across overlapping time-frames. Even without taking into account continuous improvements in quality, the apocalypse looms not in the particulars of code but in the ways code makes its way into the world.
Solutions should therefore be sought at system level, a conclusion that can only be bolstered by the ubiquity of digitized business flows.
Systems are the New Babel
As illustrated by the windfalls benefiting COBOL old-timers, language is arguably a critical factor, for the maintenance of legacy programs as well as for communication between stakeholders, users, and engineers.
So if problems can be traced back to languages, that is where solutions are to be found: from programming languages (for code) to natural ones (for systems requirements), everything can be specified as symbolic representations, i.e. models.
Model in the Loop
Models are generally understood as abstractions, and often avoided for that very reason. That shortsighted mind-set is countered by concrete uses of abstractions, as illustrated by the automotive industry and the way it embeds models in engineering processes.
In summary, the automotive Model in the Loop (MiL) approach can be explained through three basic ideas:
Systems are to be understood as the combination of physical and software artifacts.
Insofar as both can be digitized, they can be uniformly described as models.
As a consequence, analysis, design, and engineering can be carried out by iteratively building, simulating, testing, and adjusting various combinations of hardware and software.
By bringing together physical components and code into a seamless digitized whole, MiL narrows the conceptual gap between actual elements and symbolic representations, aka models. And that leap could be generalized to a much wider range of systems.
Models are the New Code
Programming habits and the constraints imposed by the maintenance of legacy systems have perpetuated the traditional understanding of systems as a building-up of programs; hence the focus put on code quality. But when large, distributed, and perennial systems are concerned, that bottom-up mind-set falls short and brings about:
An exponential increase of complexity at system level.
Opacity and discontinued traceability at application level between current use and legacy code.
Both flaws could be corrected by combining top-down modeling and bottom-up engineering. That could be achieved with iterative processes carried out from both directions.
Model in the Loop meets Enterprise Architecture
From a formal perspective models are of two sorts: extensional ones operate bottom-up and associate sets of individuals with categories; intensional ones operate top-down and specify the features meant to be shared by all instances of a type. Based on that understanding, the former can be used to simulate the behaviors of targeted individuals depending on categories, and the latter to prescribe how to create instances of types meant to implement categories.
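The distinction can be sketched in code; the following Python fragment is a hypothetical illustration (all names and features are made up): the extensional model classifies observed individuals into categories bottom-up, while the intensional one prescribes top-down what any instance of a type must provide.

```python
# Illustrative sketch (all names hypothetical): extensional vs intensional models.

# Extensional model: bottom-up, associates observed individuals with categories.
observed_vehicles = [
    {"id": "v1", "wheels": 2, "motorized": True},
    {"id": "v2", "wheels": 4, "motorized": True},
    {"id": "v3", "wheels": 2, "motorized": False},
]

def categorize(individual):
    """Assign a category from observed features (simulation-oriented)."""
    if individual["wheels"] == 2:
        return "motorcycle" if individual["motorized"] else "bicycle"
    return "car"

extension = {v["id"]: categorize(v) for v in observed_vehicles}

# Intensional model: top-down, specifies features shared by all instances of a type.
class Car:
    """Prescribes what every instance must provide (prescription-oriented)."""
    wheels = 4
    motorized = True

    def __init__(self, vin):
        self.vin = vin

new_car = Car(vin="WDB123")  # created from the type, not observed
```

The extensional scheme only knows what has been observed; the intensional one can mint new instances that were never observed at all.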
As it happens, Model in the Loop combines the two schemes at component level:
Any combination of manual and automated solutions can be used as a starting point for analysis and simulation (a).
Given the outcomes of simulation and tests, the architecture is revisited (b) and corresponding artifacts (software and hardware) are designed (c).
The new combination of artifacts is developed and integrated, ready for another round of analysis and simulation (d).
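The four steps (a)–(d) can be sketched as a simple iterative loop; this is a minimal Python illustration under made-up assumptions (the scoring metric and the convergence target are purely hypothetical):

```python
# Minimal sketch of a Model-in-the-Loop iteration (steps a-d from the text).
# All names and the convergence criterion are hypothetical.

def simulate(combination):
    """(a) Analyze and simulate the current mix of hardware and software models."""
    # stand-in metric: fewer manual parts -> better score
    return 1.0 - combination["manual_parts"] / combination["total_parts"]

def revisit_architecture(combination):
    """(b) Adjust the architecture given simulation outcomes."""
    revised = dict(combination)
    if revised["manual_parts"] > 0:
        revised["manual_parts"] -= 1  # automate one more component
    return revised

def design_and_integrate(combination):
    """(c, d) Design the corresponding artifacts and integrate the new combination."""
    return combination  # placeholder: artifacts are assumed built as specified

combination = {"manual_parts": 3, "total_parts": 10}
score = simulate(combination)
while score < 0.9:                       # iterate until the target is met
    combination = revisit_architecture(combination)
    combination = design_and_integrate(combination)
    score = simulate(combination)
```

The point of the sketch is the shape of the loop, not the metric: simulation outcomes feed architecture revisions, which feed new artifacts, which feed the next simulation.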
Assuming that MiL’s bottom-up approach could be tallied with top-down systems engineering processes, it would enable a seamless and continuous integration of changes in software components and systems architectures.
All too often Enterprise Architecture (EA) is planned as a big bang project to be carried out step by step until completion. That understanding is misguided as it confuses EA with IT systems and implies that enterprises could change their architectures as if they were apparel.
But enterprise architectures are part and parcel of enterprises, a combination of culture, organization, and systems; whatever the changes, they must keep the continuity, integrity, and consistency of the whole. To that end, architecture capabilities can be charted along five basic questions:
Who: enterprise roles, system users, platform entry points.
What: business objects, symbolic representations, objects implementation.
How: business logic, system applications, software components.
When: processes synchronization, communication architecture, communication mechanisms.
Where: business sites, systems locations, platform resources.
These capabilities are set across architecture layers and support business, engineering, and operational processes.
Enterprise architects are to continuously assess and improve these capabilities with regard to current weaknesses (organizational bottlenecks, technical debt) or future developments (new business, M&A, new technologies).
Given the increased dependencies between business, engineering, and operations, defining EA workflows in terms of work units defined bottom-up from capabilities is to provide clear benefits with regard to EA versatility and plasticity.
Contrary to top-down (aka activity-based) ones, bottom-up schemes don’t rely on one-size-fits-all procedures; as a consequence work units can be directly defined by capabilities and therefore mapped to engineering workshops:
Moreover, dependency constraints can be directly defined as declarative assertions attached to capabilities and managed dynamically instead of having to be hard-wired into phased processes.
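As a sketch of that idea, dependency constraints can be attached to capabilities as declarative assertions and evaluated dynamically; the capability names below echo the five questions above, but the dependencies themselves are hypothetical:

```python
# Hypothetical sketch: dependency constraints as declarative assertions attached
# to capabilities, evaluated dynamically instead of hard-wired into phases.

capabilities = {
    "what":  {"requires": set()},
    "how":   {"requires": {"what"}},          # business logic needs business objects
    "where": {"requires": set()},
    "when":  {"requires": {"how", "where"}},  # synchronization needs both
}

def ready(done):
    """Work units whose declared dependencies are all satisfied."""
    return {name for name, spec in capabilities.items()
            if name not in done and spec["requires"] <= done}

# Dynamic scheduling: at each step, any ready unit may be picked up by a team.
done, schedule = set(), []
while len(done) < len(capabilities):
    batch = ready(done)
    schedule.append(sorted(batch))
    done |= batch
```

Nothing in the assertions prescribes an order of phases; the sequencing emerges at run time from whatever dependencies happen to be declared.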
That approach is to ensure two agile conditions critical for the development of architectural features:
Shared ownership: lest the whole enterprise be paralyzed by decision-making procedures, work units must be carried out under the sole responsibility of project teams.
Continuous delivery: architecture-driven developments are by nature transverse, but the delivery of building blocks cannot be put on hold pending the agreement of all parties concerned; instead it should be decoupled from integration.
Enterprise architecture projects could then be organized as a merry-go-round of capabilities-based work units to be set up, developed, and delivered according to needs and time-frames.
Enterprise architecture is about governance more than engineering. As such it has to ensure continuity and consistency between business objectives and strategies on one side, engineering resources and projects on the other side.
Assuming that capability-based work units will do the job for internal dependencies (application contents and engineering), the problem is to deal with external ones (business objectives and enterprise organization) without introducing phased processes. Beyond differences in monikers, such dependencies can generally be classified along three reasoned categories:
Operational: whatever can be observed and acted upon within a given envelope of assets and capabilities.
Tactical: whatever can be observed and acted upon by adjusting assets, resources and organization without altering the business plans and anticipations.
Strategic: decisions regarding assets, resources and organization contingent on anticipations regarding business environments.
The role of enterprise architects will then be to manage the deployment of updated architecture capabilities according to their respective time-frames.
As noted before, EA workflows by nature can seldom be carried out in isolation as they are meant to deal with functional features across business domains. Instead, a portfolio of architecture (as opposed to development) work units should be managed according to their time-frame, the nature of their objective, and the kind of models to be used:
Strategic features affect the concepts defining business objectives and processes. The corresponding business objects and processes are primarily defined with descriptive models; changes will have cascading effects for engineering and operations.
Tactical features affect the definition of artifacts, logical or physical. The corresponding engineering processes are primarily defined with prescriptive models; changes are to affect operational features but not the strategic ones.
Operational features affect the deployment of resources, logical or physical. The corresponding processes are primarily defined with predictive models derived from descriptive ones; changes are not meant to affect strategic or tactical features.
Architectural projects could then be managed as a dynamic backlog of self-contained work units continuously added (a) or delivered (b).
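Such a backlog could be sketched as follows (a hypothetical Python illustration; the work units are made up, and the mapping of feature kinds to model kinds follows the three bullets above):

```python
# Illustrative sketch (hypothetical names): a dynamic backlog of self-contained
# work units tagged by feature kind and the kind of model primarily used.

MODEL_KIND = {
    "strategic":   "descriptive",   # business objects and processes
    "tactical":    "prescriptive",  # engineering of artifacts
    "operational": "predictive",    # deployment of resources
}

backlog = []

def add_unit(name, kind):                      # (a) continuously added
    backlog.append({"name": name, "kind": kind, "model": MODEL_KIND[kind]})

def deliver_next(kind):                        # (b) delivered according to needs
    for i, unit in enumerate(backlog):
        if unit["kind"] == kind:
            return backlog.pop(i)
    return None

add_unit("redefine customer object", "strategic")
add_unit("redesign payment service", "tactical")
delivered = deliver_next("tactical")
```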
Enterprise governance has to face combined changes in the way business times and spaces are to be taken into account. On the one hand, social networks put well-thought-out market segments and well-planned campaigns at the mercy of consumers’ weekly whims. On the other hand, traditional fences between environments and IT systems are crumbling under combined market and technological waves.
So, despite (or because of) the exponential ability of intelligent systems to learn from circumstances, enterprise governance cannot cope with such dynamic complexities without a reliable compass set with regard to key primary factors: time-frames of concerns, control of processes, and administration of artifacts.
Concerns & Time-frames
Confronted with massive and continuous waves of stochastic data flows, the priority is to position external events and decision-making with regard to business and assets time-frames:
Business value is to be driven by market opportunities which cannot be coerced into predefined fixed time-frames.
Assets management is governed by continuity and consistency constraints on enterprise identity, objectives, and investments along time.
Enterprises, once understood as standalone entities, must now be redefined as living organisms in continuous adaptation with their environment. Governance schemes must therefore be broadened to business environments and layered so as to take into account the duality of time-frames: operational for business value, strategic for assets.
Control of processes and administration of artifacts can then be defined accordingly.
Time & Control: Processes
Architectures being by nature shared and persistent, their layers are meant to reflect different time-frames, from operational cycles to long-term assets:
At enterprise level the role of architectures is to integrate shared assets and align various objectives set along different time-frames. At this level it’s safe to assume some cross dependencies between processes, which would call for phased governance.
By contrast, business units are meant to be defined as self-governing entities pursuing specific objectives within their own time-frames. From a competitive perspective, market opportunities and competitors’ moves are best assumed unpredictable, and processes best governed by circumstances.
Processes can then be defined vertically (business or systems) as well as horizontally (enterprise architecture or application development), and governance set accordingly:
At enterprise level processes are phased: stakeholders and architects plan and manage the development and deployment of assets (organization and systems).
At business-unit level processes are lean and just-in-time: business analysts and software engineers design and develop applications supporting users’ needs as defined by user stories or use cases.
Models are then to be introduced to describe shared assets (organization and systems) across the enterprise. They may also support business analysis and software engineering.
Carrying on with the four corners of the governance square:
Business analysts are to align users’ narratives (concrete) with the business plots (blueprints) set by stakeholders.
Software engineers are to design applications (concrete) in line with systems functional architectures (blueprints).
As for the overlapping of business and development time-frames, the direct mapping between concrete business and system corners (e.g. through agile development) is to facilitate the governance of integrated actual and numeric flows across business and systems.
Conclusion: A Compass for Enterprise Architects
Behind turf perimeters and job descriptions, the roles and responsibilities involved in enterprise architecture can be summarized by four drives:
Business analysts (bottom left): define business processes with regard to broader objectives and engineering efficiency.
Software engineers (bottom right): maximize the value for users and the quality of applications.
Systems architects (top right): dynamically align systems with regard to business models and engineering processes.
Whereas roles and responsibilities will generally differ depending on enterprise environment, business, and culture, such a compass would ensure that the governance of enterprise architectures hinges on reliable pillars and is driven by clear principles.
As already noted, the seamless integration of business processes and IT systems may bring new relevancy to the OODA (Observation, Orientation, Decision, Action) loop, a real-time decision-making paradigm originally developed by Colonel John Boyd for USAF fighter jets.
Of particular interest for today’s business operational decision-making is the orientation step, i.e. the actual positioning of actors and the associated cognitive representations; the point being to use AI deep-learning capabilities to surmise opponents’ plans and misdirect their anticipations. That new dimension and its focus on information bring back cybernetics as a tool for enterprise governance.
In the Loop: OODA & Information Processing
Whatever the topic (engineering, business, or architecture), the concept of agility cannot be understood without defining some supporting context. For OODA that would include: territories (markets) for observations (data); maps for orientation (analytics); business objectives for decisions; and supporting systems for action.
One step further, contexts may be readily matched with systems description:
Business contexts (territories) for observations.
Models of business objects (maps) for orientation.
Business logic (objectives) for decisions.
Business processes (supporting systems) for action.
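That mapping can be summarized as a bare-bones OODA cycle; the Python sketch below is purely illustrative, with made-up events, maps, and rules standing in for actual business content:

```python
# A bare-bones OODA cycle mapped to the systems descriptions above
# (hypothetical names; the "business" content is a stand-in).

def observe(context):
    """Observation: collect events from the business context (territory)."""
    return context["events"]

def orient(events, model):
    """Orientation: interpret events through models of business objects (maps)."""
    return [model.get(e, "unknown") for e in events]

def decide(situations, rules):
    """Decision: apply business logic (objectives) to the oriented situation."""
    return [rules[s] for s in situations if s in rules]

def act(decisions, process):
    """Action: carry out decisions through business processes (systems)."""
    process.extend(decisions)
    return process

context = {"events": ["price_drop", "new_entrant"]}
model = {"price_drop": "margin_threat", "new_entrant": "share_threat"}
rules = {"margin_threat": "adjust_pricing", "share_threat": "launch_campaign"}

actions = act(decide(orient(observe(context), model), rules), process=[])
```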
That provides a unified description of the different aspects of business agility, from the OODA loop and operations to architectures and engineering.
Architectures & Business Agility
Once the contexts are identified, agility in the OODA loop will depend on architecture consistency, plasticity, and versatility.
Architecture consistency (left) is supposed to be achieved by systems engineering outside the OODA loop:
Technical architecture: alignment of actual systems and territories (red) so that actions and observations can be kept congruent.
Software architecture: alignment of symbolic maps and objectives (blue) so that orientation and decisions can be continuously adjusted.
Functional architecture (right) is to bridge the gap between technical and software architectures and provide for operational coupling.
Operational coupling depends on functional architecture and is carried out within the OODA loop. The challenge is to change tack on-the-fly with minimal friction between actual and symbolic contexts, i.e.:
Discrepancies between business objects (maps and orientation) and business contexts (territories and observation).
Divergence between business logic (objectives and decisions) and business processes (systems and actions).
Taking a leaf from thermodynamics, cybernetics defines entropy as a measure of the (supposedly negative) variation in the value of the information supporting the control of viable systems.
With regard to corporate governance and operational decision-making, entropy arises from faults between environments and symbolic surrogates, either for objects (misleading orientations from actual observations) or activities (unforeseen consequences of decisions when carried out as actions).
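As a loose illustration of that use of entropy, Shannon’s measure can serve as a proxy for the disorder of an enterprise’s symbolic representations: the sharper the distribution of beliefs about the state of affairs, the lower the entropy (the figures below are made up):

```python
# Loose illustration using Shannon entropy as a proxy for the "disorder" of a
# representation: a sharper (less uniform) distribution of beliefs about the
# state of affairs carries less entropy. Figures are made up.
import math

def shannon_entropy(probabilities):
    """H = -sum(p * log2 p), in bits, ignoring zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# An enterprise's beliefs about which of four market states holds:
confused_view = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty
informed_view = [0.85, 0.05, 0.05, 0.05]   # observations sharpened the picture

h_confused = shannon_entropy(confused_view)   # 2.0 bits
h_informed = shannon_entropy(informed_view)   # below 1 bit
```

In the cybernetic reading sketched by the text, exporting entropy would mean driving one’s own representations toward the sharp distribution while pushing competitors’ toward the uniform one.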
While much has been written about how data analytics and operational decision-making can be neatly and easily fitted into the OODA paradigm, particular attention is to be paid to orientation.
As noted before, the concept of Orientation comes with a twofold meaning, actual and symbolic:
Actual: the positioning of an agent with regard to external (e.g. spatial) coordinates, possibly qualified with the agent’s abilities to observe, move, or act.
Symbolic: the positioning of an agent with regard to its own internal (e.g. beliefs or aims) references, possibly mixed with the known or presumed orientation of other agents, opponents or associates.
That dual understanding underlines the importance of symbolic representations in getting competitive edges, either directly through accurate and up-to-date orientation, or indirectly by inducing opponents’ disorientation.
Agility vs Entropy
Competition in networked digital markets is carried out at enterprise gates, which puts the OODA loop at the nexus of information flows. As a corollary, what is at stake is not limited to immediate business gains but extends to corporate knowledge and enterprise governance; translated into cybernetics parlance, a competitive edge would depend on enterprise ability to export entropy, that is to decrease confusion and disorder inside, and increase it outside.
Working on that assumption, one should first characterize the flows of information to be considered:
Territories and observations: identification of business objects and events, collection and analysis of associated data.
Maps and orientations: structured and consistent description of business domains.
Objectives and decisions: structured and consistent description of business activities and rules.
Systems and actions: business processes and capabilities of supporting systems.
Then, a static assessment of information flows would start with the standing of technical and software architecture with regard to competition:
Technical architecture: how the alignment of operations and resources facilitates actions and observations.
Software architecture: how the combined descriptions of business objects and logic facilitate orientation and decision.
A dynamic assessment would be carried out within the OODA loop and deal with the role of functional architecture in support of operational coupling:
How the mapping of territories’ identities and features helps observation and orientation.
How decision-making and the realization of business objectives are supported by processes’ designs.
Assuming a corporate cousin of Maxwell’s demon with deep-learning capabilities standing at the gates in its OODA loop, its job would be to analyze the flows and discover ways to decrease internal complexity (i.e. enterprise representations) and increase external complexity (i.e. competitors’ representations).
That is to be achieved with the integration of operational analytics, business intelligence, and decision-making.
The OODA (Observation, Orientation, Decision, Action) loop is a real-time decision-making paradigm developed in the sixties by Colonel John Boyd from his experience as a fighter pilot and military strategist.
The relevancy of OODA for today’s operational decision-making comes from the seamless integration of IT systems with business operations and the resulting merits of agile development processes.
Business: End of Discrete Time-Frames
Business governance used to be phased: analyze the market, select opportunities, build capabilities, launch operations. No more. With the melting of the fences between actual and symbolic realms, periodic transitional events have lost most of their relevancy. Deprived of discrete and robust time-frames, the weaving of observed facts with business plans has to be managed on the fly. Success now comes from continuous readiness, quicker tempo, and the ability to operate inside adversaries’ time-scales, for defense (forcing competitors out of favorable positions) as well as offense (getting a competitive edge). Hence the reference to dogfights.
Dogfights & Agile Primacy
John Boyd’s train of thought started with the observation that, despite the apparent superiority of the Soviet MiG-15 over the US F-86 during the Korean War, US fighters stood their ground. From that factual observation it took Boyd’s comprehensive engineering work to demonstrate that, as far as dogfights were concerned, fast transients between maneuvers (aka agility) were more important than technical capabilities. Pushed up the Pentagon’s reluctant ladders by Boyd’s sturdy determination, that conclusion has had wide-ranging consequences in the design of USAF fighters and pilot training for the following generations. Its influence also spread to management, even if theories’ turnover is much faster there, and shelf-life much shorter.
Nowadays, with the accelerated integration of business processes with IT systems, agility is making a comeback from the software engineering corner. Reflecting business and IT convergence, principles like iterative development, just-in-time delivery, and lean processes, all epitomized by the agile software development model, are progressively mingling into business practices with strong resemblances to dogfights; and the resemblances are not only symbolic.
IT Systems & Business Competition
While some similarities between dogfights and business competition may seem metaphorical, one critical aspect is all too real, namely the increasing importance of supporting machines, IT systems or fighter jets.
Basically, IT systems, like fighters’ electronics, are tasked to observe environments, analyze changes in relation to position and objectives, and support decision-making. But today’s systems go further with two qualitative leaps:
The seamless integration of physical and symbolic flows lets systems manage some overlapping between supporting decisions and carrying out actions.
Due to their artificial intelligence capabilities, systems can learn on the job and improve their performance in real-time feedback loops.
When combined, these two trends have a drastic impact on the way machines can support human activities in real-time competitive situations. More to the point, they bring new light on business agility.
As illustrated by the radical transformation of fighter cockpits, the merging of analog and digital flows leaves little room for human mediation: data must be processed into information and presented instantly along two critical dimensions, one for decision-making, the other for information life-cycle:
Man/Machine interfaces have to materialize the merging of actual and symbolic realms so as to support just-in-time decision-making.
The replacement of phased selected updates of environment data by continuous changes in raw and massive data means that the status of information has to be incorporated with the information itself, yet without impairing decision-making.
Beyond obvious differences between dogfights and business competition, that twofold requirement is to characterize business agility:
Observation: understanding the nature, origin, and time-frame of changes in business environments (aka territories).
Orientation: assessment of the reliability and shelf-life of the pertinent information (aka maps) with regard to stakes and current positions and operations.
Decision: weighting of options with regard to enterprise stakes and capabilities.
Action: carrying out of decisions according to stakes and time-frames.
That understanding of business agility is to be compared with its development and architecture cousins. Yet it doesn’t seem to add much to data analytics and operational decision-making. That is until the concepts of observation and orientation are reassessed with regard to EA maps and territories.
Using the OODA blueprint to integrate business intelligence and operational decision-making into enterprise architecture.
To begin with basics, the concept of Orientation comes with a twofold meaning, actual and symbolic:
Actual: a position with regard to external (e.g. spatial) coordinates, possibly qualified with abilities to observe, move, or act.
Symbolic: a position with regard to internal (e.g. beliefs or aims) references, possibly mixed with the known or presumed orientation of other agents, opponents or associates.
When business is considered, data analytics is supposed to deal comprehensively and accurately with markets’ actual orientations. But the symbolic facet is left largely unexplored.
Boyd’s contribution is to bring together both aspects and combine them into actual practice, namely how to foretell the tack of your opponents from their actual tracks as well as their surmised plans, while fooling them about your own moves, actual or planned.
Such ambitions, once out of reach, can now be fulfilled thanks to the combination of big data, artificial intelligence, and the exponential growth of computing power.
As Aristotle noted some time ago, plots are the backbone of any story as they uphold the causal sequence of events and actions: they provide the “why” of what happens, compared to narratives, which tell “how” what happened is being told.
So, in principle, plots deal with possibilities and narratives with realizations. But in fact plots remain unknown until being narrated; in other words fictions are like Schrödinger’s cat: there is no way to set possibilities and realizations apart.
That literary conundrum may convey some useful clues for business analysis, with stakeholders’ objectives seen as plots, and users’ stories as narratives.
Stakeholders’ Plots vs Users’ Narratives
With regard to the functionalities of supporting systems, a key issue for business analysts is to accommodate specific and/or short-term opportunities identified by business units with broader and long-standing objectives defined at corporate level.
Using the fictional metaphor, business expectations can be charted in terms of plots and narratives:
Business objectives (as plots) are meant to apply continuously and consistently to different agents, different concerns, and different contexts. As such they are best defined as rules and constraints (declarative schemes).
Users’ stories (as narratives) are supposed to translate as soon as possible into business transactions. As such they are best defined as sequences of operations governed by users’ choices (procedural schemes).
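The contrast between the two schemes can be sketched with a hypothetical example: the same business expectation expressed first as a rule (plot) and then as a scripted story (narrative); all names and figures are illustrative:

```python
# Hypothetical sketch: the same business expectation expressed declaratively
# (a plot-like rule) and procedurally (a narrative-like user story).

# Declarative scheme: a constraint meant to hold for any agent, any context.
def credit_rule(order):
    """Business objective as a rule: no order may exceed the customer's credit."""
    return order["amount"] <= order["customer_credit"]

# Procedural scheme: a scripted sequence of operations driven by a user's choices.
def place_order_story(amount, customer_credit, confirm):
    """User story: check out a cart, triggered by the user pressing 'order'."""
    order = {"amount": amount, "customer_credit": customer_credit}
    if not credit_rule(order):          # the narrative follows the plot's path
        return "rejected"
    if not confirm:                     # user-governed choice point
        return "abandoned"
    return "confirmed"

outcome = place_order_story(amount=80, customer_credit=100, confirm=True)
```

The rule says nothing about sequencing or user choices; the story is nothing but sequencing and user choices, constrained by the rule.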
Then, just like narratives are meant to carry out the plots, users’ stories are supposed to follow the paths set by business objectives. But if confusion is to be avoided between strategic orientations, regulatory directives, and opportunist moves, the walk of business objectives and the talk of users’ stories should be termed differently.
Business Objectives (Plots): Symbolic & Allochronic
The definition of business objectives has to find its terms between the Charybdis of abstractions and the Scylla of specific business processes, the former to be avoided because they are by nature detached from reality and only make sense with regard to models, the latter because they would be too specific and restrictive. In-between, business objectives would be best defined through:
Strategic and financial objectives expressed using symbolic categories applied to environments, products, and resources.
Modal time-frames identified in reference to events and qualified by assumptions with regard to symbolic categories.
Business functions to be optimized given a set of constraints.
These could be comprehensively and consistently expressed with declarative languages.
Users’ Stories (Narratives): Actual & Contemporaneous
Users’ stories are at their best when tied to specific circumstances and purposes without being led away by modeling concerns. As narratives they should stick to agents, triggering events, and scripted sequences of options, operations, and outcomes:
Compared to the symbolic categories used for business objectives, users’ stories should refer to actual subsets of objects and events defined in contexts.
Contrary to the modal time-frames of business objectives, the scripts of users’ stories must be fully timed with regard to their triggering events.
That can only be expressed as procedures.
From Fiction to Artifacts: Aligning Business Objectives & Enterprise Architectures
Likening business analysis to its distant literary kin goes beyond the metaphor as it points to a practical organization of business objectives and users’ stories.
And the benefits of the distinction between declarative (for business plots) and procedural (for users’ narratives) blueprints are not limited to business analysis but can be extended to systems architecture (as plots) and software design (as narratives). On that basis, declarative schemes could be applied to business functions and architecture capabilities, and procedural ones to users’ stories (or use cases) and software design.
Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.
Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.
Collaboration & Thinking Flows
Collaboration is a means to an end. To be of any use exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not they may burn themselves out with hollow considerations blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.
Taking a cue from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.
Dunbar Numbers & Collaboration Basins
Studying the grooming habits of social primates, psychologist Robin Dunbar came to the conclusion that the size of the social circles individuals of a species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.
Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:
On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.
The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:
Data and information management: build the symbolic descriptions of contexts, concerns, and means.
Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …
Taking into account constraints and dependencies between the tasks, the aim would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g unfocused meetings).
With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.
With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.
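The three automation groups can be sketched as a simple routing rule. The criteria used below (whether a task commits a decision, whether it needs judgment) are assumptions introduced for the example; the three groups themselves come from the text.

```python
from enum import Enum

class Automation(Enum):
    FULL = "raw processing power only"
    ASSISTED = "smart systems provide some intelligence"
    HUMAN = "decision-making by entitled human agents"

def classify(task: dict) -> Automation:
    """Route a task to an automation group from two hypothetical flags."""
    if task.get("commits_decision"):   # changes the state of affairs
        return Automation.HUMAN
    if task.get("needs_judgment"):     # interpretation, not just processing
        return Automation.ASSISTED
    return Automation.FULL
```

The ordering of the checks encodes the governance constraint: anything that commits a change is kept with human agents, whatever intelligence smart systems could supply.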
At first sight some lessons could be drawn from lean manufacturing, yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.
Iterative Knowledge Processing
A simple preliminary step is to check the applicability of agile principles by replacing “software” by “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing:
Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.
That scheme meets three of the basic tenets of the agile paradigm, i.e open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in the driving seat (through exit conditions). Yet it still doesn't deal with creativity and the benefits of collaboration for knowledge workers.
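The cycle described above (fixed invariants, iterative processing of data into information, an exit condition that commits changes) can be sketched as a loop. All names here are illustrative; the structure follows the three bullet points, not any actual method.

```python
def knowledge_cycle(descriptions, data_stream, decide):
    """One cycle: iterate over incoming data against fixed symbolic
    descriptions (the cycle invariants) until a decision commits
    changes, then return the updated descriptions."""
    information = []
    for data in data_stream:                            # iteration content
        info = {"data": data, "context": descriptions}  # data -> information
        information.append(info)
        decision = decide(information)                  # exit condition
        if decision is not None:
            # commit: changes in the state of affairs also entail
            # adjustments in the symbolic descriptions
            descriptions = {**descriptions, **decision}
            break
    return descriptions, information
```

Note that the descriptions stay invariant inside the loop and are only updated at exit, which is precisely what makes them usable as cycle invariants.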
Thinking Space & Pace
The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e whether knowledge can be inserted during the processing: externally for material flows (e.g in manufacturing), internally for symbolic flows (e.g in software engineering and knowledge processing).
Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on-the-fly, they don't grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep them open to new understandings and opportunities. For the former creativity is the means to an end, for the latter it's the end in itself, with collaboration as means.
Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms, backlogs and time-boxing:
Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the contents and stride of their thinking streams.
It ensues that when creativity is the primary success factor, standard agile collaboration mechanisms fall short and more intelligent collaboration schemes are to be introduced.
Creativity & Collaboration Tiers
The synchronization of creative activities has to deal with conflicting objectives:
On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.
When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.
Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.
On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.
The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.
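The two-tier scheme can be sketched as a routing rule on a worker's side: first-tier (personal) exchanges go through only when the worker has opened themselves to interruption, while second-tier (mediated) ones are queued for reading on the worker's own initiative. This is an assumed illustration; the class, flags, and messages are hypothetical.

```python
from collections import deque

class KnowledgeWorker:
    def __init__(self, name: str):
        self.name = name
        self.open_to_interruption = False  # worker-controlled "state of mind"
        self.inbox = deque()               # deferred, mediated messages

    def receive(self, message: str, tier: str):
        """Engage first-tier messages only on acceptance; defer the rest."""
        if tier == "personal" and self.open_to_interruption:
            return f"{self.name} engages: {message}"
        # Second-tier (or declined) messages don't interrupt the thinking
        # stream; they wait to be read on the worker's own initiative.
        self.inbox.append((tier, message))
        return None
```

The key design point is that the `open_to_interruption` flag belongs to the worker, not to the sender or the organization, which is what keeps individuals in control of the tempo of their thinking flows.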
From Personal to Collective Thinking
The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers, e.g:
How to keep tabs on topics and contents to be safeguarded.
How to mediate (i.e filter and time) the solicitations and contributions issued by the social tier.
How to assess the solicitations and contributions issued by individuals.
How to assess and manage knowledge deemed to remain proprietary.
How to identify and manage knowledge workers' personal and social circles.
Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc.), taken as a whole they bring up the question of the relationship between personal and collective thinking and, as a corollary, the role of organization in nurturing corporate innovation.
Conclusion: Collaboration Spaces vs Panopticon
As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital spaces into overlapping layers of collaboration, using artificial intelligence to harness cooperation.
Yet, lest uniform and comprehensive transparency bring the worrying shadow of a panopticon within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.
That could be achieved with a layered transparency set along the nature of collaboration:
Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used interchangeably by team members.
Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geo-localization; spaces are hinged on domains and focused on shared knowledge.
On-line and networked: digital spaces merging physical spaces and organizational structures.
That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, location, and organization.
At enterprise level agility can be understood as a mix of versatility and plasticity, the former an attribute of function, the latter of form:
Versatility: enterprise ability to adapt business processes to changing environments without having to change architectures.
Plasticity: enterprise ability to change architectures without affecting business processes.
Combining versatility and plasticity requires a comprehensive and consistent view of assets (architectures) and modus operandi (processes) organized with regard to change. And that can be achieved with model based systems engineering (MBSE).
MBSE & Change
Agility is all about change, and if enterprise governance is not to be thrown aside decision-making has to be supported by knowledgeable descriptions of enterprise objectives, assets, and organization.
If change management is to be the primary objective, targets must be classified along two main distinctions:
Actual (business context and organization) or symbolic (information systems).
Objects (business entities or system surrogates) or activities (business processes or logic).
Those two axes determine four settings supporting transparency and traceability:
Dependencies between operational and structural elements.
Dependencies between actual assets and processes and their symbolic representation as systems surrogates.
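The four settings fall out of the cartesian product of the two axes, which can be made explicit in a short sketch. The enum names are illustrative, not an MBSE standard; the axis labels come from the text.

```python
from enum import Enum
from itertools import product

class Realm(Enum):
    ACTUAL = "business context and organization"
    SYMBOLIC = "information systems"

class Nature(Enum):
    OBJECTS = "business entities or system surrogates"
    ACTIVITIES = "business processes or logic"

# The four settings supporting transparency and traceability.
SETTINGS = {(realm, nature) for realm, nature in product(Realm, Nature)}
```

Making the product explicit is what allows changes and alignments to be tracked per setting rather than across an undifferentiated repository.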
Versatility and plasticity will be obtained by managing changes and alignments between settings.
Changes & Alignments
Looking for versatility, changes in users’ requirements must be rapidly taken into account by applications (changes from actual to symbolic).
Looking for plasticity, changes in business objectives are meant to be supported by enterprise capabilities (changes from operational to structural).
The challenge is to ensure that both threads can be woven together into business functions and realized by services (assuming a service oriented architecture).
With the benefits of MBSE, that could be carried out through a threefold alignment:
At users level the objective is to ensure that applications are consistent with business logic and provide the expected quality of service. That is what requirements traceability is meant to achieve.
At system level the objective is to ensure that business functions and features can be directly mapped to systems functionalities. That is what service oriented architectures (SOA) are meant to achieve.
At enterprise level the objective is to ensure that the enterprise capabilities are congruent with its business objectives, i.e that they support its business processes through an effective use of assets. That is what maturity and capability models are meant to achieve.
That would make agility a concrete endeavor across the enterprise, from business users and applications to business processes and architecture capabilities.