As with every measurement, software engineering metrics must be defined by clear targets and purposes, and using them should affect neither.
On that account, a clear distinction should be maintained between business value (set independently of supporting systems), the size and complexity of functionalities, and the work effort needed for their development. As far as systems are concerned, the Function Points approach can be defined with regard to the nature of requirements (business or system) and their scope (primary for artifacts, adjustment for architecture):
Measures of business requirements are based on intrinsic domain complexity (domain function points, or DFP), adjusted for activities (adjustment function points, or AFP); they are set at artifact level, independently of operational constraints or supporting systems.
Business requirements metrics are added up and adjusted for operational constraints.
Functional requirements measures target the subset of business requirements meant to be supported by systems; as such they are best defined at use case level (use case function points, or UCFP).
Metrics for quality of service may be specific to functionalities or contingent on architectures and operational constraints.
Whatever the difficulties of implementation, function points remain the only principled approach to software and systems assessment, and consequently to reliable engineering cost/benefit analysis and planning.
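By way of illustration, the sketch below expresses that layering in Python; the weights, factors, and function signatures are hypothetical placeholders for the example, not calibrated counting rules.

```python
# A minimal sketch of the layered function-point scheme outlined above.
# Weights and adjustment factors are illustrative placeholders, not the
# calibrated values a counting manual would prescribe.

def domain_fp(entities: int, relations: int, weight: float = 1.0) -> float:
    """Domain function points (DFP): intrinsic domain complexity."""
    return weight * (entities + relations)

def adjusted_fp(dfp: float, activity_factor: float) -> float:
    """Adjustment function points (AFP): DFP adjusted for activities."""
    return dfp * activity_factor

def use_case_fp(afp_per_artifact: list[float], coverage: float) -> float:
    """Use case function points (UCFP): the subset of business
    requirements actually supported by systems (0 <= coverage <= 1)."""
    return coverage * sum(afp_per_artifact)

# Example: two artifacts, one use case covering 60% of the requirements.
artifacts = [adjusted_fp(domain_fp(5, 3), 1.2), adjusted_fp(domain_fp(2, 1), 0.9)]
print(use_case_fp(artifacts, coverage=0.6))
```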
The seamless integration of enterprise systems into digital business environments calls for a resetting of value chains with regard to enterprise architectures, and more specifically supporting assets.
Concerning value chains, the traditional distinction between primary and supporting activities is undermined by the generalization of digital flows, rapid changes in business environments, and the ubiquity of software agents. As for assets, the distinction could even disappear due to the intertwining of tangible resources with organization, information, and knowledge.
These difficulties could be overcome by bypassing activities and drawing value chains directly between business processes and systems capabilities.
From Activities to Processes
In theory value chains are meant to track the path of added value across enterprise architectures; in practice their relevance is contingent on specificity: fine when set along silos, less so when set across business functions. Moreover, value chains tied to static mappings of primary and support activities risk losing their grip when maps are redrawn, which is bound to happen more frequently with digitized business environments.
These shortcomings can be fixed by replacing primary activities with processes and support ones with system capabilities, and redefining value chains accordingly.
From Processes to Functions & Capabilities
Replacing primary and support activities with processes and functions doesn’t remove value chains’ primary issue, namely their path along orthogonal dimensions.
That’s not to say that business processes cannot be aligned with self-contained value chains, but insofar as large and complex enterprises are concerned, value chains are to be set across business functions. Thus the benefit of resetting the issue at enterprise architecture level.
Borrowing the EA description from the Zachman framework, the mapping of processes to capabilities is meant to be carried out through functions, with business processes on one hand and architecture capabilities on the other.
While nothing can be assumed about the number of functions or the number of crossed processes, EA primary capabilities can be clearly identified, and functions classified accordingly, e.g.: boundaries, control, entities, computation. That classification (non-exclusive, as symbolized by the crossed pentagons) coincides with the nature of adjustments induced by changes in business environments:
Diversity and flexibility are to be expected for interfaces to systems’ clients (users, devices, or other systems) and triggering events, so as to tally with channels and changes in business and technology environments.
Continuity is critical for the identification and semantics of business objects whose consistency and integrity have to be maintained along time independently of users and processes.
In between, changes in process control and business logic should be governed by business opportunities, independently of channels or platforms.
Processes, primary or otherwise, would be sliced according to the nature of supporting capabilities, e.g.: standalone (a), real-time (b), client-server (c), orchestration service (d), business rules (e), DB access (f).
Value chains could then be attached to business processes along these functional guidelines.
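As a sketch of those guidelines, the snippet below maps the slices (a)-(f) to the capability categories named above; the slice-to-capability mapping is only a plausible reading offered for illustration, not a normative one.

```python
# A minimal sketch of the functional classification above (boundaries,
# control, entities, computation) applied to process slices (a)-(f).
from enum import Enum, auto

class Capability(Enum):
    BOUNDARY = auto()      # interfaces to users, devices, other systems
    CONTROL = auto()       # processes execution & orchestration
    ENTITY = auto()        # persistent business objects
    COMPUTATION = auto()   # business logic & rules

SLICES = {
    "a (standalone)":            {Capability.BOUNDARY, Capability.COMPUTATION},
    "b (real-time)":             {Capability.BOUNDARY, Capability.CONTROL},
    "c (client-server)":         {Capability.BOUNDARY, Capability.ENTITY},
    "d (orchestration service)": {Capability.CONTROL},
    "e (business rules)":        {Capability.COMPUTATION},
    "f (DB access)":             {Capability.ENTITY},
}
for s, caps in SLICES.items():
    print(s, sorted(c.name for c in caps))
```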
Tying Value Chains to Processes
Bypassing activities is not without consequences for the meaning of value chains, as the original static understanding is replaced by a dynamic one: since value chains are now associated with specific operations, they are better understood as changes rather than absolute levels. That semantic shift reflects the new business environment, with manufacturing and physical flows having been replaced by mixed (software and hardware) engineering and digital flows.
Set in a broader economic perspective, the new value chains could be likened to a marginal version of return on capital (ROC), i.e. the delta of some ratio between value and contributing assets.
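Denoting business value by V and contributing assets by A (symbols introduced here only for illustration), that marginal reading could be written as:

$$\Delta ROC \;=\; \frac{V + \Delta V}{A + \Delta A} \;-\; \frac{V}{A}$$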
Digital business environments may also make value chains easier to assess, as changes can be directly traced to requirements at enterprise level and more accurately marked across systems functionalities:
Logical interfaces (users or systems): business value tied to interactions with people or other systems.
Physical interfaces (devices): business value tied to real-time interactions.
Business logic: business value tied to rules and computations.
Information architecture: business value tied to systems information contents.
Processes architecture: business value tied to processes integration.
Platform configurations: business value tied to resources deployed.
The next step is to frame value chains across enterprise architectures in order to map values to contributing assets.
Assets & Organization
Value chains are arguably of limited use without weighing assets’ contributions. On that account, a major (if underrated) consequence of digital environments is the increasing weight of intangible assets, brought about by the merge of actual and information flows and the rising importance of economic intelligence.
For value chains, that shift presents a double challenge: first because of the intrinsic difficulty of measuring intangibles, then because even formerly tangible assets are losing their homogeneity.
Redefining value chains at enterprise architecture level may help with the assessment of intangibles by bringing all assets, tangible or otherwise, into a common frame, reinstating organization as its nexus:
From the business perspective, that framing restates the primacy of organization for the harnessing of IT benefits.
From the architecture perspective, the centrality of organization appears when assets are ranked according to modality: symbolic (e.g. culture), physical (e.g. platforms), or a combination of both.
On that basis enterprise organization can be characterized by what it supports (above) and how it is supported (below). Given the generalization of digital environments and business flows, one could then take organization and information systems as proxies for the whole of enterprise architecture and draw value chains accordingly.
Value Chains & Assets
Trendy monikers may differ, but information architectures have become a key success factor for enterprises competing in digital environments. Their importance comes from their ability to combine three basic functions:
Mining the continuous flows of relevant and up-to-date data.
Analyzing and transforming data, feeding the outcome to information systems.
Putting that information to use in operational and strategic decision-making processes.
A twofold momentum is behind that integration: with regard to feasibility, it can be seen as a collateral benefit of the integration of actual and digital flows; with regard to opportunity, it can give a decisive competitive edge when fittingly carried through. That makes information architecture a reference of choice for intangible assets.
Insofar as enterprise architecture is concerned, value chains can then be threaded through three categories of assets.
Digital environments and the ubiquity of software in business processes introduce a new perspective on value chains and the assessment of supporting applications.
At the same time, since software designs cannot be detached from architecture capabilities, the central question remains of allocating costs and benefits between primary and support activities.
Value Chains & Activities
The concept of value chain introduced by Porter in 1985 is meant to encompass the set of activities contributing to the delivery of a valuable product or service for the market.
Building on Porter’s generic model, various value chains have been refined according to business-specific categories of primary and support activities.
Whatever their merits, these approaches are essentially static and fall short when the objective is to trace changes induced by business developments; and that flaw may become critical with the generalization of digital business environments:
Given the role and ubiquity of software components (not to mention smart ones), predefined categories are of little use for impact analysis.
When changes in value chains are considered, the shift of corporate governance towards enterprise architecture puts the focus on assets contribution, cutting down the relevance of activities.
Hence the need to take into account changes, software development, and enterprise architecture capabilities.
Value Added & Software Development
While the growing interest in value chains within software engineering is tied to agile approaches and business-driven developments, the issue can be put in the broader perspective of project planning.
With regard to assessment, stakeholders start with business opportunities and look at supporting systems from a black-box perspective; in return, software providers analyze requirements from a white-box perspective and estimate the corresponding development effort and delivery time.
Assuming transparency and good faith, both parties are meant to eventually align expectations and commitments with regard to features, prices, and delivery.
With regard to policies, stakeholders put the focus on return on investment (ROI), obtained from total cost of ownership, quality of service, and timely delivery. Providers for their part try to minimize development costs while taking into account effective use of resources and opportunity costs. As it happens, those objectives may be pursued as non-zero-sum games:
Business stakeholders forecast the discounted returns (a) to be expected from the functionalities under consideration (b).
Providers consider the solutions (b’) and estimate the discounted costs (a’).
Stakeholders and providers agree on functionalities, prices and deliveries (c).
Assuming that business and engineering environments are set within different time-frames, there should be room for non-zero-sum games winding up with win-win adjustments on features, delivery, and prices.
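That reading lends itself to a toy computation; in the sketch below the labels (a), (b), (a’), (b’), (c) follow the text, while the discount rates and cash flows are made-up figures meant only to show how differing time-frames leave room for a win-win price.

```python
# A toy illustration of the non-zero-sum adjustment sketched above.
# All figures are invented for the example.

def present_value(flows: list[float], rate: float) -> float:
    """Discount a series of periodic cash flows at the given rate."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

# (a) stakeholders' expected returns from functionalities (b), discounted
# at a business rate; (a') provider's costs for solution (b'), discounted
# at an engineering rate.
returns = present_value([100, 120, 140], rate=0.10)   # stakeholders' view
costs = present_value([90, 60, 30], rate=0.05)        # provider's view

# (c) any price between costs and returns leaves a positive surplus on
# both sides of the customer/provider divide.
price = (returns + costs) / 2
print(f"stakeholder surplus: {returns - price:.1f}, provider margin: {price - costs:.1f}")
```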
Continuous vs Phased Alignments
Notwithstanding the constraints of strategic planning, business processes are by nature opportunistic, and their ability to be adjusted to circumstances is becoming all the more critical with the generalization of digital business environments.
Broadly speaking, the alignment of supporting applications with business value can be done continuously or in phases:
Phased alignments start with some written agreements with regard to features, delivery, and prices before proceeding with development phases.
Continuous alignment relies on direct collaboration and iterative development to shape applications according to business needs.
Beyond sectarian controversies, each approach has its use:
Continuous schemes are clearly better at harnessing value chains, provided that project teams are allowed full project ownership, with decision-making freed from external dependencies or delivery constraints.
Phased schemes are necessary when value chains cannot be uniquely sourced, as they take root in different organizational units, or if deliveries are contingent on technical constraints.
In any case, it’s not a black-and-white alternative as work units and projects’ granularity can be aligned with differentiated expectations and commitments.
Work Units & Architecture Capabilities
While continuous and phased approaches are often opposed under the guise of Agile vs Waterfall, that understanding is misguided, as it extends the former to a motley of self-appointed agile schemes and reduces the latter to an ill-famed archetype.
Instead, a reasoned selection of development models should be contingent on the problems at hand, and that can best be achieved by defining work units bottom-up with regard to the capabilities targeted by requirements:
Development patterns could then be defined with regard to architecture layers (organization and business, systems functionalities, platform implementations) and capabilities footprint:
Phased: work units are aligned with architecture capabilities, e.g.: business objects (a), business logic (b), business processes (c), user interfaces (d).
Iterative: work units are set across capabilities and defined dynamically according to development problems.
That would provide a development framework supporting the assessment of iterative as well as phased projects, paving the way for comprehensive and integrated impact analysis.
Value Chains & Architecture Capabilities
As far as software engineering is concerned, the issue is less the value chain itself than its change, namely how value is to be added along the chain.
To summarize, the transition to this layout is carried out in two steps:
Conceptually, Zachman’s original “Why” column is translated into a line running across column capabilities.
Graphically, the five remaining columns are replaced by embedded pentagons, one for each architecture layer, with the new “Why” line set as an outer layer linking business value to architectures capabilities:
That apparently humdrum transformation entails a significant shift in focus and practicality:
The focus is put on organizational and business objectives, masking those associated with the systems and platforms layers.
It makes room for differentiated granularity in the analysis of value, some items being anchored to specific capabilities, others involving cross dependencies.
Value chains can then be charted from business processes to supporting architectures, with software applications in between.
As pointed above, the crumbling of traditional fences and the integration of enterprise architectures into digital environments undermine the traditional distinction between primary and support activities.
To be sure, business drive is more than ever the defining factor for primary activities; and computing more than ever the archetype of supporting ones. But in between the once clear-cut distinctions are being blurred by a maze of digital exchanges.
In order to avoid a spaghetti heap of undistinguished connections, value chains are to be “colored” according to the nature of links:
Between architecture capabilities: business and organization (enterprise), systems functionalities, or platforms and technologies.
Between architecture layers: engineering processes.
When set within that framework, value chains could be navigated in both directions:
For the assessment of applications developed iteratively: business value could be compared to development costs and architecture assets’ depreciation.
For the assessment of features (functional or non functional) to be shared across business applications: value chains will provide a principled basis for standard accounting schemes.
Combined with model-based systems engineering, that could significantly enhance the integration of enterprise architecture into corporate governance.
Computation independent models (CIMs) describe organization and business processes independently of the role played by supporting systems.
Platform independent models (PIMs) describe the functionalities supported by systems independently of their implementation.
Platform specific models (PSMs) describe systems components depending on platforms and technologies.
Engineering processes can then be phased along architecture layers (a), or carried out iteratively for each application (b).
When set across activities, value chains could be engraved in CIMs and refined with PIMs and PSMs (a). Otherwise, i.e. with business value neatly rooted in single business units, value chains could remain implicit along software development (b).
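By way of illustration, a minimal sketch of that layering; the CIM/PIM/PSM acronyms are the standard MDA ones, but the classes and traversals below are assumptions made for the example.

```python
# A minimal sketch of the MDA layering described above.
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    CIM = "organization & business processes"   # independent of systems
    PIM = "systems functionalities"             # independent of platforms
    PSM = "platform-specific components"

@dataclass
class Model:
    name: str
    layer: Layer

# (a) phased: engineering proceeds layer by layer across the architecture.
def phased(models: list[Model]) -> list[Model]:
    return sorted(models, key=lambda m: list(Layer).index(m.layer))

# (b) iterative: each application carries its own CIM -> PIM -> PSM thread.
applications = {
    "billing": [Model("billing process", Layer.CIM),
                Model("billing services", Layer.PIM),
                Model("billing deployment", Layer.PSM)],
}
for app, models in applications.items():
    print(app, [m.layer.name for m in phased(models)])
```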
Models are meant to characterize categories of given (descriptive models), designed (prescriptive models), or hypothetical (predictive models) individuals.
Insofar as purposes can be kept apart, the discrepancies in targeted individuals can be ironed out, notably by using power-types. Otherwise, e.g. if modeling concerns mix business analysis with software engineering, meta-models are introduced as jacks-of-all-trades to deal with mixed semantics.
But meta-models generate exponential complexity when used across domains, not to mention the open and fuzzy ones of business intelligence. What is at stake can be better understood through the way individuals are identified and represented.
Partitions & Abstraction
Since models are meant to classify instances with regard to concerns, mixing concerns entails mixed classifications, horizontally across domains (e.g. business and accounting), or vertically along engineering cycles (e.g. business and engineering).
That can be achieved with power-types, meta-classes, or ontologies.
Delegation & Power-types
Given that categories (or classes, or types) represent sets of instances (given, designed, or simulated), they may themselves be regarded as symbolic instances used to manage features commonly valued by their own members.
This approach is consistent as well as effective provided the semantics and identities of instances are shared by all agents concerned, e.g. business and technical aspects of car rentals. Yet it falls short on both accounts when abstraction levels are set across domains, inducing connectors with different semantics and increased complexity.
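The delegation pattern can be sketched with the car rental example used above; class and attribute names are illustrative assumptions.

```python
# A minimal sketch of the power-type pattern: categories (car models) are
# themselves instances of a class (CarModel) that manages features commonly
# valued by their members (individual cars).
from dataclasses import dataclass

@dataclass
class CarModel:            # power-type: its instances are categories
    name: str
    daily_rate: float      # feature valued at category level

@dataclass
class Car:                 # base type: its instances are actual vehicles
    vin: str               # identification shared by all agents concerned
    model: CarModel        # delegation link to the power-type instance

clio = CarModel("Clio", daily_rate=40.0)
car = Car(vin="VF1XXXXXXXX", model=clio)
print(car.vin, car.model.name, car.model.daily_rate)
```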
Abstraction & Sub-classes
Sub-classes often appear as a way to overcome the difficulty, as illustrated by Hepp Research’s Vehicle Sales Ontology: despite being set at different abstraction levels, instances for cars and models are defined and identified uniformly.
While, in that case, the model fulfills the substitution principle for external consistency, sub-types would fall short if engineering concerns were taken into account, because the set of individual car models could then differ depending on the perspective.
Meta-classes & Stereotypes
The overuse of meta-classes and stereotypes epitomizes the escapism school of modeling. That may be understandable, if not helpful, for the former, which is by nature a free pass to abstraction; less so for the latter, which is supposed to go the other way, toward the specialization of meta-classes according to specific profiles. It follows that stereotypes should never be used on their own or be extended by another stereotype.
As it happens, such consistency concerns appear to be easily diluted when stereotypes are jumbled with meta constructs, e.g.:
Abstract stereotype (a).
Strong (aka class) inheritance of abstract stereotype from concrete meta-class (b).
Weak inheritance (aka aspect) between stereotypes (c, f).
Meta-constraint used to map Vehicle to methodology stereotype (d).
Domain specific connector to stereotype (e).
Without comprehensive and consistent semantics for instances and abstractions, individuals cannot be solidly mapped to models:
Individual concepts (car, horse, boat, …) have no clear mooring.
Actual vehicles cannot be tied to the meta-class or the abstract stereotype.
There is no strong inheritance tying individual models and rental cars to an identification mechanism.
Assuming that detachment is not an option, the basis of models must be reset.
Ontologies & Open Concepts
Put in simple words, ontologies are meant to examine the nature and categories of existence, in general (metaphysics) or in specific contexts. As for the latter, they can be applied to individuals according to the nature of their existence (aka epistemic identity): concepts, documents, categories and aspects.
As it happens, sorting things out makes the whole paraphernalia of meta-classes and stereotypes no longer needed because the semantics of inheritance and associations can be set according to the nature of individuals.
Then, individuals could be used to map models to past and future, the former for refactoring, the latter for business intelligence.
Refactoring looks to the past but is frustrated by undocumented legacy code. Combined with machine learning, individuals could help to bridge the gap between code and models.
Business intelligence looks the other way, as it is meant to map hypothetical business objects and behaviors to the structures and semantics already managed by information systems. In a reversal of the legacy case, individuals could provide cues to new business meanings.
Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.
One would assume that competition would impel enterprises to scrutiny with regard to resources employed and product outcomes, pushing for the assessment of internal activities based on some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.
Difficulties of Oversight
Setting apart creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.
To begin with rationality: unlike traditional products, there is no market pricing mechanism that could match software development costs with customers’ value. As a consequence, business stakeholders and systems engineers prefer to play safe and keep their respective assessments on opposite banks of the customer/provider divide.
As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g. users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.
Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.
Perils of Oversight
Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.
As far as enterprises are concerned, economics use prices for two key purposes, external and internal.
With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.
With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:
Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.
To summarize, the spread and intricacy of the software footprint over both sides of the crumbling fences between enterprise systems and business environments make software economics a necessary component of enterprise governance; counting the software beans should not be optional.
As far as standards go, the more there are, the less they’re worth.
What have we got
Assuming that modeling languages are meant to build abstractions, one would expect their respective ladders converging somewhere up in some conceptual or meta cloud.
Assuming that standards are meant to introduce similarities into diversity, one would expect clear-cut taxonomies to be applied to artifacts designs.
Instead one will find a bounty of committees, bloated specifications, and an open-minded if clumsy language confronted with a number of specific ones.
What is missing
Given the constitutive role of mathematical logic in computing systems, its quasi-absence from the methods used to model their functional behavior is dumbfounding. Formal logic, set theory, semiotics, you name it: every aspect of systems modeling can rely on a well-established corpus of concepts and constructs. And yet, while these scientific assets may be used in labs for research purposes, they remain overlooked for any practical use; as if laser technology had been kept out of consumer markets for almost a century.
What should be done
The current state of affairs can be illustrated by a Horse vs Zebra metaphor: the former with a long and proved track record of varied, effective and practical usages, the latter with almost nothing to its credit except its archetypal idiosyncrasy.
Like horses, logic can be harnessed or saddled to serve a wide range of purposes without losing anything of its universality. By contrast, concurrent standards and modeling languages can be likened to zebras: they may be of some use to their owners, but from an outward perspective, what remains is their distinctive stripes.
So the way out of the conundrum seems obvious: get rid of the stripes and put back the harness of logic on all the modeling horses.
What can be done
Frameworks are meant to promote consensus and establish clear and well circumscribed common ground; but the whopping range of OMG’s profiles and frameworks doesn’t argue in favor of meta-models.
Ontologies by contrast are built according to the semantics of domains and concerns. So whereas meta-models have to mix lexical, syntactic, and semantic constructs, ontologies can be built on well delineated layers:
A clear-cut distinction between truth-preserving representation and domain specific semantics.
Profiled ontologies designed according to the nature of contents (concepts, documents, or artifacts), layers (environment, enterprise, systems, platforms), and contexts (institutional, professional, corporate, social).
A Protégé/OWL 2 implementation ensures portability and interoperability. A beta version is available for comments on the Stanford/Protégé portal under the name Caminao Ontological Kernel (CaKe).
Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.
Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.
Collaboration & Thinking Flows
Collaboration is a means to an end. To be of any use, exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not, they may burn themselves out with hollow considerations, blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.
Taking example from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.
Dunbar Numbers & Collaboration Basins
Studying the grooming habits of social primates, psychologist Robin Dunbar came to the conclusion that the size of the social circles that individuals of a species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.
Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:
On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.
The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:
Data and information management: build the symbolic descriptions of contexts, concerns, and means.
Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …
Taking into account constraints and dependencies between the tasks, the aim would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g. unfocused meetings).
With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.
With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.
At first sight some lessons could be drawn from lean manufacturing, yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.
Iterative Knowledge Processing
A simple preliminary step is to check the applicability of agile principles by replacing “software” with “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing:
Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.
That scheme meets three of the basic tenets of the agile paradigm, i.e. open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in the driving seat (through exit conditions). Yet it still doesn’t deal with creativity and the benefits of collaboration for knowledge workers.
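Before turning to creativity, the cycle itself can be sketched as follows; the callables are hypothetical stand-ins for actual KM tasks, not a prescribed API.

```python
# A minimal sketch of the iterative scheme above: cycle invariants,
# iteration contents, and exit condition mapped to plain Python.

def knowledge_cycle(descriptions: dict, observe, explore, decide) -> dict:
    """Iterate on a given set of symbolic descriptions (cycle invariant)
    until a decision commits changes in the state of affairs (exit)."""
    while True:
        data = observe(descriptions)           # monitor contexts & concerns
        options = explore(descriptions, data)  # process data into information
        decision = decide(options)             # knowledge workers in the driving seat
        if decision is not None:               # exit condition
            return {**descriptions, **decision}

# Trivial demonstration with placeholder tasks.
state = knowledge_cycle(
    {"concern": "churn"},
    observe=lambda d: ["new data"],
    explore=lambda d, data: ["option A"],
    decide=lambda opts: {"decision": opts[0]},
)
print(state)
```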
Thinking Space & Pace
The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e. the possibility of inserting knowledge during the processing: external for material flows (e.g. in manufacturing), internal for symbolic flows (e.g. in software engineering and knowledge processing).
Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on the fly, they don’t grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep them open to new understandings and opportunities. For the former, creativity is a means to an end; for the latter, it’s the end in itself, with collaboration as the means.
Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms, backlogs and time-boxing:
Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the contents and stride of their thinking streams.
It follows that when creativity is the primary success factor, standard agile collaboration mechanisms fall short, and more intelligent collaboration schemes are to be introduced.
Creativity & Collaboration Tiers
The synchronization of creative activities has to deal with conflicting objectives:
On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.
When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.
Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.
On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.
The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.
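A toy sketch of that two-tiered mediation is given below, assuming a deliberately simplistic “state of mind” flag and invented message labels.

```python
# First-tier (personal) messages get through only when the worker is open
# to interruption; second-tier (social) solicitations are buffered until
# the worker switches tiers on their own initiative.
from collections import deque

class KnowledgeWorker:
    def __init__(self, name: str):
        self.name = name
        self.open_to_interruption = False     # deep-work mode by default
        self.social_inbox: deque[str] = deque()

    def receive(self, message: str, tier: str) -> None:
        if tier == "personal" and self.open_to_interruption:
            print(f"{self.name} handles now: {message}")
        else:
            self.social_inbox.append(message)  # deferred, not dropped

    def take_a_break(self) -> None:
        """Worker-initiated switch to the social tier."""
        self.open_to_interruption = True
        while self.social_inbox:
            print(f"{self.name} catches up on: {self.social_inbox.popleft()}")

alice = KnowledgeWorker("Alice")
alice.receive("review my draft?", tier="social")
alice.receive("quick call?", tier="personal")  # deferred too: deep-work mode
alice.take_a_break()
```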
From Personal to Collective Thinking
The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers, e.g.:
How to keep tabs on topics and contents to be safeguarded.
How to mediate (i.e. filter and time) the solicitations and contributions issued by the social tier.
How to assess the solicitations and contributions issued by individuals.
How to assess and manage knowledge deemed to remain proprietary.
How to identify and manage knowledge workers’ personal and social circles.
Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc.), taken as a whole they bring up the question of the relationship between personal and collective thinking, and as a corollary, the role of organization in nurturing corporate innovation.
Conclusion: Collaboration Spaces vs Panopticon
As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital spaces into overlapping layers of collaboration, using artificial intelligence to harness cooperation.
Yet, lest uniform and comprehensive transparency bring the worrying shadow of a panopticon within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.
That could be achieved with a layered transparency set along the nature of collaboration:
Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used indifferently by team members.
Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geo-localization; spaces are hinged on domains and focused on shared knowledge.
On-line and networked: digital spaces merging physical spaces and organizational structures.
That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, location, and organization.