EA Capacity & Maturity

Preamble

Appraising enterprises' capability and maturity means navigating between the fuzzy depths of business models and the clearly marked features of supporting systems.

Keeping in touch with environment (Urs Fischer)

Business capabilities are by nature elusive to assess, given the innate diversity and volatility of circumstances and the fact that successes are the exception rather than the rule. By contrast, the assessment of systems capabilities is much easier, as they can be defined in architectural terms.

Digital transformation may help to solve the dilemma by dissolving it: given the merging of systems and organization, the key success factor for enterprises is their capacity to assess changes and opportunities and adjust their processes and architectures accordingly. On that account, performance depends on the dynamic alignment of enterprise representations (maps) with business and technological environments (territories).

enterprise architecture & environments

Based on a broadly accepted architecture paradigm epitomized by the Zachman framework and illustrated by the Pagoda blueprint, enterprise environments and architectures can be defined along three levels (aka tiers or layers):

Enterprises must align their maps and territories
  • Data level, for digital environment, operations, and platforms; described by physical and analytical models.
  • Information level, for systems and engineering; described by logical and functional models.
  • Knowledge level, for business objectives and organization; described by conceptual and process models.

That framework can be used to assess enterprises' capacity to change independently of business specificities.

DEALING WITH CHANGES

Whereas absolute measurements are tied to valuation contexts, relative ones can be ecumenical, hence the benefit of targeting the capacity to change instead of trying to measure capacity by itself.

As far as enterprise architectures are concerned, changes can originate from business or technological environments, the former at process level, the latter at application level.

To begin with, as much change as possible should be dealt with at application level through organizational, functional, or operational adjustments, without affecting architectural assets. That is to be achieved through architectures' versatility and plasticity (aka agility).

When architectural changes are needed, their footprint can be layered in terms of Model Driven Architecture (MDA), i.e computation independent (CIM), platform independent (PIM), and platform specific (PSM) models.

Change in Enterprise Architectures

Ideally, changes should spread top-down from computation independent models to platform specific ones, with or without affecting platform independent ones. On that account, the footprint of changes rooted in the technical environment should be circumscribed to platform adaptations and charted by platform specific models.

By contrast, changes rooted in the business environment could affect any or all architecture layers (a sketch follows the list):

  • Platform specific (PSM), e.g when business logic is implemented by rules engines.
  • Platform independent (PIM), e.g new business functions.
  • Computation independent (CIM), e.g new business processes.
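As a rough illustration of that layering, a change request could be tagged with the MDA models it affects. The Python sketch below is only indicative, with hypothetical names and deliberately simplistic rules:

    from dataclasses import dataclass, field
    from enum import Enum

    class Layer(Enum):
        CIM = "computation independent"   # organization & business processes
        PIM = "platform independent"      # systems functionalities
        PSM = "platform specific"         # platforms & technologies

    @dataclass
    class Change:
        description: str
        origin: str                       # "business" or "technology"
        footprint: set = field(default_factory=set)

    def impacted_layers(change: Change) -> set:
        """Return the set of MDA layers affected by a change (illustrative rules only)."""
        if change.origin == "technology":
            # changes rooted in technical environments stay within platform adaptations
            return {Layer.PSM}
        # changes rooted in the business environment may reach any or all layers
        return change.footprint or {Layer.CIM}

    print(impacted_layers(Change("new business process", "business", {Layer.CIM, Layer.PIM})))
    print(impacted_layers(Change("database upgrade", "technology")))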

That model-based approach can then be used to define enterprise changes in terms of entropy.

Entropy & capacity to change

Change is a matter of time, especially for business, and a delicate balance has to be struck between assessments (which improve when given time, until they become redundant) and commitments (which risk missing opportunities if kept waiting for too long).

The issues can be expressed by applying the OODA (Observation, Orientation, Decision, Action) loop to system engineering:

  • Changes in business and technology environments are observed at digital (e.g data mining) or conceptual level (e.g business intelligence) (a).
  • Assessment deals with the reliability of observations as well as their meaning with regard to enterprise objectives, organization, and systems (b).
  • Policies are updated and decisions made regarding the adjustment of objectives, resources, organization, or assets (c).
  • Decisions are implemented as technical and business commitments (d).
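A bare-bones rendering of one pass through that loop may help to fix ideas; the functions below are placeholders standing for stages (a) to (d), not an actual engineering workflow:

    def observe(environment):
        """(a) Collect observations of changes in business and technology environments."""
        return environment["changes"]

    def orient(observations, objectives):
        """(b) Assess the reliability and meaning of observations with regard to objectives."""
        return [o for o in observations if o["reliable"] and o["topic"] in objectives]

    def decide(assessment):
        """(c) Update policies and decide on adjustments of objectives, resources, organization, or assets."""
        return [{"adjust": o["topic"]} for o in assessment]

    def act(decisions):
        """(d) Implement decisions as technical and business commitments."""
        return [f"commitment on {d['adjust']}" for d in decisions]

    environment = {"changes": [{"topic": "pricing", "reliable": True},
                               {"topic": "rumor", "reliable": False}]}
    print(act(decide(orient(observe(environment), objectives={"pricing"}))))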

On that basis, the capacity to change depends on:

  • Osmosis: quality (accuracy, reliability,…), delay, and automaticity of observations with regard to changes in environments (data mining).
  • Operational traceability across decisions, actions and observations (process mining, verification).
  • Alignment of business and digital environments (validation).
  • Consistency of architecture models (CIMs, PIMs, PSMs).
  • Value chains and lean engineering: business logic (CIMs) directly embedded in software designs (PSMs).
Capacity to Change

With the benefits of digital transformation, these dimensions should be defined in terms of information processing.

osmosis & blind spots

The generalization of digital exchanges between enterprises and their environments brings back the concept of entropy, defined by cybernetics as the quantum of energy within a system that cannot be put to use. Applied to enterprises, entropy can be understood as a blind spot on environment data, arguably a critical hindrance to their capacity to move and adjust. The primary objective should therefore be to minimize that blind spot, i.e to maximize the outcome of data processing.

As digital environments bring about a level data playing field, enterprises have to make better sense of it to gain a competitive edge; hence the importance of the distinction between data and information, the former obtained from the environment, the latter obtained by processing the former. On that basis, the capacity to change, defined as the opposite of entropy, is determined by the way data is processed into information.

Digital osmosis, i.e the exchange of digital data between enterprise operations and environments, is clearly a primary factor: inbound streams can be mined so as to provide comprehensive and timely snapshots, and outbound streams can be woven into business processes, enhancing the capacity to translate decisions into action.

Digital osmosis could also bolster lean engineering with the direct integration of (digital) business logic into software (e.g using rules engines), reinforcing the shortcuts between observations and decisions whenever orientation can be avoided. That would also enhance traceability between platform specific (PSMs) and operational models, as well as the monitoring of actions and the analysis of feedback (process mining).

Osmosis could be easily (if not accurately) estimated as the ratio of digitized flows to all data flows, with measurements weighted by the delays between observations and data processing.
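A back-of-the-envelope version of that estimate could look like the sketch below; the exponential decay used to weight delays is an arbitrary choice introduced for the example, not part of the argument:

    from math import exp

    def osmosis_ratio(flows, half_life_days=7.0):
        """Delay-weighted share of digitized flows in the whole of data flows.
        `flows` is a list of (volume, digitized, delay_days) tuples."""
        decay = 0.693 / half_life_days  # ln(2) / half-life
        total = sum(v * exp(-decay * d) for v, _, d in flows)
        digital = sum(v * exp(-decay * d) for v, dig, d in flows if dig)
        return digital / total if total else 0.0

    # e.g two digitized flows processed quickly, one manual flow processed late
    print(osmosis_ratio([(100, True, 0.5), (50, True, 2), (80, False, 10)]))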

A taxonomy of changes

Digital and business environments are not to be confused, and there is no reason to assume that their respective changes tally. As it happens, digital transformation may reinstate organizations at the nexus of changes by providing powerful leverage on systems; and if entropy is considered, that is to be achieved through symbolic representations, aka models.

While names may vary, a distinction is generally made between changes according to their horizon, shorter for operational or tactical ones, longer for strategic ones. As so often with quantitative classifications, that understanding has been of limited use given the diversity of time-frames across industries. The digital transformation opens the door to a qualitative approach, defining changes according to the nature of their footprint:

  • From the business perspective, it should restate the primacy of organization for the harnessing of IT benefits.
  • From the architecture perspective, it would rank assets according to “digital modality”: symbolic (information, knowledge), tangible (e.g platforms), or a combination of both (functional architecture).
Changes can be defined according to the digital modality of their footprint

Taking the strategic perspective, changes in technical or economic factors are to affect assets and processes for a large and often undetermined number of production cycles; they must consequently be set in time-frames extending beyond managed horizons (actual chronologies depend on industry specificities). Despite regular updates, supporting models and hypotheses may lose their relevance during the intervals, with the resulting entropy hampering the assessment of opportunities and policies. Such discrepancies can be circumscribed if planned organization and computation independent models (CIMs) are systematically checked against observations.

Compared to strategic ones, functional changes can be aligned with a limited number of production cycles, which irons out most of the discrepancies between maps and territories associated with strategic planning. Instead, entropy could arise from a lack of transparency and traceability between the models used to map organization and processes (CIMs) to functional (PIMs) and technical (PSMs) architectures.

Finally, operational changes can be carried out at process level, without affecting architectures. In that case entropy could result from a misalignment between the granularity of commitments and observations.

On that basis, planning and managing changes in digital and business environments should be driven by models.

ENTROPY & MATURITY

As defined by cybernetics, entropy can be explained by the discrepancies between environment states (aka micro-states) and their representation (macro-states). Applied to enterprise environments and architectures, that would mean territories and operations for the former, maps and policies for the latter.

On that account the maturity of an organization should be assessed with regard to its ability to manage changes within and across environments:

  • Consistency of changes within environments: between territories and operations (digital environments), and between maps and policies (business environments).
  • Validity of changes across environments: between maps and territories, and between policies and operations.
Consistency is checked within environments, validity is checked across environments

Maturity levels could then be set according to the scope of managed entropy:

  1. Digital osmosis: all exchanges with environments come with a digital counterpart, sorted with regard to operational data, data attached to managed information, and environment data detached from managed information.
  2. Digital value chains: digital integration of business and engineering processes ensuring transparency and traceability along value chains.
  3. Organizational pivot: value chains can be redefined so as to make the best of information assets.
  4. Knowledge based architectures: enterprise layers (platforms, systems, organization) are aligned with data, information, and knowledge layers, leveraging the benefits of machine learning across enterprise organization and systems.
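The scale can be sketched as an ordered enumeration, with an assessment reduced to the highest level whose criteria, and all lower ones, are met; the names and criteria keys below are mere placeholders:

    from enum import IntEnum

    class Maturity(IntEnum):
        NONE = 0
        DIGITAL_OSMOSIS = 1          # digital counterparts for all exchanges with environments
        DIGITAL_VALUE_CHAINS = 2     # transparency & traceability along value chains
        ORGANIZATIONAL_PIVOT = 3     # value chains redefined around information assets
        KNOWLEDGE_ARCHITECTURE = 4   # enterprise layers aligned with data/information/knowledge

    def assess(criteria: dict) -> Maturity:
        """Return the highest maturity level reached without gaps in the lower ones."""
        level = Maturity.NONE
        for step in sorted(Maturity)[1:]:
            if criteria.get(step.name, False):
                level = step
            else:
                break
        return level

    print(assess({"DIGITAL_OSMOSIS": True, "DIGITAL_VALUE_CHAINS": True}))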

As it’s safe to assume that scaling maturity levels across enterprise architectures can only be carried out progressively, progress has to be backed up by a comprehensive and ecumenical repository of physical and symbolic resources and assets, and be supported by workflows combining iterative (continuous and business driven) and model-based (phased, architecture driven) engineering processes.


Squared Outline: Metrics

As is the case with every measurement, software engineering metrics must be defined by clear targets and purposes, and using them shouldn’t affect either of them.

On that account, a clear distinction should be maintained between business value (set independently of supporting systems), the size and complexity of functionalities, and the work effort needed for their development. As far as systems are concerned, the Function Points approach can be defined with regard to the nature of requirements (business or system) and their scope (primary for artifacts, adjustment for architectures):

  1. Measures of business requirements are based on intrinsic domain complexity (domain function points, or DFP), adjusted for activities (adjustment function points, or AFP); they are set at artifact level independently of operational constraints or supporting systems.
  2. Business requirements metrics are added up and adjusted for operational constraints.
  3. Functional requirements measures target the subset of business requirements meant to be supported by systems. As such they are best defined at use case level (use case function points, or UCFP).
  4. Metrics for quality of service may be specific to functionalities or contingent on architectures and operational constraints.

Whatever the difficulties of implementation, function points remain the only principled approach to software and systems assessment, and consequently to reliable engineering costs/benefits analysis and planning.
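As a purely hypothetical illustration of how such measures could be combined, the sketch below strings together domain, adjustment, and use case function points; the weights and formulas are placeholders, not the actual standard:

    def domain_function_points(objects: int, relationships: int) -> float:
        """Hypothetical DFP: intrinsic domain complexity, set at artifact level."""
        return objects * 2.0 + relationships * 1.5

    def adjusted_function_points(dfp: float, activity_factor: float) -> float:
        """Hypothetical AFP: DFP adjusted for activities (factor around 1.0)."""
        return dfp * activity_factor

    def use_case_function_points(supported_afp: list, constraint_factor: float) -> float:
        """Hypothetical UCFP: business measures actually supported by the system,
        adjusted for operational constraints."""
        return sum(supported_afp) * constraint_factor

    dfp = domain_function_points(objects=12, relationships=20)
    afp = adjusted_function_points(dfp, activity_factor=1.1)
    print(use_case_function_points([afp, 15.0], constraint_factor=1.2))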


Value Chains & EA

Preamble

The seamless integration of enterprise systems into digital business environments calls for a resetting of value chains with regard to enterprise architectures, and more specifically supporting assets.

Value Chains & Support (Daniel Jacoby)

Concerning value chains, the traditional distinction between primary and supporting activities is undermined by the generalization of digital flows, rapid changes in business environments, and the ubiquity of software agents. As for assets, the distinction could even disappear due to the intertwining of tangible resources with organization, information, and knowledge.

These difficulties could be overcome by bypassing activities and drawing value chains directly between business processes and systems capabilities.

From Activities to Processes

In theory value chains are meant to track down the path of added value across enterprise architectures; in practice their relevancy is contingent on specificity: fine when set along silos, less so if set across business functions. Moreover, value chains tied to static mappings of primary and support activities risk losing their grip when maps are redrawn, which is bound to happen more frequently with digitized business environments.

These shortcomings can be fixed by replacing primary activities by processes and support ones by system capabilities, and redefining value chains accordingly.

Substituting processes and capabilities for primary and support activities

From Processes to Functions & Capabilities

Replacing primary and support activities with processes and functions doesn’t remove value chains’ primary issue, namely their path along orthogonal dimensions.

That’s not to say that business processes cannot be aligned with self-contained value chains, but insofar as large and complex enterprises are concerned, value chains are to be set across business functions. Thus the benefit of resetting the issue at enterprise architecture level.

Borrowing the EA description from the Zachman framework, the mapping of processes to capabilities is meant to be carried out through functions, with business processes on one hand and architecture capabilities on the other.

While nothing can be assumed about the number of functions or the number of crossed processes, EA primary capabilities can be clearly identified, and functions classified accordingly, e.g: boundaries, control, entities, computation. That classification (non exclusive, as symbolized by the crossed pentagons) coincides with the nature of adjustments induced by changes in business environments:

  • Diversity and flexibility are to be expected for interfaces to systems’ clients (users, devices, or other systems) and triggering events, so as to tally with channels and changes in business and technology environments.
  • Continuity is critical for the identification and semantics of business objects whose consistency and integrity have to be maintained along time independently of users and processes.
  • In between, changes in process control and business logic should be governed by business opportunities, independently of channels or platforms.

Mapping Processes to Architectures Functions & Capabilities

Processes, primary or otherwise, would be sliced according to the nature of supporting capabilities, e.g: standalone (a), real-time (b), client-server (c), orchestration service (d), business rules (e), DB access (f).
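In code-like terms, that slicing amounts to tagging each process step with the nature of its supporting capability and grouping consecutive steps accordingly; the sample process below (a car rental request) is illustrative only:

    CAPABILITIES = {"standalone", "real-time", "client-server",
                    "orchestration service", "business rules", "DB access"}

    def slice_process(steps):
        """Group consecutive process steps by the nature of their supporting capability."""
        slices, current, kind = [], [], None
        for name, capability in steps:
            assert capability in CAPABILITIES, f"unknown capability: {capability}"
            if capability != kind and current:
                slices.append((kind, current))
                current = []
            current.append(name)
            kind = capability
        if current:
            slices.append((kind, current))
        return slices

    rental_request = [("capture request", "client-server"),
                      ("check driver licence", "business rules"),
                      ("price the rental", "business rules"),
                      ("record booking", "DB access")]
    print(slice_process(rental_request))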

Value chains could then be attached to business processes along these functional guidelines.

Tying Value Chains to Processes

Bypassing activities is not without consequences for the meaning of value chains, as the original static understanding is replaced by a dynamic one: since value chains are now associated with specific operations, they are better understood as changes than as absolute levels. That semantic shift reflects the new business environment, with manufacturing and physical flows having been replaced by mixed (SW and HW) engineering and digital flows.

Set in a broader economic perspective, the new value chains could be likened to a marginal version of returns on capital (ROC), i.e the delta of some ratio between value and contributing assets.

Digital business environments may also make value chains easier to assess, as changes can be directly traced to requirements at enterprise level, and more accurately marked across systems functionalities:

  • Logical interfaces (users or systems): business value tied to interactions with people or other systems.
  • Physical interfaces (devices): business value tied to real-time interactions.
  • Business logic: business value tied to rules and computations.
  • Information architecture: business value tied to systems information contents.
  • Processes architecture: business value tied to processes integration.
  • Platform configurations: business value tied to resources deployed.

Marking Value Chains in terms of changes

The next step is to frame value chains across enterprise architectures in order to map values to contributing assets.

Assets & Organization

Value chains are arguably of limited use without weighting assets’ contribution. On that account, a major (if underrated) consequence of digital environments is the increasing weight of intangible assets brought about by the merging of actual and information flows and the rising importance of economic intelligence.

For value chains, that shift presents a double challenge: first because of the intrinsic difficulty of measuring intangibles, then because even formerly tangible assets are losing their homogeneity.

Redefining value chains at enterprise architecture level may help with the assessment of intangibles by bringing all assets, tangible or otherwise, into a common frame, reinstating organization as its nexus:

  • From the business perspective, that framing restates the primacy of organization for the harnessing of IT benefits.
  • From the architecture perspective, the centrality of organization appears when assets are ranked according to modality: symbolic (e.g culture), physical (e.g platforms), or a combination of both.

Ranking assets according to modality

On that basis enterprise organization can be characterized by what it supports (above) and how it is supported (below). Given the generalization of digital environments and business flows, one could then take organization and information systems as proxies for the whole of enterprise architecture and draw value chains accordingly.

Value Chains & Assets

Trendy monikers may differ but information architectures have become a key success factor for enterprises competing in digital environments. Their importance comes from their ability to combine three basic functions:

  1. Mining the continuous flows of relevant and up-to-date data.
  2. Analyzing and transforming data, feeding the outcome to information systems.
  3. Putting that information to use in operational and strategic decision-making processes.

A twofold momentum is behind that integration: with regard to feasibility, it can be seen as a collateral benefit of the integration of actual and digital flows; with regard to opportunity, it can give a decisive competitive edge when fittingly carried through. That makes information architecture a reference of choice for intangible assets.

Assets Contribution to Value Chains

Insofar as enterprise architecture is concerned, value chains can then be threaded through three categories of assets:

  • Tangibles: operational resources supporting processes execution.
  • Organization: roles, tasks, and responsibilities associated with processes.
  • Intangibles: data mining, information contents, business intelligence, knowledge management, and decision-making.

That approach would simultaneously meet with the demands of digital environments, and add practical meaning to enterprise architecture as a discipline.


Squaring Software To Value Chains

Preamble

Digital environments and the ubiquity of software in business processes introduce a new perspective on value chains and the assessment of supporting applications.

Business & Supporting Processes (Michel Blazy)

At the same time, as software designs cannot be detached from architecture capabilities, the central question remains that of allocating costs and benefits between primary and support activities.

Value Chains & Activities

The concept of value chain introduced by Porter in 1985 is meant to encompass the set of activities contributing to the delivery of a valuable product or service for the market.

Porter’s generic model

Starting from Porter’s generic model, various value chains have been refined according to business-specific categories for primary and support activities.

Whatever their merits, these approaches are essentially static and fall short when the objective is to trace changes induced by business developments; and that flaw may become critical with the generalization of digital business environments:

  • Given the role and ubiquity of software components (not to mention smart ones), predefined categories are of little use for impact analysis.
  • When changes in value chains are considered, the shift of corporate governance towards enterprise architecture puts the focus on assets contribution, cutting down the relevance of activities.

Hence the need to take into account changes, software development, and enterprise architecture capabilities.

Value Added & Software Development

While the growing interest in value chains in software engineering is tied to agile approaches and business driven developments, the issue can be put in the broader perspective of project planning.

With regard to assessment, stakeholders start with business opportunities and look at supporting systems from a black box perspective; in return, software providers are to analyze requirements from a white box perspective, and estimate the corresponding development effort and delivery time.

Assuming transparency and good faith, both parties are meant to eventually align expectations and commitments with regard to features, prices, and delivery.

With regard to policies, stakeholders put the focus on returns on investment (ROI), obtained from total cost of ownership, quality of service, and timely delivery. Providers for their part try to minimize development costs while taking into account effective use of resources and opportunity costs. As it happens, those objectives may be pursued as non-zero-sum games:

  • Business stakeholders foretell the actualized returns (a) to be expected from the functionalities under consideration (b).
  • Providers consider the solutions (b’) and estimate actualized costs (a’).
  • Stakeholders and providers agree on functionalities, prices and deliveries (c).

Developing the value chain

Assuming that business and engineering environments are set within different time-frames, there should be room for non-zero-sum games winding up to win-win adjustments on features, delivery, and prices.
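A toy rendering of that adjustment simply compares the stakeholders’ actualized returns with the provider’s actualized costs; the discount rates and cash flows below are invented for the example:

    def present_value(cash_flows, rate):
        """Discount a sequence of yearly cash flows to present value."""
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

    # (a) stakeholders' actualized returns expected from the functionalities (b)
    returns = present_value([40, 60, 60], rate=0.08)
    # (a') provider's actualized costs of the corresponding solutions (b')
    costs = present_value([70, 20, 10], rate=0.05)

    # (c) an agreement is possible when the surplus leaves room for both parties
    surplus = returns - costs
    print(f"returns={returns:.1f}, costs={costs:.1f}, room for a win-win: {surplus > 0}")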

Continuous vs Phased Alignments

Notwithstanding the constraints of strategic planning, business processes are by nature opportunistic, and their ability to be adjusted to circumstances is becoming all the more critical with the generalization of digital business environments.

Broadly speaking, the squaring of supporting applications to business value can be done continuously or by phases:

  • Phased alignments start with some written agreements with regard to features, delivery, and prices before proceeding with development phases.
  • Continuous alignment relies on direct collaboration and iterative development to shape applications according to business needs.

Phased vs Continuous Adjustments

Beyond sectarian controversies, each approach has its use:

  • Continuous schemes are clearly better at harnessing value chains, provided that project teams are allowed full project ownership, with decision-making freed of external dependencies or delivery constraints.
  • Phased schemes are necessary when value chains cannot be uniquely sourced as they take root in different organizational units, or if deliveries are contingent on technical constraints.

In any case, it’s not a black-and-white alternative as work units and projects’ granularity can be aligned with differentiated expectations and commitments.

Work Units & Architecture Capabilities

While continuous and phased approaches are often opposed under the guises of Agile vs Waterfall, that understanding is misguided as it extends the former to a motley of self-appointed agile schemes and reduces the latter to an ill-famed archetype.

Instead, a reasoned selection of development models should be contingent on the problems at hand, and that can be best achieved by defining work units bottom-up with regard to the capabilities targeted by requirements:

Work units should be defined with regard to the capabilities targeted by requirements

Development patterns could then be defined with regard to architecture layers (organization and business, systems functionalities, platforms implementations) and capabilities footprint:

  • Phased: work units are aligned with architecture capabilities, e.g: business objects (a), business logic (b), business processes (c), user interfaces (d).
  • Iterative: work units are set across capabilities and defined dynamically according to development problems.

A common development framework for phased and iterative projects

That would provide a development framework supporting the assessment of iterative as well as phased projects, paving the way for comprehensive and integrated impact analysis.
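A minimal decision helper along those lines could tag each work unit as phased or iterative depending on its capability footprint; the capability names follow the list above, and the rule itself is only a sketch:

    ARCHITECTURE_CAPABILITIES = {"business objects", "business logic",
                                 "business processes", "user interfaces"}

    def development_pattern(footprint: set) -> str:
        """Phased when a work unit is aligned with a single architecture capability,
        iterative when it is set across several of them."""
        unknown = footprint - ARCHITECTURE_CAPABILITIES
        if unknown:
            raise ValueError(f"unknown capabilities: {unknown}")
        return "phased" if len(footprint) == 1 else "iterative"

    print(development_pattern({"business objects"}))                   # phased (a)
    print(development_pattern({"business logic", "user interfaces"}))  # iterative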

Value Chains & Architecture Capabilities

As far as software engineering is concerned, the issue is less the value chain itself than its change, namely how value is to be added along the chain.

Given the generalization of digitized and networked business environments, that can be best achieved by harnessing value chains to enterprise architecture capabilities, as can be demonstrated using the Caminao layout of the Zachman framework.

To summarize, the transition to this layout is carried out in two steps:

  1. Conceptually, Zachman’s original “Why” column is translated into a line running across column capabilities.
  2. Graphically, the five remaining columns are replaced by embedded pentagons, one for each architecture layer, with the new “Why” line set as an outer layer linking business value to architecture capabilities:

Mapping value chains to enterprise architectures capabilities

That apparently humdrum transformation entails a significant shift in focus and practicality:

  • The focus is put on organizational and business objectives, masking the ones associated with systems and platform layers.
  • It makes room for differentiated granularity in the analysis of value, some items being anchored to specific capabilities, others involving cross dependencies.

Value chains can then be charted from business processes to supporting architectures, with software applications in between.

Impact Analysis

As pointed out above, the crumbling of traditional fences and the integration of enterprise architectures into digital environments undermine the traditional distinction between primary and support activities.

To be sure, business drive is more than ever the defining factor for primary activities; and computing more than ever the archetype of supporting ones. But in between the once clear-cut distinctions are being blurred by a maze of digital exchanges.

In order to avoid a spaghetti heap of undistinguished connections, value chains are to be “colored” according to the nature of links:

  • Between architecture capabilities: business and organization (enterprise), systems functionalities, or platforms and technologies.
  • Between architecture layers: engineering processes.

Tracing value chains across architectures layers

When set within that framework, value chains could be navigated in both directions:

  • For the assessment of applications developed iteratively: business value could be compared to development costs and architecture assets’ depreciation.
  • For the assessment of features (functional or non functional) to be shared across business applications: value chains will provide a principled basis for standard accounting schemes.

Combined with model based systems engineering, that could significantly enhance the integration of enterprise architecture into corporate governance.

Model Driven Architecture & Value Chains

Model driven architecture (MDA) can be seen as the main (only?) documented example of model based systems engineering. Its taxonomy organizes models within three layers:

  • Computation independent models (CIMs) describe organization and business processes independently of the role played by supporting systems.
  • Platform independent models (PIMs) describe the functionalities supported by systems independently of their implementation.
  • Platform specific models (PSMs) describe systems components depending on platforms and technologies.

Engineering processes can then be phased along architecture layers (a), or carried out iteratively for each application (b).

When set across activities, value chains could be engraved in CIMs and refined with PIMs and PSMs (a). Otherwise, i.e with business value neatly rooted in single business units, value chains could remain implicit along software development (b).


Focus: Individuals in Models

Preamble

Models are meant to characterize categories of given (descriptive models), designed (prescriptive models), or hypothetical (predictive models) individuals.

Identity Matter (Terracotta Soldiers of Emperor Qin Shi Huang)

Insofar as purposes can be kept apart, the discrepancies in targeted individuals can be ironed out, notably by using power-types. Otherwise, e.g if modeling concerns mix business analysis with software engineering, meta-models are introduced as jack-of-all-trades to deal with mixed semantics.

But meta-models generate exponential complexity when used across domains, not to mention the open and fuzzy ones of business intelligence. What is at stake can be better understood through the way individuals are identified and represented.

Partitions & Abstraction

Since models are meant to classify instances with regard to concerns, mixing concerns is to entail mixed classifications, horizontally across domains (e.g business and accounting), or vertically along engineering cycles (e.g business and engineering).

That can be achieved with power-types, meta-classes, or ontologies.

Delegation & Power-types

Given that categories (or classes, or types) represent sets of instances (given, designed, or simulated), they may by themselves be regarded as symbolic instances used to manage features commonly valued by their own members.

Power-types are simultaneously categories and instances.

This approach is consistent as well as effective provided the semantics and identities of instances are shared by all agents concerned, e.g business and technical aspects of car rentals. Yet it falls short on both accounts when abstraction levels are set across domains, inducing connectors with different semantics and increased complexity.

Abstraction & Sub-classes

Sub-classes often appear as a way to overcome the difficulty, as illustrated by Hepp Research’s Vehicle Sales Ontology: despite being set at different abstraction levels, instances for cars and models are defined and identified uniformly.


While, in that case, the model fulfills the substitution principle for external consistency, sub-types would fall short if engineering concerns were taken into account, because the set of individual car models could then differ depending on the perspective.

Meta-classes & Stereotypes

The overuse of meta-classes and stereotypes epitomizes the escapism school of modeling. That may be understandable, if not helpful, for the former which is by nature a free pass to abstraction; less so for the latter which is supposed to go the other way toward the specialization of meta-classes according to specific profiles. It ensues that stereotypes should never be used on their own or be extended by another stereotype.

As it happens, such consistency concerns appear to be easily diluted when stereotypes are jumbled with meta constructs; e.g:

  • Abstract stereotype (a).
  • Strong (aka class) inheritance of abstract stereotype from concrete meta-class (b).
  • Weak inheritance (aka aspect) between stereotypes (c, f).
  • Meta-constraint used to map Vehicle to methodology stereotype (d).
  • Domain specific connector to stereotype (e).


Without comprehensive and consistent semantics for instances and abstractions, individuals cannot be solidly mapped to models:

  • Individual concepts (car, horse, boat, …) have no clear mooring.
  • Actual vehicles cannot be tied to the meta-class or the abstract stereotype.
  • There is no strong inheritance tying individual models and rental cars to an identification mechanism.

Assuming that detachment is not an option, the basis of models must be reset.

Ontologies & open concepts

Put in simple words, ontologies are meant to examine the nature and categories of existence, in general (metaphysics) or in specific contexts. As for the latter, they can be applied to individuals according to the nature of their existence (aka epistemic identity): concepts, documents, categories and aspects.


As it happens, sorting things out makes the whole paraphernalia of meta-classes and stereotypes no longer needed because the semantics of inheritance and associations can be set according to the nature of individuals.

That can be illustrated with OWL 2 and the Caminao Ontological Kernel (CaKe_WIP):

  • Categories associated with business entities managed through symbolic counterparts (_#): rental cars and vehicle models.
  • Partitions (_2): structural (Body style) or functional (Rental group).
  • Instances of actual objects (Rental car: Clio58642) and categories (Model:Clio, Body style: Sedan, Propulsion: combustion). 
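A minimal stand-alone sketch, using rdflib and an invented namespace rather than the actual CaKe kernel, may help to picture how instances, categories, and partitions are kept distinct:

    from rdflib import Graph, Namespace, RDF, OWL

    EX = Namespace("http://example.org/rental#")   # hypothetical namespace, not CaKe
    g = Graph()
    g.bind("ex", EX)

    # categories of business entities managed through symbolic counterparts (_#)
    g.add((EX.RentalCar, RDF.type, OWL.Class))
    g.add((EX.VehicleModel, RDF.type, OWL.Class))

    # partitions (_2): structural (body style) or functional (rental group)
    g.add((EX.Sedan, RDF.type, EX.BodyStyle))
    g.add((EX.GroupB, RDF.type, EX.RentalGroup))

    # instances of an actual object and of categories used as instances
    g.add((EX.Clio58642, RDF.type, EX.RentalCar))   # an actual rental car
    g.add((EX.Clio, RDF.type, EX.VehicleModel))     # a model, i.e a category handled as an instance
    g.add((EX.Clio58642, EX.hasModel, EX.Clio))
    g.add((EX.Clio, EX.hasBodyStyle, EX.Sedan))

    print(g.serialize(format="turtle"))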

With individuals solidly rooted in targeted domains, models can then fully serve their purposes.

What’s the Point

It’s worth remembering that models have to be built for a purpose, which cannot be achieved without a clear understanding of context and targets. Here are some examples:

Quality arguably comes first:

  • External consistency: to be checked by mapping individuals in analysis categories to big data.
  • Internal consistency: to be checked by mapping individuals in design classes to observed or simulated run-time components.
  • Acceptance: automated tests generation.

Then, individuals could be used to map models to past and future, the former for refactoring, the latter for business intelligence.

Refactoring looks to the past but is frustrated by undocumented legacy code. Combined with machine learning, individuals could help to bridge the gap between code and models.

Business Intelligence looks the other way, as it is meant to map hypothetical business objects and behaviors to the structures and semantics already managed by information systems. In a reversal of the legacy case, individuals could provide cues to new business meanings.

Finally, maturity assessment and optimization of enterprise processes fully depend on the reliability of their basis.


Beans Must Be Counted, One Way And The Other

Preamble

Conversations across software engineering forums sometimes reveal unexpected views, as is the case for the benefits of accountability.

Counting Paper Beans (Pieter Brueghel the Younger)

One would assume that competition would impel enterprises to scrutiny with regard to resources employed and product outcomes, pushing for the assessment of internal activities based on some agreed metrics. And yet, now and again, software development is viewed as a boutique occupation, if not an art pursuit, carried out by creative craftsmen for enlightened if demanding patrons; a vocation too distinctive to be gauged by common yardsticks.

Difficulties of Oversight

Setting apart creative delusions, the assessment of software development is effectively confronted with rational as well as practical obstacles.

To begin with rationality, and unlike traditional products, there is no market pricing mechanism that could match software development costs with customers’ value. As a consequence business stakeholders and systems engineers prefer to play safe and keep their respective assessments on the opposed banks of the customer/provider divide.

As for the practicality of assessments, the choice is between idiosyncratic approaches (e.g users’ points) and reasoned ones (essentially function points). The former are by nature specific and subject to changes in business opportunities, whereas the latter are plagued by implementation plights that make them both costly and unreliable.

Yet, the diluting of IT systems in business environments is making that conundrum irrelevant: the fusing of business processes and supporting software is blanketing the discontinuities between business value and development costs.

Perils of Oversight

Given the digital integration between systems and business environments and the part played by software in production, marketing and operations, enterprises can no longer ignore the economics of software development.

As far as enterprises are concerned, economics uses prices for two key purposes, external and internal.

With regard to their business environment, enterprises need metrics to price the resources they could buy and the products they could sell; their competitive edge fully depends on the thoroughness and accuracy of both.

With regard to their internal governance, enterprises need metrics to gauge the efficiency of their factors and the maturity of their processes, and allocate resources accordingly. That internal assessment is the basis of their versatility and plasticity:

  • Confronted with continuous, frequent, and often abrupt changes in business environments, enterprises must be able to adapt their activities without having to change their architectures. That cannot be achieved without timely and accurate assessments of the way their resources are put to use.
  • Conversely, enterprises may have to change their architectures without affecting their performances; that cannot be achieved without comprehensive and accurate assessments of alternative options, organizational as well as technical.

To summarize, the spread and intricacy of the software footprint over both sides of the crumbling fences between enterprise systems and business environments make software economics a necessary component of enterprise governance, so a tally of software beans should not be optional.


Zebras cannot be saddled or harnessed

As far as standards go, the more there are, the less they’re worth.

Read my code, if you can …

What have we got

Assuming that modeling languages are meant to build abstractions, one would expect their respective ladders converging somewhere up in some conceptual or meta cloud.

Assuming that standards are meant to introduce similarities into diversity, one would expect clear-cut taxonomies to be applied to artifacts designs.

Instead one will find a bounty of committees, bloated specifications, and an open-minded if clumsy language confronted with a number of specific ones.

What is missing

Given the constitutive role of mathematical logic in computing systems, its quasi-absence from the modeling of their functional behavior is dumbfounding. Formal logic, set theory, semiotics, you name it: every aspect of systems modeling can rely on a well established corpus of concepts and constructs. And yet, while these scientific assets may be used in labs for research purposes, they remain overlooked for any practical use; as if laser technology had been kept out of consumer markets for almost a century.

What should be done

The current state of affairs can be illustrated by a Horse vs Zebra metaphor: the former with a long and proved track record of varied, effective and practical usages, the latter with almost nothing to its credit except its archetypal idiosyncrasy.

Like horses, logic can be harnessed or saddled to serve a wide range of purposes without losing anything of its universality. By contrast, concurrent standards and modeling languages can be likened to zebras: they may be of some use to their owners, but from an outward perspective, what remains is their distinctive stripes.

So the way out of the conundrum seems obvious: get rid of the stripes and put back the harness of logic on all the modeling horses.

What Can Be Done

Frameworks are meant to promote consensus and establish clear and well circumscribed common ground; but the whopping range of OMG’s profiles and frameworks doesn’t argue in favor of meta-models.

Ontologies by contrast are built according to the semantics of domains and concerns. So whereas meta-models have to mix lexical, syntactic, and semantic constructs, ontologies can be built on well delineated layers.

An ontological kernel has been developed as a Proof of Concept of the benefits of ontologies for enterprise architecture, the purpose being to extend the Caminao enterprise architecture paradigm to contexts and environments. That kernel has been built on two principles:

  • A clear-cut distinction between truth-preserving representation and domain specific semantics.
  • Profiled ontologies designed according to the nature of contents (concepts, documents, or artifacts), layers (environment, enterprise, systems, platforms), and contexts (institutional, professional, corporate, social).

A Protégé/OWL 2 implementation ensures portability and interoperability. A beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe).


Agile Collaboration & Enterprise Creativity

Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.


Trust & Communication (Juan Munoz)

Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.

Collaboration & Thinking Flows

Collaboration is a means to an end. To be of any use exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not they may burn themselves out with hollow considerations blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.

Taking its cue from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.

Dunbar Numbers & Collaboration Basins

Studying the grooming habits of social primates, psychologist Robin Dunbar came to the conclusion that the size of the social circles that individuals of a living species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.

Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:

  • On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
  • On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.

Knowledge Workflows

The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:

  • Data and information management: build the symbolic descriptions of contexts, concerns, and means.
  • Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
  • Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
  • Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …

Taking into account constraints and dependencies between the tasks, the aim would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g unfocused meetings).

With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.

With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.

At first sight some lessons could be drawn from lean manufacturing, yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.

Iterative Knowledge Processing

A simple preliminary step is to check the applicability of agile principles by replacing “software” by “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing:

  • Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
  • Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
  • Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.

That scheme meets three of the basic tenets of the agile paradigm, i.e open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in driving seats (through exit conditions). Yet it still doesn’t deal with creativity and the benefits of collaboration for knowledge workers.

Thinking Space & Pace

The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e the possibility to insert knowledge during the processing: external for material flows (e.g in manufacturing), internal for symbolic flows (e.g in software engineering and knowledge processing).

Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on-the-fly, they don’t grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep them open to new understandings and opportunities. For the former creativity is the means to an end, for the latter it’s the end in itself, with collaboration as the means.

Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms, backlogs and time-boxing:

  • Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
  • Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the contents and stride of their thinking streams.

It ensues that when creativity is the primary success factor, standard agile collaboration mechanisms fall short, and more intelligent collaboration schemes have to be introduced.

Creativity & Collaboration Tiers

The synchronization of creative activities has to deal with conflicting objectives:

  • On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
  • On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.

When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.

Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.

On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.

The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.

From Personal to Collective Thinking

The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers, e.g:

  • How to keep tabs on topics and contents to be safeguarded.
  • How to mediate (i.e filter and time) the solicitations and contributions issued by the social tier.
  • How to assess the solicitations and contributions issued by individuals.
  • How to assess and manage knowledge deemed to remain proprietary.
  • How to identify and manage knowledge workers’ personal and social circles.

Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc.), taken as a whole they bring up the question of the relationship between personal and collective thinking, and as a corollary, the role of organization in nurturing corporate innovation.

Conclusion: Collaboration Spaces vs Panopticon

As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital spaces into overlapping layers of collaboration spaces, using artificial intelligence to harness cooperation.

Yet, lest uniform and comprehensive transparency bring the worrying shadow of a panopticon within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.

That could be achieved with a layered transparency set along the nature of collaboration:

  • Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used indifferently by team members.
  • Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geo-localization; spaces are hinged on domains and focused on shared knowledge.
  • On-line and networked: digital spaces merging physical spaces and organizational structures.

That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, location, and organisation.


Quality Circles

Generally speaking, quality may refer to intrinsic properties, functional characteristics, or some external yardstick. With regard to software engineering it would mean code, users experience, and operations, each with its own specific stakeholders and criteria.

Circling Quality Metrics (Martine Franck)

On one side, traditional phased approaches to QA are meant to deal with those different aspects, yet they fall short when those facets are woven together across enterprise architectures and business environments. On the other side, agile quality solutions may also fail to cope with transverse business functions shared across architectures. Hence the need for a bird’s-eye view putting quality into a broader enterprise perspective.

Who Cares for Quality

Whatever the attributes considered, quality should clearly encompass actual products as well as their uses. For that purpose quality has to be assessed with regard to the requirements as expressed by business stakeholders, users, or systems engineers and administrators. Given the constraints and specificity of changing environments, objective yardsticks are of limited use and quality is often to be assessed for the lack thereof:

  • Business requirements: the product doesn’t meet expectations with regard to business contents (objects and logic).
  • Functional requirements: while the product meets business requirements, the part played by supporting systems doesn’t meet users’ expectations.
  • Quality of service: while the product meets business and functional requirements, users’ experience doesn’t meet expectations.
  • Technical requirements: while the product meets users’ expectations (business, functional, and ease of use), there are problems with deployment, maintenance, or operations.

Quality is best defined with regard to requirements and checked with regard to architectures

Crossing those concerns, quality assessment has to deal with two primary challenges:

  • Since assessment at each level can be conditioned by lower levels, outcomes must be described and traced accordingly. That is to be the role of quality management.
  • Since assessment has to cover both products and their use during their shelf life, uncertainty will have to be taken into account. That is to be the role of quality assurance.

A third aspect can be added for externalities, i.e factors whose impact cannot be clearly or uniquely attributed: external risks are not under control, ergonomics cannot be accurately measured, and the assessment of ROI for process improvement remains a matter of insight.

Quality Management & Documentation

The primary objective of quality management is to identify, define, and track the targeted outcomes and the factors deemed to affect their characteristics: contracts, products traceability, models reuse, tests, etc.

Depending on target and development model, management footprint can be defined at three levels of detail:

  • With regard to the use of products in their operational context, the focus is to be on deployed systems compared to textual specifications (a).
  • With regard to the intrinsic properties of deliverables, the focus is to be extended to software components (b).
  • When products are to be deployed in different environments, or to be maintained or modified along time, additional documentation will be necessary to trace changes to functional (c) and enterprise (d) architectures.

Assessment at each level may be conditioned by lower levels

In any case, i.e with or without intermediate documentation, traceability is to be a cornerstone of quality management (a minimal sketch follows the list):

  • Business processes with regard to business objectives, e.g how to assess insurance premiums or compute missile trajectory.
  • Code with regard to textual requirements.
  • System functionalities with regard to business processes. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
  • System components as technical implementations of functionalities targeted to different users, locations, and configurations.
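As an illustration (not any standard scheme), such traceability links can be recorded as a small directed graph between artifacts; the artifact names below are hypothetical, and the sketch only shows how a chain from component back to business objective could be queried.

    # Minimal traceability sketch: hypothetical artifacts and the kinds of
    # links listed above. The point is a queryable chain from implementation
    # back to business objectives.
    from collections import defaultdict

    links = [  # (artifact, traces to)
        ("process:premium-assessment", "objective:risk-based-pricing"),
        ("usecase:quote-premium", "process:premium-assessment"),
        ("function:rating-engine", "usecase:quote-premium"),
        ("component:rating-service", "function:rating-engine"),
        ("code:rating_service.py", "requirement:REQ-042"),
    ]

    graph = defaultdict(list)
    for source, target in links:
        graph[source].append(target)

    def trace(artifact):
        """Walk the traceability chain upward from a given artifact."""
        chain, current = [artifact], artifact
        while graph[current]:
            current = graph[current][0]  # follow the first recorded link
            chain.append(current)
        return chain

    print(" -> ".join(trace("component:rating-service")))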

A further dimension of traceability is required when quality assurance has to deal with uncertainty, risks, and decision-making.

From Management to Assurance

The objective of quality assurance is to define, carry out, and monitor operations in order to improve the characteristics concerned and reduce the probability that something will go amiss during the planned shelf life of products.

For that purpose assurance footprint and granularity must be aligned with the layers defined by quality management:

  • Integration and acceptance tests are carried out in reference to requirements on the assumption that software components have been validated.
  • Code checking and unit tests are carried out in reference to business and functional requirements on the assumption that their consistency has been checked.
  • External consistency is checked with regard to business requirements independently of functional or technical ones.
  • Internal consistency is checked with regard to functional requirements on the assumption that the business requirements (external) consistency has been checked.

Footprint & granularity of management and assurance must be congruent

Those operations, meant to deal with the quality of each layer, have to be combined with schemes of secure transformations between layers, e.g reuse, patterns, or code generation. That would put quality on a sound basis were it not for externalities.
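As a minimal sketch of the gating logic implied by the layering above (not part of any method or toolset), each check below is a placeholder standing in for real consistency reviews or tests, and assessment stops at the first layer whose prerequisite fails.

    # Sketch of layered quality gates: each gate assumes the one before it
    # has passed. Check bodies are placeholders (always True) standing in
    # for real consistency reviews, unit tests, or acceptance tests.
    GATES = [
        ("external consistency (business requirements)", lambda: True),
        ("internal consistency (functional requirements)", lambda: True),
        ("code checking & unit tests", lambda: True),
        ("integration & acceptance tests", lambda: True),
    ]

    def run_gates():
        for name, check in GATES:
            if not check():
                # Stop here: upper layers would otherwise be assessed on an
                # unchecked foundation.
                print(f"stopped: {name} failed")
                return False
            print(f"passed: {name}")
        return True

    run_gates()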

Quality Assurance & Risk Management

As already noted, QA has to take into account uncertainties and risks both external (business or technical environments) and internal (development processes). Assuming quality assurance has to include risk assessment, policies should be driven by risk acceptance levels:

  • No risk: quality assurance can be designed so as to eliminate some uncertainties (e.g reuse and code generation).
  • No risk taken: while business and technology options are not sure bets, some must be carried out regardless of what happens in the environment (e.g an unexpected regulatory change or a delay in a critical technology). In that case QA must provide fallback solutions.
  • Managed risks: some defects or delays can be priced and weighted by likelihood. In that case QA should monitor the risks and balance their cost (e.g resources consumption, late delivery) against the cost of preventive (e.g more systematic checks on consistency, additional staff) or corrective (e.g tests or maintenance) measures.

Quality management should be set at the nexus between risks management and quality assurance.

That will put quality management at the nexus between regulatory compliance, risks management, and quality assurance.
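For the managed-risks case, a back-of-the-envelope weighting could compare the expected cost of a slipped defect with the cost of extra preventive measures; all figures below are invented for the sake of the sketch.

    # Back-of-the-envelope weighting for the "managed risks" policy:
    # compare the expected cost of letting a defect slip against the cost
    # of additional preventive measures. All figures are invented.
    likelihood_of_defect = 0.15      # estimated probability over the release
    cost_if_defect = 80_000          # late delivery, corrective maintenance, ...
    cost_of_prevention = 9_000       # extra consistency checks, additional staff
    residual_likelihood = 0.03       # estimated probability after prevention

    expected_without = likelihood_of_defect * cost_if_defect
    expected_with = residual_likelihood * cost_if_defect + cost_of_prevention

    print(f"expected cost without prevention: {expected_without:.0f}")
    print(f"expected cost with prevention:    {expected_with:.0f}")
    print("prevention pays off" if expected_with < expected_without
          else "accept and monitor the risk")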


Data Mining & Requirements Analysis

Preamble

Data mining explores business opportunities and competitive advantage; requirements analysis considers supporting applications. Both use models: the former’s are predictive and ephemeral, the latter’s descriptive (or prescriptive) and perennial.

Data mining: sorting business wheat from world chaff (Andreas Gursky)

As the generalization of digitized environments calls for more integration of business and software engineering processes, understanding the relationship between data mining and requirements analysis could significantly improve process maturity and agility.

Data vs Requirements Analysis

Nowadays the success of a wide range of enterprises critically depends on two achievements:

  1. Mapping business models to changing environments by sorting through facts, capturing the relevant data, and processing the whole into meaningful and up to date information. That can be achieved through analysis models mapping business expectations to supporting systems.
  2. Putting that information into effective use through business processes and supporting systems. That is done through systems architecture and design models meant to prescribe how to build software artifacts.

Those challenges are converging: under the pressure of market forces and technological advances, most of the traditional fences between business channels and IT systems are crumbling, putting the focus on the functional integration between data mining and production systems. That’s where predictive models can help by anchoring descriptive models to moving markets and by cross-feeding analysis and operations. How that can be achieved has been the bread and butter of good corporate governance for some time, but there has been less interest in the third branch, namely how data analysis (predictive models) could “inform” business requirements (descriptive models).

From Data to Information

Facts are not given but must be captured through a symbolic description of actual observations. That entails some observer set on the task, using a mix of conceptual and technical apparatus. Data mining and requirements analysis are practical realizations of that process:

  • Data mining relies on analytic tools to extract revealing information that could be used to chart opportunities along business models.
  • Requirements analysis relies on business processes and users’ practice to extract symbolic descriptions that will be used to build models of supporting applications.

While both walk the path from data to information, their objectives are different: the former’s is to improve business decisions by making sense of actual observations; the latter’s is to build system surrogates from the symbolic descriptions of actual business objects and activities.

Anchors & Structures: Plasticity of Business Entities

Perhaps paradoxically, business agility calls for terra firma because nimble trades must be rooted in corporate identity and business continuity. As a consequence, the first step of requirements analysis should be to associate individual business objects or activities with stable and consistent identification mechanisms, and to group them with regard to those mechanisms:

  • External entities with natural (person) or designed identity (car).
  • Symbolic entities for roles (customer) or commitments (maintenance contract).
  • Actual activities (promotion campaign) and events (sale) or business logic (promotion).

Anchors

Conversely, as the aim of data analysis is to explore every business angle, individual observations are supposed to be moved across groups; yet, since the units identified by data analysis will have to be aligned with the ones described by requirements analysis, moves must also keep track of identities. That dilemma between continuity of identified structures on one side, and plasticity of functional aspects on the other, can be illustrated by banks which, in response to marketing requirements, had to shift from account-based (internal identification) to customer-based (external identification) systems.

It’s easier to market insurance from customer centered systems (right) than from account centered ones (left)

That challenge can be overcome by linking the identification of symbolic entities to external anchors.
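A minimal sketch of that anchoring, assuming hypothetical Person and Customer classes: the symbolic entity keeps its own designed identity for business continuity, references its external anchor, and leaves profile features free to be regrouped by data analysis.

    # Sketch of anchoring a symbolic entity (customer role) to an external
    # anchor (person). Identities are fixed at creation; the profile can be
    # regrouped freely by data analysis without touching identities.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Person:                 # external entity, natural identity
        national_id: str

    @dataclass
    class Customer:               # symbolic entity, designed identity
        customer_id: str
        anchor: Person            # link to the external anchor
        profile: dict = field(default_factory=dict)  # mutable, analysis-driven

    alice = Person(national_id="FR-1970-1234")
    cust = Customer(customer_id="C-0042", anchor=alice)

    # Data analysis may move the customer across opportunity profiles at will:
    cust.profile["segment"] = "insurance-prospect"
    # ...while identities stay put, preserving business continuity:
    assert cust.customer_id == "C-0042" and cust.anchor.national_id == "FR-1970-1234"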

Profiles & Features: Versatility of Business Opportunities

As noted above, requirements and data analysis are set on the same road but driven by different forces: the former tries to group individuals with regard to identification mechanisms before fleshing them out with relevant features; the latter tries to group individuals with given identities according to features and opportunity profiles. Yet, what could appear as collision courses may become a meeting of minds if both courses are charted with regard to variants analysis.

From the requirements perspective the primary concern is to distinguish between structural and functional variants:

  • Structural variants are bound to identities, i.e set up-front for the respective life-cycle of individual business objects or transactions. As a consequence they cannot be changed without undermining business continuity. Moreover, being part and parcel of descriptors (e.g types and use cases), their change will affect engineering processes.
  • Functional variants may vary during the respective life-cycle of individual business objects or transactions. As a consequence they can be changed without undermining business continuity, and changes in descriptors (e.g partitions and scenarios) can be managed without affecting engineering processes.

From the data mining perspective the objective is to improve the benefits of information systems for decision-making processes:

  • Static: how to classify individuals so as to reduce the uncertainty of predictions.
  • Dynamic: how to classify business options so as to reduce the uncertainty of decisions.

Since those objectives are set for individuals, constraints on continuity and consistency can be dealt with independently of the description of symbolic surrogates.

Identified individuals with profiles for customers (a), their behaviors (b), and promotional gestures (c)

It ensues that perspectives can be adjusted by factoring out the constraints of continuity and consistency for business objects (e.g cars), agents (e.g customer) and processes (e.g repair). Profiles for agents (a), behaviors (b), and business options (c) could then be freely explored and tailored with regard to changes in business environment and objectives.

Applying Data Analysis to Requirements

Not surprisingly, data analysis techniques can be used to adjust perspectives. For that purpose a sample of individuals (business objects and operations) representing the population targeted by requirements would have to be submitted to basic mining routines. Borrowing a catalog from F. Provost & T. Fawcett:

  1. Classification: estimates the probability for each individual (objects or operations) to belong to a set of classes; can be used to assess the closeness of the variants (respectively power-types or execution paths) identified by requirements analysis.
  2. Regression: reverse classification; estimates how much of individual feature values can be explained by the proposed classifications.
  3. Similarity: a shallow version of classification; can be used to assess the distance between variants and consolidate the proposed classifications.
  4. Clustering: a deep version of classification; can be used to distinguish between shallow and natural classifications.
  5. Co-occurrence: deals with behavioral variants; can be used to distinguish between functional and structural classifications.
  6. Profiling: reverse of co-occurrence; can be used to consolidate functional and structural classifications.
  7. Link prediction: can be used to define relationships.
  8. Data reduction: eliminates redundant individuals; can be used to consolidate requirements and refine test scenarios.
  9. Causal modeling: brings together business logic (events and rules) and users’ decisions; should provide the backbone of test scenarios.

Besides the direct benefits for requirements, such procedures may help to bridge the gap between data and requirements analysis and significantly improve processes’ capability and maturity level.
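As an illustration of the similarity and clustering routines (items 3 and 4), hypothetical requirement variants could be encoded as feature vectors and compared; the variant names and features below are invented, and plain NumPy is used instead of a dedicated mining library purely for brevity.

    # Sketch: assessing the closeness of requirement variants (catalog items
    # 3 and 4) by encoding each variant as a feature vector and computing
    # pairwise cosine similarities. Variants and features are invented.
    import numpy as np

    variants = {
        #                     [needs approval, cross-border, recurring, high value]
        "standard-order":     [0, 0, 1, 0],
        "bulk-order":         [1, 0, 1, 1],
        "export-order":       [1, 1, 0, 1],
        "subscription-order": [0, 0, 1, 0],
    }

    names = list(variants)
    X = np.array([variants[n] for n in names], dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    similarity = X @ X.T                      # cosine similarity matrix

    # Variants whose similarity exceeds the threshold are candidates for a
    # single structural descriptor; the others may warrant distinct ones.
    threshold = 0.95
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if similarity[i, j] >= threshold:
                print(f"{names[i]} ~ {names[j]} ({similarity[i, j]:.2f})")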

Business Objectives & Enterprise Architecture Capabilities

Data mining being first and foremost about competitive edge, it relies on a timely and effective coupling between enterprise capabilities and business opportunities. But the dilemma between continuity and plasticity described above for business objects and processes reappears at enterprise level: how to reconcile architecture, by nature perennial, with the agility needed to make the best of changing and competitive environments?

As an architectural big bang is arguably a last-resort option, answers to that question must be progressive and local: if changes are to be swift and pertinent they must be both circumscribed and leveraged to the relevant parts of architecture. Taking an (amended) leaf from the Zachman framework, its sixth column (“Why”) could be reset as a line for business and operational objectives that would cross the original five columns instead of the architecture layers. Using a pentagonal representation of enterprise architecture, that line would be set as circling the outer range.

Enterprise Architecture and the loci of change

It is worth noting that setting objectives on a line crossing the columns of capabilities, instead of a column crossing the lines of layers, means that objectives are set at enterprise level and their cascading impact traced and managed through layers.

Conceptual Models & Business Contexts

But even that updated framework doesn’t take into account the fundamental changes in business environments. Once secure behind organizational and technical fences, enterprises must now navigate through open digitized business environments and markets. For business processes it means a seamless integration with supporting applications; for corporate governance it means keeping track of heterogeneous and changing business contexts and concerns while assessing the capability of organizations and systems to cope, adjust, and improve.

As long as environments were a hotchpotch of actual and symbolic artifacts the pros and cons of integration could be balanced. But the generalization of digital flows and transactions has upended the balance: there is no more room or time for latency and enterprises must bring all symbolic representations (business, organization, and systems) under a common conceptual roof:

Conceptual models as bridges between business processes and systems.

A canonical approach would be to introduce a conceptual indexing scheme open to extensions but with its footprint defined by business processes and systems functionalities. That would ensure a better integration of processes and supporting engineering, but would do nothing for the modeling gap between enterprise architecture and external contexts. Bridging that gap could be achieved with ontologies.
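A minimal sketch of such an indexing scheme, with invented concepts, processes, and functionalities: each concept is indexed to the processes and system functionalities that rely on it, so that the footprint of a change can be looked up in one place.

    # Sketch of a conceptual indexing scheme: concepts are indexed to the
    # business processes and system functionalities that use them. Entries
    # are invented; a real scheme would rest on governed vocabularies.
    concept_index = {
        "Customer": {
            "processes": ["customer-onboarding", "claims-handling"],
            "functionalities": ["identity-check", "customer-portal"],
        },
        "Premium": {
            "processes": ["premium-assessment"],
            "functionalities": ["rating-engine", "billing"],
        },
    }

    def footprint(concept):
        """Return every process and functionality indexed under a concept."""
        entry = concept_index.get(concept, {})
        return entry.get("processes", []) + entry.get("functionalities", [])

    print(footprint("Premium"))   # ['premium-assessment', 'rating-engine', 'billing']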

Conceptual Loop: Ontologies & Business Intelligence

As far as data mining is concerned, three kinds of operations are to be considered:

  • Data understanding gives form and semantics to raw material.
  • Business understanding charts business contexts and concerns in terms of objects and processes descriptions.
  • Modeling consolidates data and business understanding into descriptive, predictive, and operational models.

The aim of data mining is to refine raw data into meaningful information

While traditional approaches fall short when tasked with iterative modeling of unstructured data, ontologies may fare better because their explicit aim is only to describe what could exist in a domain of discourse:

  1. They are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
  2. They are driven by cognitive (i.e non-empirical) purposes, namely the validity and consistency of symbolic representations.
  3. They are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.

With regard to models, only the second point sets ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes. It ensues that ontologies can be understood as conceptual (aka canonical) models, used as such for business analysis and extended with purposes for systems analysis and design.

In addition to the integration with enterprise architectures, ontologies benefits for business intelligence would be twofold.

On one side, and whatever their use, ontologies could be aligned with the nature of contexts and their impact on business and enterprise governance, e.g:

  • Institutional: mandatory semantics sanctioned by regulatory authority, steady, changes subject to established procedures.
  • Professional: agreed upon semantics between parties, steady, changes subject to established procedures.
  • Corporate: enterprise defined semantics, changes subject to internal decision-making.
  • Social: pragmatic semantics, no sanctioning authority, volatile, continuous and informal changes.
  • Personal: customary semantics defined by named individuals.

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

On the other side, ontologies could be defined according to the nature of the targeted items, namely terms, documents, symbolic representations, or actual objects and phenomena. That would outline four basic concerns that may or may not be combined:

  • Thesaurus: ontologies covering terms and concepts.
  • Document Management: ontologies covering documents with regard to topics.
  • Organization and Business: ontologies pertaining to enterprise organization, objects and activities.
  • Engineering: ontologies pertaining to the symbolic representation of products and services.

Ontologies: Purposes & Targets
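Combining the two classifications above, an ontology could be profiled along both dimensions, the authority behind its semantics and the kind of items it targets; the sketch below simply restates the lists as enumerations, and the sample entries are invented.

    # Sketch: profiling ontologies along the two dimensions discussed above,
    # the authority governing their semantics (contexts) and the kind of
    # items they target. Sample entries are invented.
    from enum import Enum

    class Authority(Enum):
        INSTITUTIONAL = "regulatory, mandatory semantics"
        PROFESSIONAL = "agreed between parties"
        CORPORATE = "enterprise defined"
        SOCIAL = "pragmatic, no authority"
        PERSONAL = "customary, individual"

    class Target(Enum):
        THESAURUS = "terms and concepts"
        DOCUMENTS = "documents and topics"
        ORGANIZATION = "enterprise objects and activities"
        ENGINEERING = "symbolic representations of products"

    ontologies = [
        ("regulatory-terms", Authority.INSTITUTIONAL, Target.THESAURUS),
        ("claims-domain", Authority.CORPORATE, Target.ORGANIZATION),
        ("model-repository", Authority.CORPORATE, Target.ENGINEERING),
    ]

    for name, authority, target in ontologies:
        print(f"{name}: {authority.name.lower()} / {target.name.lower()}")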

From a broader perspective, ontologies could be influential in spanning the gap between explicit and implicit knowledge. Within a few years, practically unlimited access to raw data and the exponential growth in computing power have opened the door to massive sources of unexplored knowledge, which is paradoxically both directly relevant yet devoid of immediate meaning:

  • Relevance: mined raw data is supposed to reflect the geology and dynamics of targeted markets.
  • Meaning: the main value of that knowledge rests on its implicit nature; applying existing semantics would add little to existing knowledge.

Assuming that deep learning can transmute raw base metals into knowledge gold, ontologies would be decisive in framing the understanding, assessment, and improvement of the processes.

Operational Loop: Business Intelligence & Decision-making

Once carried out separately and periodically, decision-making is now to be carried out iteratively at operational, tactical, and strategic levels; while each level is to be set along its own time-frames, all are to rely on data mining, with cycles following the same pattern (a minimal sketch follows the list):

  1. Observation: understanding of changes in business opportunities.
  2. Orientation: assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations.
  3. Decision: weighting of options with regard to enterprise capabilities and broader objectives.
  4. Action: carrying out of decisions within the relevant time-frame.
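A minimal sketch of one such cycle, with placeholder functions for each step; the returned values, the confidence threshold, and the time-frame label are all invented.

    # Sketch of one observation-orientation-decision-action cycle; each step
    # is a placeholder returning canned values, and the time-frame label is
    # only illustrative.
    def observe():            # changes in business opportunities
        return {"signal": "demand shift in segment A"}

    def orient(observation):  # reliability & shelf-life of the information
        return {"confidence": 0.7, "shelf_life_days": 30, **observation}

    def decide(assessment):   # weigh options against capabilities & objectives
        return "reallocate campaign budget" if assessment["confidence"] > 0.5 else "wait"

    def act(decision, time_frame="tactical"):
        print(f"[{time_frame}] carrying out: {decision}")

    act(decide(orient(observe())))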

Given the new business playground, decision-making processes have to weave together material and digitized flows, actual contexts (aka territories) and symbolic descriptions (maps), and overlapping time-frames (operational, tactical, strategic). That operational loop could then be coupled with the broader one of business intelligence:

Integration of operational analytics, business intelligence, and decision-making.

Selected Readings