Engineering is all about purposeful and effective design; art will have nothing of that. Yet, since software is made of symbolic constructs, creativity is undoubtedly a core constituent of its design, and beauty a sure sign of its quality.
Engineering is about the making of artifacts, software engineering about the making of symbolic ones. Assuming (for the sake of the argument…) a complete and consistent set of requirements with a given lifespan, engineering processes mix reuse of existing artifacts or procedures with the creation of new ones, making room for innovation and creativity.
With regard to software engineering, the cases and economics of reuse can be judged according to reasoned arguments; but what about innovation?
Innovation vs Invention
As far as engineering is concerned, innovation is bounded by the achievements of science and technology:
Science builds symbolic representations (aka models) of the world as well as the artifacts needed to check their validity.
Technology considers how to build artifacts, possibly (but not necessarily) based upon scientific models.
Engineering considers how to efficiently build artifacts under given economic conditions.
Engineering of artifacts usually encompasses two basic phases, design and production, the outcome of the former being a set of descriptions, the outcome of the latter batches of products. In that context, innovation may apply to new descriptions (e.g the design of a new weapon or a new recipe), or new modus operandi (e.g the sequencing of tasks).
As illustrated by cooking recipes, designs and modus operandi (MO) are not necessarily supported by symbolic descriptions like blueprints or flowcharts; as it happens, that difference may mark the boundary between craft and engineering, the former based on individual skills and personal transmission, the latter a collective endeavor supported by disembodied documentation.
That distinction may also be used to distinguish between innovation and invention, the former possibly (but not necessarily) a matter of craft passed on through imitation, the latter a collective achievement necessarily supported by symbolic descriptions.
Circle, Wheel, Loops
Invention is not easily defined, as demonstrated by the different ways intellectual property rights, and especially patents, are managed across countries. To begin with, two principles are widely agreed upon:
Patents are formal descriptions and therefore belong to the realm of symbolic representations.
Patents are not rights to use what is described but rights to exclude others from using those descriptions.
Then, policies differ about patentable subject matter, especially with regard to tangible artifacts (a) as opposed to intangibles like business (b) and computation (c) methods. In other words, consensus breaks down when symbolic descriptions without actual counterparts are considered, which is the case for software.
Even if engineering is circumscribed to the making of artifacts, and business methods are excluded, the problem remains for computation methods when incorporated as software artifacts. In that case symbolic descriptions and actual outcomes are merged into source programs and the distinction between design and production is rubbed out. As a corollary, invention, which was previously associated with the design of actual artifacts, is now deprived of actual context and can only be assessed within its symbolic realm.
But dematerialized inventions are not patent matter, as can be seen with loops: on one hand, with trustworthy predefined language constructs widely available, it doesn’t make sense to “invent” new ways to program iterations; on the other hand patenting those constructs would force designers into squaring circles. More generally, patenting business or computation iterations as inventions would cast a shadow on an uncountable set of activities.
The answer to this conundrum is the distinction between invention and problem solving, the former to be dealt with by engineering, the latter by design.
Engineering & Design
As far as software engineering is concerned, creativity may target two kinds of problems: enterprise organization and applications development.
Given business processes set by objectives and business rules, the problem is to describe how they will be supported by systems functionalities. Since solutions include the definition of features and use of actual devices they can be seen as patentable subject matter.
That’s not the case when systems functionalities are given and the problem is to design the system components that will support them. Solutions will only deal with the design of software components, without affecting external features or the use of actual ones.
In both cases, creativity can play in two directions: finding specific solutions to specific problems within the given architectural context; or improving functional or technical architectures. While specific solutions, supposedly not reusable, cannot be considered as inventions, that’s not the case for innovations targeting organizations or systems architectures. Yet, since architectural constructs can also be regarded as solutions to problems, innovations might therefore target architecture capabilities as well as their use.
At enterprise level problems are set by business contexts and objectives, and solutions based upon enterprise capabilities: who, what, how, where, when. Innovation can only be assessed in terms of business value.
At system level problems are set by organization and solutions supported by systems functionalities: access, persistency, computation, communication, synchronization, etc. New applications can be supported by existing functional architecture or entail innovations which should be assessed in terms of functional metrics and Quality of Service.
At component level problems are set by functional requirements and solutions implemented by components design: server, boundaries, storage, middleware, embedded units, etc. New developments can be realized using existing components, or entail changes in technical architecture which should be assessed in terms of total cost of acquisition and ownership.
At each level, creativity must remain a balancing act between architectural constraints and innovative solutions.
As long as information was just data files and systems just computers, the respective roles of enterprise and IT architectures could be overlooked; but that has become a hazardous neglect with distributed systems pervading every corner of enterprises, monitoring every motion, and sending axons and dendrites all around to colonize environments.
Yet, the overlapping of enterprise and systems footprints doesn’t mean they should be merged; as a matter of fact, quite the opposite. When the divide between business and technology concerns was clearly set, casual governance was of no consequence; now that turfs are more and more entwined, dedicated policies are required lest decisions be blurred and entropy escalate as a consequence. The need for better focus is best illustrated by the sundry meanings given to “system”, from computers running software to enterprises running businesses, and even schools of thought.
Concerns in Perspectives
As far as enterprises are concerned, systems combine human beings, devices, and software components.
From a functional perspective, their capabilities are best defined by their interactions with their environment as well as between their constituents:
Users are supposed to be actual agents granted with organizational status and responsibilities, and possessing symbolic communication capabilities.
Software components and actual devices cannot be granted with organizational status or responsibilities; the former come with symbolic communication capabilities, the latter without.
Assuming that interactions are governed by information, the objective is to understand the contribution of each type of component with regard to information processing and decision-making.
From an engineering perspective, the building of systems is all too often reduced to the development of their software constituents. As a consequence, the complexity of the forest is masked by the singularity of the trees:
The business value of applications is assessed locally instead of being driven by enterprise objectives and organizational constructs.
System models are confused with the programs used to produce software components.
Since engineering agendas are supposed to support business objectives, their decision-making processes must be aligned yet managed independently. That reasoning also applies to services management, whose role is to adjust resources and software releases to operational needs.
Enterprise architectures can then be described as a cross between architecture assets (business objects, organization, technical architecture) on one hand, and core processes for business, engineering, and services management on the other.
Information is to provide the glue between architecture assets and supported processes.
Information and Architecture Levels
As understood by Cybernetics (see Stafford Beer, “Diagnosing the System for Organizations”), enterprises are viable systems whose success depends on their capacity to countermand entropy, i.e the progressive downgrading of the information used to govern interactions between systems and their environment. And that puts knowledge management at the core of systems capabilities.
Knowledge is best defined as information put to use, with information obtained by adding references and sense to data. That blueprint is supposed to be repeated at all architecture levels:
At enterprise level facts pertaining to the conduct of business are captured from environments before being organized into information meant to support enterprise governance.
System level deals with symbolic descriptions of functionalities (information). At this level data is irrelevant, the objective being the consolidation and reuse of shared representations and patterns (knowledge).
The technical level is in charge of operational governance: software components, platform capabilities, technical resources, communication mechanisms, etc. On one hand contexts are to be monitored and data translated into information; on the other hand operational decisions have to be made (knowledge) and executed, which means information translated back into data.
If architecture levels can be characterized by processing capabilities, their engineering must be aligned to the corresponding objectives and time frames.
System Engineering and Separation of Concerns
As demonstrated time and again by blame games around project failures, enterprise and technical concerns are poor engineering bedfellows, and with good reason: different contexts, concerns, skills, and time scales. Faced with the challenge of bringing enterprise and development perspectives under a single governance, engineering approaches generally follow one of two basic options:
Phased projects (P) give precedence to software development, with requirements supposed to be set at inception and acceptance performed at completion.
Agile projects (A) give precedence to business requirements, with development iterations combining specifications, programming, and acceptance.
While each approach has its merits, agile for complex but well circumscribed projects, phased for large projects with external organizational dependencies, the difficulties each may encounter point to the crux of the matter, namely the separation of concerns between business goals and information technology, bypassed by agile approaches and misplaced by phased ones:
Agile development models are driven by users’ value and based on the implicit assumption that business and technology concerns can be dealt with continuously and simultaneously. That may be difficult when external dependencies cannot be avoided and shared ownership cannot be guaranteed.
Phased development models take a mirror position by assuming that business concerns can be settled upfront, which is clearly a very hazardous policy.
Those pitfalls may be overcome if engineering processes take into account the distinction between knowledge management on one hand, software development on the other hand.
Knowledge management (KM) encompasses all information pertaining to the conduct of business: environment, markets, objectives, organization, and projects. With regard to engineering projects, its role is to define and consolidate the descriptions of symbolic representations to be supported by information systems.
Software development (SD) starts with symbolic descriptions and proceeds with the definition, building and acceptance of the corresponding software artifacts.
Service management (SM) provides the bridge between engineering and operational processes.
Capture of information from data and legacy code can also be achieved respectively by data mining and reverse engineering.
Depending on organizational or technical dependencies, knowledge management and software development will be carried out within integrated development cycles (agile processes), or will have to be phased in order to consolidate the different concerns.
That engineering distinction neatly coincides with the functional divide of architectures: knowledge management supporting enterprise architecture, software development supporting system architecture.
Architectures and Engineering Processes
Business processes are governed by collective representations mixing goals, models and rules, not necessarily formally defined, and subject to change with opportunities. Engineering processes for their part have to be materialized through work units, models, and products, all of them explicitly defined, with limited room for change, and set along constraining schedules. Given that systems’ fate hangs on the hinge between business and engineering concerns, the corresponding perspectives must be properly aligned.
That can be achieved if workshops are associated with architecture levels, and work units defined according to model layers and development flows:
Knowledge management takes charge of symbolic descriptions pertaining to enterprise concerns. Its objective is to map business and operational requirements with the functionalities of supporting systems. Expressed in MDA parlance, the former would be described by computation independent models (CIMs), the latter by platform independent models (PIMs).
Software development deals with software artifacts. That encompasses the consolidation of symbolic descriptions into functional architectures, the design of software components according to platform specificities (PSMs), and the production of code according to deployment targets (DPMs).
IT Service Management is the counterpart of knowledge management for actual operations and resources. Its objective is to synchronize business and development time-frames and align operational requirements with releases and resources.
That congruence between architecture divides (enterprise, systems, technology), model layers (CIMs, PIMs, PSMs, DPMs), and engineering concerns (facts or legacy, information, knowledge) provides a reasoned and comprehensive framework for enterprise architectures.
Architecture Capabilities & Separation of Concerns
Architectures describe stocks of shared assets, processes describe flows of changes. Given a hierarchized description of architectures, the objective is to ensure the traceability of concerns and decisions across levels and processes.
Who: enterprise roles, system users, platform entry points.
What: business objects, symbolic representations, objects implementation.
How: business logic, system applications, software components.
When: processes synchronization, communication architecture, communication mechanisms.
Where: business sites, systems locations, platform resources.
The rationale of objectives and decisions (the “Why” of the Zachman framework) is expressed by dependency links according to the nature of primary factors:
Deontic dependencies are set by external factors, e.g regulatory context, communication technologies, or legacy systems.
Alethic (aka modal) dependencies are set by internal policies, based on the assessment of options regarding objectives as well as solutions.
This distinction between dependencies is critical for decision-making and consequently for enterprise governance. Set within time-frames and decorated with time related features, those dependencies can then be consolidated into differentiated strategies for business, engineering and operational processes.
Given that dependencies are usually interwoven, governance must be aligned with the footprints of associated decisions:
Assets: shared decisions affecting different business processes (organization), applications (services), or platforms (purchased software packages).
Users’ Value: streamlined decisions governed by well identified business units providing for straight dependencies from enterprise (business requirements), to systems (functional requirements) and platforms (users’ entry points).
Non-functional: shared decisions about scale and performance affecting users’ experience (organization), engineering (technical requirements), or resources (operational requirements).
That classification can be used to choose a development model:
Agile approaches should be the option of choice for requirements neatly anchored to users’ value.
Phased approaches should be preferred for projects targeting shared assets across architecture levels. When shared assets are within the same level epic-like variants of agile may also be considered.
Architectures, Processes, and Governance
While there is some consensus about the scope and concerns of Enterprise Architecture as a discipline, some debate remains about the relationship between enterprise and IT governance.
Beyond turf quarrels, arguments are essentially rooted in the distinction between information and supporting systems or, more generally, between organization and IT.
To some extent, those arguments can be ironed out if governance were set with regard to scope, actual or symbolic:
Actual scope deals with current or planned business objects, assets, and processes.
Symbolic scope deals with the design of the corresponding software components.
That distinction matches the divide between enterprise and systems architectures: one set of models deals with enterprise objectives, assets, and organization, the other one deals with system components.
With regard to processes, governance should distinguish between business and engineering:
Business processes are clearly designed at enterprise level, both for their organization and the symbolic description of system representations.
Software engineering ones are under the responsibility of IT governance, from system and software architecture to release and deployment.
While governance could be shared, there is no reason to assume immediate and continuous alignment of perspectives; that can only be achieved through an architecture perspective:
Actual architectures encompass business organization (locations, agents, devices) and systems deployments.
Symbolic architectures include systems functionalities and software development.
And, as a mirror achievement, processes take charge of transitions between actual and symbolic architectures:
Symbolic architectures provide the mediation between business requirements and software development (As).
Physical architectures do the same between software development and services management (Ap).
Business processes take responsibility for the mapping of enterprise architecture into symbolic representations (Pb).
Engineering processes do the same for the alignment of software and deployment architectures (Pe).
Enterprise and IT governance can then be defined in terms of Knowledge management, with engineering and business processes gathering data from their respective realms, sharing the information through architectures, and putting it to use according to their respective concerns.
As illustrated by the Symbolic Systems Program (SSP) at Stanford University, advances in computing and communication technologies bring information and knowledge systems under a single functional roof, namely the processing of symbolic representations.
Within that understanding one will expect Knowledge Management to shadow systems architectures and concerns: business contexts and objectives, enterprise organization and operations, systems functionalities and technologies. On the other hand, knowledge being by nature a shared resource of reusable assets, its organization should support the needs of its different users independently of the origin and nature of information. Knowledge Management should therefore bind knowledge of architectures with architecture of knowledge.
In their pivotal article, Davis, Shrobe, and Szolovits set out five principles for knowledge representation:
Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
Fragmentary theory of intelligent reasoning: a KR is a partial model of what things can do or of what can be done with them.
Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
Medium for human expression: one KR prerequisite is to improve communication between specific domain experts on one hand and generic knowledge managers on the other.
That puts information systems as a special case of knowledge ones, as they fulfill the five principles, yet with a functional qualification:
The only difference is about coupling: contrary to knowledge systems, information and control ones play a role in their context, and operations on surrogates are not neutral.
Knowledge constructs are empty boxes that must be properly filled with facts. But facts are not given; they must be observed, which necessarily entails some observer, set on task if not with vested interests, and some apparatus, natural or made on purpose. And if they are to be recorded, even “pure” facts observed through the naked eyes of innocent children will have to be translated into some symbolic representation. Taking wind as an example, wind socks support immediate observation of facts, free of any symbolic meaning. In order to make sense of their behaviors, vanes and anemometers are necessary, respectively for azimuth and speed; but that also requires symbolic frameworks for directions and metrics. Finally, knowledge about the risks of strong winds can be added when such risks must be considered.
As far as enterprises are concerned, knowledge boxes are to be filled with facts about their business context and processes, organization and applications, and technical platforms. Some of them will be produced internally, others obtained from external sources, but all should be managed independently of specific purposes. Whatever their nature (business, organization or systems), information produced by the enterprises themselves is, from inception, ready to use, i.e organized around identified objects or processes, with defined structures and semantics. That’s not necessarily the case with data reflecting external contexts (markets, regulations, technology, etc) which must be mapped to enterprise concerns and objectives before being of any use. That translation of data into information may be done immediately by mapping data semantics to identified objects and processes; it may also be delayed, with rough data managed as such until being used at a later stage to build information.
From Data to Information
Information is meaningful, data is not. Even “facts” are not manna from heaven but have to be shaped from phenomena into data and then information, as epitomized by binary, fragmented, or “big” data.
Binary data are direct recordings of physical phenomena, e.g sounds or images; even when indexed with key words they remain useless until associated, as non-symbolic features, to identified objects or activities.
Contrary to binary data, fragmented data comes in symbolic guise, but as floating nuggets with sub-level granularity; and like their binary cousins, those fine-grained descriptions are meaningless until attached to identified objects or activities.
“Big” data is usually understood in terms of scalability, as it refers to lumps too large to be processed individually. It can also be defined as a generalization of fragmented data, with identified targets regrouped into more meaningful aggregates, moving the targeted granularity up the scale to some “overwhelming” level.
Since knowledge can only be built from symbolic descriptions, data must be first translated into information made of identified and structured units with associated semantics. Faced with “rough” (aka unprocessed) data, knowledge managers can choose between two policies: information can be “mined” from data using statistical means, or the information stage simply bypassed and data directly used (aka interpreted) by “knowledgeable” agents according to their context and concerns.
As a matter of fact, both policies rely on knowledgeable agents, the question being who the “miners” are and what they should know. Theoretically, miners could be fully automated tools able to extract patterns of relevant information from rough data without any prior information; practically, such tools will have to be fed with some prior “intelligence” regarding what should be looked for, e.g samples for neuronal networks, or variables for statistical regression. Hence the need for some kind of formats, blueprints, or templates that will help to frame rough data into information.
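As an illustration of that prior “intelligence”, here is a minimal Python sketch (all names and fields are hypothetical) in which a template frames rough records into identified information units; an actual mining pipeline would substitute statistical or learning models for this hard-coded mapping.

```python
# Minimal sketch: framing rough data into information units.
# The template is the "prior intelligence": it names the fields to look for,
# their types, and the business object they are anchored to.
from dataclasses import dataclass
from typing import Any

TEMPLATE = {
    "object": "Customer",                        # anchor: identified business object
    "fields": {"id": int, "name": str, "spend": float},
}

@dataclass
class InformationUnit:
    object_type: str
    identity: Any
    features: dict

def frame(rough_record: dict, template: dict) -> InformationUnit | None:
    """Return an information unit, or None if the record cannot be framed."""
    fields = {}
    for name, cast in template["fields"].items():
        if name not in rough_record:
            return None                          # missing data: no information
        fields[name] = cast(rough_record[name])
    return InformationUnit(template["object"], fields["id"], fields)

print(frame({"id": "42", "name": "Acme", "spend": "125.5"}, TEMPLATE))
```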
Knowledge must be built from accurate and up-to-date information regarding external and internal state of affairs, and for that purpose information items must be managed according to their source, nature, life-cycle, and relevancy:
Source: Government and administrations, NGO, corporate media, social media, enterprises, systems, etc.
Nature: events, decisions, data, opinions, assessments, etc.
Type of anchor: individual, institution, time, space, etc.
Life-cycle: instant, time-related, final.
Relevancy: traceability with regards to business objectives, business operations, organization and systems management.
On that basis, knowledge management will have to map knowledge to its information footprint in terms of reliability (source, accuracy, consistency, obsolescence, etc) and risks.
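A minimal sketch, assuming a simple Python representation with invented field values, of how information items could carry that management metadata so that reliability and obsolescence can be assessed before the items are promoted to knowledge:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InformationItem:
    source: str        # e.g. "administration", "social media", "system"
    nature: str        # e.g. "event", "decision", "opinion"
    anchor: str        # e.g. "individual", "institution", "time", "space"
    life_cycle: str    # "instant", "time-related", or "final"
    relevancy: set     # business objectives, operations, or systems it traces to
    recorded: date

    def reliable(self, trusted_sources: set, horizon_days: int, today: date) -> bool:
        """Crude reliability check: trusted source and not obsolete."""
        fresh = self.life_cycle == "final" or (today - self.recorded).days <= horizon_days
        return self.source in trusted_sources and fresh

item = InformationItem("administration", "decision", "institution",
                       "time-related", {"compliance"}, date(2024, 1, 15))
print(item.reliable({"administration", "enterprise"}, 365, date(2024, 6, 1)))
```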
From Information to Knowledge
Information is meaningful, knowledge is also useful. As with information models, knowledge representations must first be anchored to persistency and execution units in order to support the consistency and continuity of surrogate identities (principle #1). Those anchors are to be assigned to domains managed by single organizational units in charge of ontological commitments, and enriched with structures, features, and associations (principle #2). Depending on their scope, structure or feature, semantics are to be managed respectively by persistent or application domains. Likewise, ontologies may target objects or aspects, the former being associated with structural sub-types, the latter with functional ones.

Differences between information models and knowledge representations appear with rules and constraints. While the objective of information and control systems is to manage business objects and activities, the purpose of knowledge systems is to manage symbolic contents independently of their actual counterparts (principle #3). Standard rules used in system modelling describe allowed operations on objects, activities and associated information; they can be expressed forward or backward (a sketch follows the list below):
Forward (aka push) rules are conditions on when and how operations are to be performed.
Backward (aka pull) rules are constraints on the consistency of symbolic representations or on the execution of operations.
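The distinction can be illustrated with a small, hypothetical order-processing sketch in Python: the push rule states when and how an operation is triggered, while the pull rule checks constraints on the symbolic representation before the operation is committed.

```python
# Push (forward) rule: when and how an operation is to be performed.
def on_order_received(order: dict) -> str:
    if order["amount"] > 10_000:           # condition
        return "route to manual approval"  # operation triggered forward
    return "auto-approve"

# Pull (backward) rule: constraint on the consistency of the symbolic
# representation, checked before the operation is committed.
def check_order(order: dict) -> None:
    assert order["amount"] >= 0, "amount must be non-negative"
    assert order["customer_id"] is not None, "order must reference a customer"

order = {"customer_id": 7, "amount": 12_500}
check_order(order)                  # backward: validate the representation
print(on_order_received(order))     # forward: decide what to do next
```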
Assuming a continuity between information and knowledge representations, the inflection point would be marked by the introduction of modalities used to qualify truth values, e.g according to temporal or fuzzy logic (see the sketch after this list):
Temporal extensions will put time stamps on truth values of information.
Fuzzy logic puts confidence levels on truth values of information.
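A minimal sketch of that inflection point (Python, illustrative names only): the bare truth value of an information item is decorated with a time stamp (temporal modality) and a confidence level (fuzzy modality).

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Statement:                 # information: bare truth value
    text: str
    true: bool

@dataclass
class QualifiedStatement:        # knowledge: truth value with modalities
    text: str
    true: bool
    as_of: datetime              # temporal modality: when the value holds
    confidence: float            # fuzzy modality: degree of belief in [0, 1]

info = Statement("customer #7 is solvent", True)
knowledge = QualifiedStatement("customer #7 is solvent", True,
                               datetime(2024, 6, 1), 0.8)
print(info, knowledge, sep="\n")
```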
That is where knowledge systems depart from information and control ones as they introduce a new theory of intelligent reasoning, one based upon the fluidity and volatility of knowledge.
Meanings are in the Hands of Beholders
Seen in a corporate context, knowledge can be understood as information framed by contexts and driven by purposes: how to run a business, how to develop applications, how to manage systems. Hence the dual perspective: on one hand information is governed by enterprise concerns, systems functionalities, and platforms technology; on the other hand knowledge is driven by business processes, systems engineering, and services management.
That provides a clear and comprehensive taxonomy of artifacts, to be used to build knowledge from lower layers of information and data:
Business analysts have to know about business domains and activities, organization and applications, and quality of service.
System engineers have to know about projects, systems functionalities and platform implementations.
System managers have to know about locations and operations, services, and platform deployments.
The dual perspective also points to the dynamics of knowledge, with information being pushed by its sources, and knowledge being pulled by its users.
A Time for Every Purpose
As understood by Cybernetics, enterprises are viable systems whose success depends on their capacity to countermand entropy, i.e the progressive downgrading of the information used to govern interactions both within the organization itself and with its environment. Compared to architecture knowledge, which is organized according to information contents, knowledge architecture is organized according to functional concerns and information lifespan, and its objective is to keep internal and external information in synch:
Planning of business objectives and requirements (internal) relative to markets evolutions and opportunities (external).
Assessment of organizational units and procedures (internal) in line with regulatory and contractual environments (external).
Monitoring of operations and projects (internal) together with sales and supply chains (external).
That puts meanings (that would be knowledge) in the hands of decision makers, respectively for corporate strategy, organization, and operations. Moreover, enterprises being living entities, lifespans and functional sustainability are meant to coalesce into consistent and homogenous layers:
Enterprise (aka business, aka strategic) time-scales are defined by environments, objectives, and investment decisions.
Organization (aka functional) time-scales are set by availability, versatility, and adaptability of resources.
Operational time-scales are determined by process features and constraints.
Such a congruence of time-scales, architectures and purposes into Shearing Layers is arguably a key success factor of Knowledge management.
Search and Stretch
As already noted, knowledge is driven by purposes, and purposes, not being confined to domains or preserves, are bound to stretch knowledge across business contexts and organizational boundaries. That can be achieved through search, logic, and classification.
Searches collect the information relevant to users concerns (1). That may satisfy all the knowledge needs, or provide a backbone for further extension.
Searches can be combined with ontologies (aka classifications) that put the same information under new lights (1b).
Truth-preserving operations using mathematics or formal languages can be applied to produce derived information (2).
Finally, new information with reduced confidence levels can be produced through statistical processing (3,4).
For instance, observed traffic at toll roads (1) is used for accounting purposes (2), to forecast traffic evolution (3), to analyze seasonal trends (1b) and simulate seasonal and variable tolls (4).
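The toll-road example can be sketched as follows (Python, with made-up figures): accounting is a truth-preserving aggregation, seasonal grouping is a classification reusing the same observations, and forecasting and simulation produce new information with reduced confidence levels.

```python
from statistics import mean

# (1) Observed traffic: vehicles counted per month at a toll plaza.
traffic = {"jan": 90_000, "feb": 85_000, "jul": 140_000, "aug": 150_000}
toll = 2.5  # euros per vehicle

# (2) Accounting: truth-preserving, confidence unchanged.
revenue = {month: count * toll for month, count in traffic.items()}

# (1b) Classification: the same observations put under a seasonal light.
seasons = {"winter": ["jan", "feb"], "summer": ["jul", "aug"]}
seasonal = {s: mean(traffic[m] for m in months) for s, months in seasons.items()}

# (3) Forecast: new information, reduced confidence.
forecast_next_summer = {"value": seasonal["summer"] * 1.03, "confidence": 0.7}

# (4) Simulation of variable tolls (two summer months at a simulated 3.0 toll),
#     inheriting the forecast's confidence.
simulated_revenue = {"value": forecast_next_summer["value"] * 2 * 3.0,
                     "confidence": forecast_next_summer["confidence"]}
print(revenue, seasonal, forecast_next_summer, simulated_revenue, sep="\n")
```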
Those operations entail clear consequences for knowledge management. As long as computational distances don’t affect confidence levels, truth-preserving operations are neutral with regard to KM. Classifications are symbolic tools designed on purpose; as a consequence, all knowledge associated with a classification should remain under the responsibility of its designer. Challenges arise when confidence levels are affected, either directly or through obsolescence. And since decision-making is essentially about risk management, dealing with partial or unreliable information cannot be avoided. Hence the importance of managing knowledge along shearing layers, each with its own information life-cycle, confidence requirements, and decision-making rules.
From Knowledge Architecture to Architecture Capability
Knowledge architecture is the corporate central nervous system, and as such it plays a primary role in the support of operational and managerial processes. That point is partially addressed by Frameworks like Zachman whose matrix organizes Information System Architecture (ISA) along capabilities and design levels. Yet, as illustrated by the design levels, the focus remains on information technology without explicitly addressing the distinction between enterprise, systems, and platforms.
That distinction is pivotal because it governs the distinction between the corresponding processes, namely business processes, systems engineering, and services management. And once the distinction is properly established, knowledge architecture can be aligned with processes assessment.
Yet that will not be enough now that digital environments are invading enterprise systems, blurring the distinction between managed information assets and the continuous flows of big data.
That puts the focus on two structural flaws of enterprise architectures:
The confusion between data, information, and knowledge.
The intrinsic discrepancy between systems and knowledge architectures.
Both can be overcome by merging system and knowledge architectures along the Pagoda blueprint:
The alignment of platforms, systems functionalities, and enterprise organization respectively with data (environments), information (symbolic representations), and knowledge (business intelligence) would greatly enhance the traceability of transformations induced by the immersion of enterprises in digital environments.
Knowledge Representation & Profiled Ontologies
Faced with digital business environments, enterprises must sort relevant and accurate information out of continuous and massive inflows of data. As modeling methods cannot cope with the open range of contexts, concerns, semantics, and formats, looser schemes are needed; that’s precisely what ontologies are meant to do:
Thesaurus: ontologies covering terms and concepts.
Documents: ontologies covering documents with regard to topics.
Business: ontologies of relevant enterprise organization and business objects and activities.
Engineering: symbolic representation of organization and business objects and activities.
Ontologies: Purposes & Targets
Profiled ontologies can then be designed by combining that taxonomy of concerns with contexts, e.g:
Institutional: Regulatory authority, steady, changes subject to established procedures.
Professional: Agreed upon between parties, steady, changes subject to accords.
Corporate: Defined by enterprises, changes subject to internal decision-making.
Social: Defined by usage, volatile, continuous and informal changes.
Personal: Customary, defined by named individuals (e.g research paper).
Last but not least, external (regulatory, businesses, …) and internal (i.e enterprise architecture) ontologies could be integrated, for instance with the Zachman framework:
Ontologies, capabilities (Who,What,How, Where, When), and architectures (enterprise, systems, platforms).
Using profiled ontologies to manage enterprise architecture and corporate knowledge will help to align knowledge management with EA governance by setting apart ontologies defined externally (e.g regulations) from the ones set through decision-making, strategic (e.g platforms) or tactical (e.g partnerships).
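A minimal sketch of such profiled ontologies (Python, with illustrative profiles and policies): each profile combines a target from the taxonomy above with a context governing how changes are decided, which is what ties knowledge management to EA governance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OntologyProfile:
    target: str     # "thesaurus", "documents", "business", or "engineering"
    context: str    # "institutional", "professional", "corporate", "social", "personal"
    change_policy: str

PROFILES = [
    OntologyProfile("business", "institutional", "changes follow regulatory procedures"),
    OntologyProfile("business", "corporate", "changes follow internal decision-making"),
    OntologyProfile("engineering", "corporate", "changes follow EA governance"),
    OntologyProfile("documents", "social", "continuous and informal changes"),
]

def governance(target: str, context: str) -> str:
    """Return the change policy applying to a given ontology profile."""
    for p in PROFILES:
        if p.target == target and p.context == context:
            return p.change_policy
    return "no profile defined: escalate to architecture governance"

print(governance("business", "institutional"))
```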
Set between the all-inclusive onslaught of data on one side and pervasive smart bots on the other, information systems could lose their identity and purpose. And there is a good reason for that, namely the confusion between data, information, and knowledge.
As it happened aeons ago, ontologies have been explicitly thought up to deal with that issue.
Whereas UML was brought into existence by very wise men under very propitious skies, the initial enthusiasm and first successes have never been transformed into wider acceptance and customary usage; subsequent updates and extensions didn’t help and may even have triggered some anticlimax. More than fifteen years after its launch, the utilization of UML remains limited, both in breadth (projects developed) and depth (features effectively used). Moreover, the UML house is deeply divided and there isn’t much consensus among the few who use it comprehensively and consistently, principally to support domain specific languages (DSLs).
Certainly, there must have been a wrong turn somewhere, possibly at the UML2 crossing when the OMG committee lost sight of users’ modeling needs and took the road to meta-models. Considering UML’s shrinking footprint and dwindling relevancy, that road appears more and more like a dead-end; but it may still be possible to get back on track and retrieve the Us of UML: unified semantics for all and sundry users.
Where to Look
Whether on driving or back seats, respectively for model driven or agile methods, models are widely accepted as a necessary constituent of development processes. Nonetheless, and despite being the only official standard, UML’s standing appears to falter, to the point of already being seen as a cold case. As suggested by Ivar Jacobson (“The road ahead for UML“), one of its main drawbacks would be its lack of modularity with regard to users’ needs. If that flaw is to be fixed, the question is where to look: directly at language level, or at supporting mechanisms.
Given the broad consensus that surrounded the initial project, one should at first look for a sound and stable subset to be used as a backbone and fleshed out according to specific contexts, purposes, or users. As a matter of fact that is what stereotypes and profiles are meant to do, except that without a well-defined backbone of unambiguous constructs, the only possible outcomes are domain specific languages. So, one should first consider how the separation of concerns could be better supported by language constructs.
Separation of Concerns
Despite its roots in the Object Oriented paradigm, UML has demonstrated its adaptability to each and every method or domain. Unfortunately, being a Jack of all trades often means being a master of none, and the use of UML is clearly frustrated by its versatility; that translates either into shallow usage of ambiguous semantics, or into extensions targeting specific domains or technologies.
On the ground, three mechanisms can be used to make up for the lack of focus: stereotypes, views, and customization.
The UML stereotyping mechanism supports predefined constructs for problem (business objects and processes) or solution (system architecture and object design) spaces. Stereotypes can be grouped into profiles, e.g for specific business domains or technical architectures.
Views (or perspectives) organize access to models according to contents: logical, physical, conceptual, pragmatic, etc.
Tool customization organizes access to models according to users’ purposes and skills: analyst, architect, designer, developer, etc.
While those approaches have their benefits, they are set independently of language constructs, either as UML extensions (stereotypes and profiles), or defined from outside by development methodologies (views) or projects organization (customization). As a consequence, they have little or no effect on the simplicity or efficiency of UML; they may even add to confusion and complexity when overlapping stereotypes are introduced to support multiple taxonomies, e.g technical architectures and business domains.
That may point to a clear direction: given the potency of the stereotyping mechanism and its pivotal role in UML utilization, significant benefits could be achieved through a better integration into core language constructs, even if that entails some constraints or limitations. Two straight modifications should be considered:
Model layers: language constructs should be re-organized along architectural concerns for enterprise (business processes), system (functionalities), and platforms (components).
Stereotypes visibility: language constructs should support the distinction between local taxonomies and “unified” ones, the former set with limited scope and visibility, the latter meant to be applied across layers.
Given the growing intricacy, ubiquity, and diversity of systems, UML’s complexity and versatility should clearly be in demand; the problem is to harness those capabilities according to the needs and skills of the different kinds of users.
That’s arguably a critical flaw of UML, which lumps together essential with secondary constructs, as well as definite with ambivalent semantics. That brings weighty consequences, both for users and models:
Steep or even abrupt learning curve: confronted with a wall of mixed constructs, users have to master the whole upfront, whatever their needs and skills.
Blurred concerns: describing various specific contents with the same ambivalent constructs will either distort language semantics, or blur concerns specificities.
Corrupted transformation: whatever the modeling tools, the bad apples of inputs will usually corrupt the whole of outputs. In other words any advance in model driven development requires a sound backbone of unambiguous language constructs.
As noted above, language constructs can be regrouped along two perspectives, one directly associated with users architectural concerns, the other reflecting the scope and visibility of targeted artifacts. While there is no particular reason to match complexity levels with architectural concerns, mapping them to granularity has a clear rationale. Such a “born again” UML would distinguish between two levels of language constructs:
Those pertaining to objects and activities identified by architectures, whatever their nature: enterprise, systems or platforms.
Those used to describe internals of objects and activities independently of their aspects and behaviors at architecture level.
That re-configuration would bring modularity to the language, enabling a smooth learning curve. More importantly, a clear-cut separation of concerns will enable some kind of Just-In-Time model transformation: instead of cumulative noises (b), one will get separate transformations for models architectural backbone on one hand, contingent specificities on the other hand (a). And that could be a real game-changer for lean and fit models.
While that could be achieved by different means, a simple solution would be to use the stereotyping mechanism to describe supporting structures of enterprise, functional, and technical architectures.
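What such an integration could look like is sketched below (a toy Python registry, not actual UML machinery): each stereotype carries the architecture layer it pertains to and a visibility flag telling whether it is local or “unified” across layers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stereotype:
    name: str
    layer: str        # "enterprise", "system", or "platform"
    unified: bool     # True: usable across layers; False: local taxonomy

REGISTRY = [
    Stereotype("BusinessProcess", "enterprise", unified=True),
    Stereotype("PersistencyUnit", "system", unified=True),
    Stereotype("BankingProduct", "enterprise", unified=False),  # local to a domain
]

def applicable(stereotype: Stereotype, layer: str) -> bool:
    """A stereotype applies in its own layer, or anywhere if unified."""
    return stereotype.unified or stereotype.layer == layer

print([s.name for s in REGISTRY if applicable(s, "system")])
```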
Transformation vs Portability
Model transformation is about changing contents within the same environment, portability is about moving the same contents across different environments; and despite apparent similarities, they deal with different concerns, set by users for the former, by tools vendors for the latter.
Transformation is normally performed under a single corporate roof according to agreed semantics; as a corollary, it is meant to cover the full contents of models. That’s not the case for portability, whose primary objective is the exchange of consolidated contents between heterogeneous environments; while sources and targets may have to share the whole of their models, a sound policy should make room for selective portability of specific or confidential contents.
The Meta-Object Facility (MOF) is the solution of choice for portability. As a meta-language it is used to describe language constructs at source and target environments; mapping rules can then be defined and bridges built between environments. As it is, those bridges usually scale very poorly due to the exponential complexity of rules having to cover each and every model idiosyncrasy; and that’s unfortunate for portability which, instead of focused targets, has to deal with overweight models cluttered with useless contents (b).
That situation would be greatly improved (a) if the wheat of consolidated constituents could be separated from the chaff of ambiguous or irrelevant contents. On a broader perspective that would open the way to leaner and fitter models.
One step back may put UML back on track
There is something of a consensus among the software engineering community regarding (1) the benefits of models and (2) the failures of UML. As should be expected, that consensus translates into fragmented modeling practices and, more generally, software engineering methodologies. Obviously there isn’t much of a future for UML along that path, but the case is still open and the trend can be reversed by putting users’ needs back in the UML driving seat.
All too often when the agile project model is discussed, the debate turns into a religious war with waterfall as the villain. But asking which project model will bring salvation will only bring attrition, because the question is not which is the best but when it is the best. It’s like asking if a hammer is a better tool than a sickle!
Instead, one should first try to understand how they stand apart, and deduce from that what they are best for; the comparison between the left and right sides of the brain may provide a good starting point.
B2C: Balancing Brain Capacities
If it is (still) impossible to know what people think, it is possible to know where their thinking is rooted in brains, and the answer is unequivocal:
The left side of the brain is analytical; faced with a problem, it looks for parts and processes them in sequence.
The right side of the brain is better at synthesis as it looks at the whole and processes all relevant information simultaneously.
Obviously casts will differ between individuals depending on inborn qualities and developed preferences; moreover, each individual will balance his brain sides according to the task at hand. The same should apply when projects must decide between iterative and procedural approaches.
What a Hand can Hold
When project management is first considered, the Whole vs Parts alternative should be the discriminating factor: since human brains cannot process an unlimited number of elements simultaneously, work units to be handled by teams must be clearly circumscribed, with a number of independent functional units not exceeding a dozen.
That could be a pitfall for agile developments if iterations and increments were to be associated with an exponential growth of complexity. Yet, partitioning a large project into sub-tasks doesn’t necessarily call for a waterfall schema if the sub-tasks can be performed independently.
What the Hand is Told
Sequential processing can be dumb because the intelligence can already be etched in the sequences. That’s not the case if relevant information is to be picked out and processed as a whole; that can only be done with a clear purpose guiding the hand.
Replacing an administrative process by a collaborative one entails some kind of shared ownership, with teams granted full responsibility for decisions, schedules, and quality. Otherwise the different concerns, purposes, or authorities, possibly but not necessarily at odds, should be set apart as sub-tasks, and milestones introduced for their consolidation.
What is Handed Over
Development projects may handle three kinds of artifacts: texts, models, and code, the first and last being mandatory, the second optional. Since texts and code are best processed sequentially they are handed over to the brain’s left side; conversely, models are meant to combine different perspectives, e.g structures and behaviors, which puts them on the right side of the brain.
Curiously, that seems to put agile in some kind of conundrum: despite models being the symbolic representation best suited to holistic processing, agile approaches are partial to code, even if models are not explicitly ruled out. As a matter of fact, agile tenets are more partial to products than to code, and what is handed over and tested against requirements is not meant to be a program but a running application.
Hand in Hand
Just like the two parts of the brain bring their best to shared concerns and purpose, agile and phased schemes should be enlisted according to their respective merits and shortcomings:
Agile is clearly a better option when shared ownership can be secured and milestones and models are not needed.
Phased solutions (“waterfall” is a red herring) are necessary when organizational, functional, or technical dependencies between projects mean that some consolidation cannot be avoided between development processes.
Assuming agile methods are used whenever possible, models should provide the glue when external dependencies are to be taken into account (a sketch follows the list below):
Organizational dependencies are managed across model layers: business requirements govern system functionalities which govern platforms implementations.
Functional dependencies are managed across architecture tiers: transient non shared components (aka boundaries) are governed by transient shared components (aka controls) which are governed by persistent shared components (aka entities).
Development dependencies should not cross projects limits as they should be managed at domain level using inheritance or delegation.
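The functional rule can be sketched with the classic boundary/control/entity tiers (Python, illustrative classes): transient non-shared boundaries depend on transient shared controls, which depend on persistent shared entities, and never the other way round.

```python
# Entities: persistent, shared; know nothing of controls or boundaries.
class CustomerEntity:
    def __init__(self, name: str) -> None:
        self.name = name
        self.orders: list[float] = []

# Controls: transient, shared; depend only on entities.
class OrderControl:
    def __init__(self, customer: CustomerEntity) -> None:
        self.customer = customer
    def place_order(self, amount: float) -> None:
        self.customer.orders.append(amount)

# Boundaries: transient, not shared; depend only on controls.
class OrderScreen:
    def __init__(self, control: OrderControl) -> None:
        self.control = control
    def submit(self, amount: float) -> None:
        self.control.place_order(amount)

screen = OrderScreen(OrderControl(CustomerEntity("Acme")))
screen.submit(250.0)
```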
Model driven software engineering (aka MDA, aka MBSE, aka MDE) has had a great future for quite some time, yet there isn’t much consensus about what that could be and, in particular, about what kind of models should be in the driving seat.
Pending some agreement about model contents (e.g specific?) and capabilities (e.g executable?), the driving of software engineering processes will probably remain more a matter of practices than principles.
Models are shadows of reality, with their form and contents set by contexts and concerns. They can be characterized by their purpose and capabilities.
Regarding purpose, models fall in two groups: descriptive models deal with problems at hand, prescriptive models with solutions.
Descriptive models are partial and biased representations of actual contexts. Partial because they only deal with relevant objects, activities and features; biased because the selection is made on purpose.
Prescriptive models are complete descriptions of artifacts.
Regarding capabilities the distinction is between intensional and extensional languages:
Extensional (aka denotative) languages deal with sets of identified instances of objects and activities. As they condone partial or ambiguous statements, they are best used for descriptive models.
Intensional (aka connotative) languages deal with the semantics and constraints of symbolic representations. Due to their formal capabilities they are best used for prescriptive models.
Following that reasoning, system models can be characterized according to architecture contexts: business processes (enterprise), functionalities (systems), and platform implementations (technologies):
Business models are descriptive and built with extensional languages (business is often said to bloom on discrepancies). They are necessarily partial as they target specific contexts and concerns.
Functional (aka analysis) models are prescriptive and built with intensional languages as they must specify the semantics and constraints of symbolic representations. Yet they are not necessarily complete since they don’t have to cover every details of business processes or implementations (cf traceability).
Implementation models (aka design) are prescriptive with no use for extensional capabilities, since the relevant physical objects, i.e extensions, are system components directly derived from system specifications. However, they must support complete and formal descriptions of component features.
Models don’t Talk Alone
Models are built with logographic languages that should not be confused with phonetic ones: whereas the latter convey information sequentially, the former build semantics from different sources, which enables models to be read from different perspectives. Contrary to programs, whose semantics are bound to a fixed sequential execution, models don’t talk alone, but must be chatted to. Even when their readings are sequential, the sequences are governed by readers, not by models.
That point is pivotal if model transformation, arguably a pillar of model driven development, is to be supported along different perspectives and according to different concerns.
Besides, it must be noted that while models can be fully translated into (and reversed from) sequential representations (e.g with XMI), transcripts are just projections and should not be confused with models as such.
Models don’t Walk Alone
Like talks, walks are sequential as they advance step by step. Hence, while some models can be walked (aka executed), such walks are only instances of behaviors supported by models.
That should be especially clear for system models which describe architectures combining agents and devices as well as software components, contrary to programs which are limited to software components structure and sequential (or parallel) execution.
Just like XMI transcripts should not be confused with original models, “executable” models should not be confused with fully fledged ones because they simulate the behaviors of agents and devices as if they were software components. While that may be useful for models targeting software components within a given architecture, ignoring the specificities of software, agents, and devices would be pointless when the objective is to design a system architecture.
Defining models as abstractions may be correct but not very helpful when deciding what kind of abstraction should drive development processes. First, the question is to define how concerns and purposes should be used to define abstractions, in other words to set apart significant from non-significant information. Then, in order to avoid flights for always higher abstractions without burying models into ground details, some principles are needed regarding specialization and generalization.
When systems are considered, abstraction levels are set by enterprise organization, systems functionalities, and platform technologies, with concerns expressed by business, functional, or technical requirements. Given a hierarchy of concerns, models must be set at the proper level of abstraction: below that level part of information will be redundant or irrelevant; above, useful features and classifications may be overlooked as some information will be either wanting or will not discriminate between variants.
Such levels can be identified by selectively applying specialization and generalization to constrained hierarchies (a sketch follows the list below):
Inheritance should not cross model layers: hierarchies of business, functional and technical artifacts must be kept separate.
Structures and behaviors pertain to different concerns: abstractions of objects and aspects should be managed independently.
Specialization should be applied when subsets of identified objects or behaviors are associated with specific features. Generalization should be introduced when models must be consolidated.
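A minimal sketch of those constraints (Python, toy classes): business and functional hierarchies are kept separate, objects and aspects are abstracted independently, and specialization is only introduced for subsets with specific features.

```python
# Business layer hierarchy: no inheritance into the functional layer below.
class BusinessAccount:
    pass

class SavingsAccount(BusinessAccount):   # specialization: subset with specific features
    interest_rate = 0.03

# Functional layer hierarchy, kept separate from the business one.
class PersistentRecord:
    pass

class AccountRecord(PersistentRecord):
    pass

# Aspects (behaviors) abstracted independently of object structures.
class InterestBearing:
    def accrue(self, balance: float, rate: float) -> float:
        return balance * (1 + rate)

print(InterestBearing().accrue(1_000.0, SavingsAccount.interest_rate))
```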
Such an approach brings significant benefits for reuse, arguably one of the main objectives of abstraction. And that appears clearly when developments are governed by models and architecture components and mechanisms are to be shared across model layers.
Driving Models and Roadmaps
Systems engineering has to meet three kinds of requirements: business needs, system functionalities, and components implementation. In a perfect world there would be one stakeholder, one architecture, and one time-scale. Unfortunately, projects may involve different stakeholders, target different architectures, and be set along different time-frames. In that case milestones and roadmaps are to be introduced in order to bring all expectations and commitments under a common roof, with models providing the glue between them. That may be achieved with models defined along MDA layers (a sketch follows the list below):
Computation Independent Models (CIMs) describe business objects and processes independently of supporting systems.
Platform Independent Models (PIMs) describe how business processes are supported by systems seen as functional black boxes, i.e disregarding the constraints associated to candidate technologies.
Platform Specific Models (PSMs) describe system components as implemented by specific technologies.
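By way of illustration, a minimal Python sketch (the Customer example and the SQLite binding are hypothetical) of how a single business concept could surface at each layer:

from dataclasses import dataclass
from typing import Protocol
import sqlite3

# Illustrative sketch; names and technology choices are hypothetical.
# CIM: business object described independently of any supporting system.
@dataclass
class Customer:
    name: str
    credit_limit: float

# PIM: the system seen as a functional black box, no technology assumed.
class CustomerStore(Protocol):
    def save(self, customer: Customer) -> None: ...

# PSM: the same functionality bound to a specific platform (here SQLite).
class SqliteCustomerStore:
    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS customer (name TEXT, credit_limit REAL)")

    def save(self, customer: Customer) -> None:
        self.conn.execute("INSERT INTO customer VALUES (?, ?)",
                          (customer.name, customer.credit_limit))
        self.conn.commit()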
Interestingly, both extremities of development processes are textual descriptions, informal at the beginning, formal at the end, with models providing bridges in between. As noted above, those bridges are not always necessary: texts can be directly translated into instructions (as illustrated by voice command devices), or project teams with shared ownership can develop programs according to users’ requirements (as promoted by Agile methods). Yet, the question should always be asked, and when textual requirements cannot be directly developed into programs the first task should be to draw the shortest modeling path.
Roadmaps and Meta-models
Model driven tools may appear under different guises, yet most rely on meta-models and the Meta Object Facility (MOF). Given that meta-models are models of models, they are supposed to focus on a subset of relevant features selected on purpose, which, for driving models, would be some kind of road signs governing model transformations. What could those signs be? Two approaches are to be considered:
Language translation: as presented by the report of the Dagstuhl Seminar, meta-models can be designed according to their generic transformation capabilities and used to single out language constructs in order to transfer model contents into another language.
Separation of concerns: as development advances and models take into account different concerns, meta-models can be used to monitor and control the selective processing of the corresponding contents. That could be achieved if transformations were governed by traffic signs singling out relevant features and waving aside the leftovers (see the sketch after this list).
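A minimal Python sketch of the second option, assuming (hypothetically) that model elements carry a concern tag acting as a traffic sign; the transformation carries forward only the elements relevant to the targeted concern and waves aside the leftovers:

from dataclasses import dataclass

# Illustrative sketch; element names and concern tags are hypothetical.
@dataclass
class ModelElement:
    name: str
    concern: str        # "business", "functional", or "technical": the traffic sign

def transform(model: list[ModelElement], target_concern: str) -> list[ModelElement]:
    # Selectively process the contents relevant to the target concern.
    return [e for e in model if e.concern == target_concern]

model = [
    ModelElement("Customer", "business"),
    ModelElement("CustomerAccount", "functional"),
    ModelElement("CustomerTable", "technical"),
]
functional_view = transform(model, "functional")    # keeps only CustomerAccount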
Each option points to a different perspective, the former targeting tool providers, the latter aiming at modellers. Whereas MDA layers (for business, functionalities, and technology) clearly point to models built with the Unified Modeling Language according to organizational, functional, and technical concerns, most current implementations follow the language option; while those tools may be (theoretically) open at the technical level, they usually rely on domain specific languages. By narrowing functional scopes and relying on automated translation to bridge the gaps, solutions based upon domain specific languages can only provide local solutions to fragmented problems. That road could be a bumpy one for model driven engineering.
Reusing artifacts means using them in contexts that are different from their native ones. That may come by design, when specifications anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on.
Planned or opportunistic, reuse brings benefits in terms of costs, quality, and continuity:
Cost benefits are most easily achieved for component engineering but may also be obtained upstream with model reuse and patterns.
Quality benefits are first and foremost rooted at model level, especially when components implementation is supported by automated tools.
Continuity benefits are to be found both along the business (semantics and business rules) and engineering (functional architecture and platform implementations) perspectives.
Reuse policies may also bring positive externalities by inducing a comprehensive approach to software design.
Yet, those policies will usually entail costs and may also bear negative externalities:
Artifacts designed for reuse are usually more costly to develop, even if part of the additional cost should be ascribed to quality management.
Excessive enforcement policies may significantly hamper projects’ ability to meet business needs in time.
Managing reusable assets usually induces overheads.
In order to assess those policies, economics of reuse must be set across business, engineering or architecture perspectives:
Business perspective: how to factor out and reuse artifacts associated with the knowledge of business domains when system functionalities or platforms are modified.
Engineering perspective: how to reuse development artifacts when business contents or system platforms are modified.
Architecture perspective: how to use system components independently of changes affecting business contents or development artifacts.
That can be achieved by managing development assets along model driven architecture: CIMs for business and enterprise architecture, PIMs for systems functional architecture, and PSMs for systems technical architecture.
Contexts & Concerns
Whatever their inception, reused artifacts are meant to be used independently of their native context and concerns: opportunistic reuse will map a specific purpose to another one, planned reuse will map a shared concern to a specific purpose. As a corollary, reuse policies must be supported by some traceability mechanism linking concerns and purposes across contexts and architectures.
From the enterprise perspective, the problem is to reuse the knowledge of business domains and processes. For that purpose different mechanisms can be considered:
The simplest solution is to reuse generic components, with the business knowledge directly transferred through parameters (see the sketch after this list).
Similarly, business rules can be separately edited and managed in business contexts before being executed by system components.
One step further, business semantics and rules can be fenced off within semantic domains and applied to different objects and applications.
Finally, models of business objects and processes can be capitalized and managed as reusable assets.
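As an illustration of the first two mechanisms, a minimal Python sketch (the rules and the checking component are hypothetical): the business knowledge is edited and managed as rules on its own side, then transferred through parameters to a generic component that executes them without knowing anything about their content.

from typing import Callable

# Illustrative sketch; rules and names are hypothetical.
# Business rules, edited and managed in a business context...
credit_rules: list[Callable[[dict], bool]] = [
    lambda order: order["amount"] <= order["credit_limit"],
    lambda order: order["customer_status"] != "blocked",
]

# ...and executed by a generic system component that carries no business knowledge.
class RuleChecker:
    def __init__(self, rules: list[Callable[[dict], bool]]):
        self.rules = rules

    def accept(self, facts: dict) -> bool:
        return all(rule(facts) for rule in self.rules)

checker = RuleChecker(credit_rules)
checker.accept({"amount": 500, "credit_limit": 1000, "customer_status": "active"})   # True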
Once business knowledge is duly capitalized as functional assets, those assets can be reused along the engineering perspective:
System functionalities: functional patterns are (re)used to map functional requirements to functional architecture, and services are (re)used to support business processes.
Software implementations: component-based development and information hiding.
Along the architecture perspective, information hiding is generalized to systems and reuse is masked by the definition of services, as illustrated by the sketch below.
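A minimal Python sketch (service and adapter names are hypothetical) of how information hiding lets reuse be masked by a service definition: consumers depend on the published interface only, so the implementation behind it can reuse an existing component or be replaced without their knowledge.

from abc import ABC, abstractmethod

# Illustrative sketch; all names are hypothetical.
class PaymentService(ABC):                      # the published service definition
    @abstractmethod
    def pay(self, account: str, amount: float) -> bool: ...

class LegacyPaymentAdapter(PaymentService):     # reused component, hidden behind the interface
    def pay(self, account: str, amount: float) -> bool:
        # delegate to an existing implementation; its internals never leak to consumers
        return amount > 0

def settle_invoice(service: PaymentService, account: str, amount: float) -> bool:
    return service.pay(account, amount)         # consumers see the service, not the reuse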
It must be noted that, contrary to misleading similarities, refactoring is the opposite of reuse: instead of building from well understood and safe artifacts, it tries to extract some reusable chunks from opaque and unsafe components.
Enterprise and business knowledge may affect the full scope of system functionalities: boundaries (e.g. users’ authorizations), controls (e.g. accounting rules), and persistency (e.g. consistency constraints). Whereas there isn’t much to argue about the benefits of reusing enterprise and business knowledge, costs may significantly diverge depending on the way the corresponding assets are managed:
Domain-specific knowledge is rooted at requirements level. That’s typically achieved when use cases are introduced to describe how systems are meant to support business processes. With different use cases targeting different aspects of the same business objects and processes, overlaps must be identified and factored out in order to be reused across processes.
Business knowledge may also be global, i.e. shared at enterprise architecture level, defining the objectives, assets, and organization associated with the continuity of corporate identity and business capabilities within a regulatory and market environment.
In any case, the challenge is to map business knowledge to system models, more precisely to embed reused descriptions of business objects and processes into the corresponding development artifacts. At architecture level the mapping should target the identities of objects or processes; at domain level the focus will be on aspects and views.
As epitomized by service oriented architectures, business architectures can be mapped to system ones through delegation, either directly (business processes calling on services), or indirectly (collaboration between services). That will establish a clear distinction between shared (aka global) and domain specific knowledge, and consequently between respective economics of reuse.
Given that shared knowledge must be reused across domains and applications, there can be no argument about benefits. That will be achieved by a messaging model built on a need-to-know basis. And since such a model is an intrinsic feature of the functional architecture, it incurs no additional overhead.
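A minimal Python sketch (fields are hypothetical) of such a need-to-know messaging model: the message exchanged between services carries only the shared attributes its consumers need, not the full domain object, so shared knowledge is reused without exposing domain specifics.

from dataclasses import dataclass

# Illustrative sketch; attributes are hypothetical.
# Domain object: full business knowledge, kept inside its domain.
@dataclass
class Customer:
    customer_id: str
    name: str
    credit_limit: float
    internal_score: int           # domain-specific, never shared

# Message: the shared, need-to-know subset exchanged between services.
@dataclass(frozen=True)
class CustomerReference:
    customer_id: str
    name: str

def to_message(customer: Customer) -> CustomerReference:
    return CustomerReference(customer.customer_id, customer.name)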
Specific knowledge, for its part, is managed at domain level and therefore masked behind service interfaces. Whatever reuse occurs there remains local and an intrinsic part of domain design.
Things are not so clear when business knowledge is not managed by services but distributed across domains, mixing specific and global knowledge. Managing reusable assets would be easy were the distinction between business and functional requirements to coincide with the one between shared and specific knowledge; unfortunately that’s seldom the case, and requirements, functional or business, will have to be sorted out at architecture or design level.
Whereas Service Oriented Architectures (SOA) put functionalities in the driving seat, Enterprise Application Integration (EAI) gives the lead to applications for which it provides adapters. As maintenance of integration adapters is a very poor substitute for knowledge management, reuse is mostly limited to legacy applications.
At design level, knowledge is woven into canonical data models (entities), functional architectures (controls), and user interface frameworks (boundaries).
On the one hand, tracing reuse to requirements may be problematic, as requirements are by nature concrete and unstructured, hence not the best support for generalization or the factoring out of shared features. Assuming business analysts can nonetheless separate reusable wheat from specific chaff, knowledge management at this level will require a dedicated framework supporting comprehensive and differentiated traceability. The additional overheads will have to be taken into account and compared to potential benefits.
On the other hand, canonical data models and functional architectures are meant to provide unified views of shared objects and semantic domains. Yet canonical models are by nature unwieldy, as they set in stone the structures, features, and connections of business objects without clear mechanisms to combine shared and specific knowledge. As a corollary, their use may reduce flexibility, and their management usually induces significant overheads.
With enterprise and business knowledge capitalized as development assets, the engineering case for reuse may appear indisputable, but business cases are often much more controversial due to large overheads and fleeting returns. Taking cues from Barry Boehm (“Managing Software Productivity and Reuse”), here are some of the main pitfalls of artifact reuse:
Repository delusion: knowledge being by nature contextual, its reuse is driven by circumstances and purposes; as a consequence the availability of large repositories of development assets will probably be ignored without clear pointers rooted in contexts and concerns.
Confusion between components (or structures) and functionalities (or interfaces): under the influence of the object oriented paradigm, the distinction between objects and aspects is all too often forgotten. That’s unfortunate as this difference is congruent with the one between business objects on one hand, business operations on the other hand.
Over-generalization: reuse is usually achieved by factoring out useful aspects or factoring off useless ones. In both cases the temptation is to repeat the operation until nothing can be added to the scope. Such a “flight for abstraction” will inevitably overshoot the proper level of reuse and beget models void of any anchor to business relevancy.
Scalability: while reuse is about separation of concerns and complexity management, those two criteria don’t have to pull in the same direction. When they don’t, variants will be dispersed across artifacts and their processing will suffer a combinatorial explosion if the system has to be scaled up.
Obsolescence: the shelf lives of development assets may be set by business relevancy, technical relevancy, or both. Whether those spans coincide or are managed independently, they should be explicitly taken into account before any reuse.
Those obstacles can be managed provided that models:
Provide some built-in traceability mechanisms between knowledge and artifacts (a sketch follows).
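A minimal Python sketch (identifiers are hypothetical) of such a built-in traceability mechanism: each development artifact records the business knowledge it reuses, so the links between concerns and artifacts can be queried in both directions, for instance to assess the impact of a change in a business rule.

from collections import defaultdict

# Illustrative sketch; identifiers are hypothetical.
class TraceabilityMap:
    # Bidirectional links between business knowledge items and development artifacts.
    def __init__(self):
        self.knowledge_to_artifacts = defaultdict(set)
        self.artifact_to_knowledge = defaultdict(set)

    def link(self, knowledge_id: str, artifact_id: str) -> None:
        self.knowledge_to_artifacts[knowledge_id].add(artifact_id)
        self.artifact_to_knowledge[artifact_id].add(knowledge_id)

    def impacted_artifacts(self, knowledge_id: str) -> set[str]:
        return self.knowledge_to_artifacts[knowledge_id]

trace = TraceabilityMap()
trace.link("rule:credit-check", "service:OrderValidation")
trace.impacted_artifacts("rule:credit-check")    # {"service:OrderValidation"}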
Sustainable system development is the ability to meet present business requirements while enhancing the system’s capability to support future ones. Clearly reuse is not the only factor of sustainability, with architectures, returns, and risk management being pivotal. But the economics of reuse encompass most of the other factors.
Architectures are clearly first to be considered, as epitomized by MDA:
Reuse of development assets rooted in enterprise architecture is not an option: system functionalities are meant to support business processes as they are (a).
At the other end of the development process, reuse of software designs and components across technical architectures should bring benefits in quality and costs (c).
In between, reuse of system functionalities is necessary to guarantee the robustness and continuity of functional architectures; it should also leverage the benefits of the reuse of enterprise and development assets (b).
Regarding returns, reuse through generic components, rules engines, or semantic domains can be directly supported by development tools, bypassing explicit models of functional architectures. That makes their costs/benefits analysis both simpler and well circumscribed. That is not the case for system functionalities, which stand at the hub of the different perspectives. As a consequence, costs/benefits should be analyzed as a whole:
Regarding business assets, a clear distinction must be maintained between specific and shared knowledge, reuse being considered for the latter only.
Regarding the reuse of business assets as functional ones, services clearly offer the best returns. Otherwise, costs/benefits are to be assessed, from the reuse of vocabularies and semantic domains (straightforward, limited overheads) to canonical models and enterprise application integration (contingent, significant overheads).
Economics of reuse will ultimately be set by the traceability mechanisms linking enterprise and business knowledge on one hand, component designs on the other hand. Even for services (c), if to a lesser degree, the business case for reuse will be decided by leveraged benefits and non-cumulative costs. Hence the importance of maintaining the distinction between identified structures and associated aspects, from business (a) and functional (b) requirements to component interfaces (d) and structures (e).
Finally, reuse may also play a significant role in risk management, especially when risks are managed according to their source:
Changes in business contexts usually occur along two frequency waves: short, for market opportunities, and long, for an organization’s continuity. The associated risks could be better managed if the corresponding knowledge were managed accordingly.
System architectures are meant to evolve in sync with organizational continuity; were technological environments or corporate structures subject to unexpected changes, reusable functional assets would be of great help.
Given that enterprise IT can no longer be self-contained and operate in isolation, reusable designs may provide buffers to technological risks and help exploiting unexpected business opportunities.