Modernization & The Archaeology of Software

“The past is never dead. It’s not even past.”
– William Faulkner

Objective

Retrieving legacy code has much in common with archaeology, as both try to retrieve undocumented artifacts and to understand their initial context and purpose. The fact that legacy code is still alive and kicking may help to chart its structures and behaviors, but it may also muddle the rationale of the initial designs.

Legacy Artifact: Was the bowl intended for turkeys? At Thanksgiving? Why the worm?

Hence the importance of traceability, and the benefits of a knowledge-based approach to modernization organized along architecture layers (enterprise, systems, platforms) and processes (business, engineering, supporting services).

Model Driven Modernization

Assuming that legacy artifacts under consideration are still operational (otherwise re-engineering would be pointless), modernization will have to:

  • Pinpoint the deployed components under consideration (a).
  • Identify the application context of their execution (b).
  • Chart their footprint in business processes (c).
  • Define the operational objectives of their modernization (d).
  • Sketch the conditions of their (re)engineering (e) and the possible integration in the existing functional architecture (f).
  • Plan the re-engineering project (g).
Modernization Road Map

Those objectives will usually not be achieved in a big bang, wholly and instantly, but progressively, by combining increments from all perspectives. Since the different outcomes will have to be managed across organizational units and along multiple engineering processes, modernization would clearly benefit from a model-based approach, as illustrated by the MDA modeling layers:

  • Platform specific models (PSMs) should be used for collecting legacy artifacts and mapping them to their re-engineered counterparts.
  • Since platform independent models (PIMs) are meant to describe system functionalities independently of implementations, they should be used to consolidate the functionalities of legacy and re-engineered artifacts.
  • Since computation independent models (CIMs) are meant to describe business processes independently of supporting systems, they should be used to reinstate, document, and validate re-engineered artifacts within their business context.
Model Driven Modernization
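As a minimal sketch (assuming hypothetical artifact names such as ACCT010.cbl), legacy artifacts and their re-engineered counterparts could be indexed across those layers so that traceability runs from deployed code up to business processes:

```python
from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    PSM = "platform specific"        # legacy artifacts and re-engineered counterparts
    PIM = "platform independent"     # consolidated functionalities
    CIM = "computation independent"  # business context

@dataclass
class Artifact:
    name: str
    layer: Layer
    traces_to: list = field(default_factory=list)  # artifacts one layer down

# A legacy program and its re-engineered counterpart (PSMs) are consolidated
# under one functionality (PIM), itself reinstated in a business process (CIM).
legacy = Artifact("ACCT010.cbl", Layer.PSM)
rebuilt = Artifact("AccountService.java", Layer.PSM)
functionality = Artifact("Account management", Layer.PIM, [legacy, rebuilt])
process = Artifact("Order-to-cash", Layer.CIM, [functionality])
```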

Corresponding phases can be expressed using the archaeology metaphor: field survey and collection (PSMs), analysis (PSMs/PIMs), and reconstruction (CIMs/PIMs).

Field Survey

The objective of a field survey is to circumscribe the footprint of the modernization and collect artifacts under consideration:

  • Given targeted business objects or activities, the first step is to collect information about locations, distribution and execution dependencies.
  • Sites can then be searched and executable files translated into source ones whose structure and dependencies can be documented.
  • The role of legacy software can then be defined with regard to the application landscape.
Field Survey and Collection

It must be noted that field survey and collection deal with the identification and restoration of legacy objects without analyzing their contents.

Analysis

The aim of analysis is to characterize legacy components, first with regard to their architectural features, then with regard to functionalities. Basic architectural features take into account components’ sharing and life-cycle.

The analysis of functionalities can be achieved locally or at architecture level:

  • Local analysis (a) directly maps re-factored applications to specific business requirements, bypassing the functional architecture. That’s the case when targeted applications can be isolated, e.g by wrapping legacy code.
  • Global analysis (b) consolidates newly supported applications with existing ones within the functional architecture, possibly with new functionalities.
Analysis (with regard to presentation, control, persistency, and services)

It must be noted that the analysis of legacy components, even when carried out at functional architecture level, takes business processes as they are.

Reconstruction

The aim of reconstruction is to set legacy refactoring within the context of enterprise architecture. That should be done from operational and business perspectives:

  • As the primary rationale of modernization is to deal with operational problems or bottlenecks, its benefits should be fully capitalized at enterprise level.
  • Re-factored applications usually make room for improvements in users’ experience; that may bring about further changes in organization and business processes.
Reconstruction

Hence, modernization is not complete until potential benefits of re-factored applications are considered, for business processes as well as for functional architecture.

From Workshops to Workflow

As noted above, modernization can seldom be achieved in a big bang and should be planned as a model-based engineering process. Taking a leaf from the MDA book, such a process would be organized across four workshops:

  • Technical architecture (deployment models): that’s where legacy components are collected, sorted, and documented.
  • Software architecture (platform specific models): where legacy components are put in local context.
  • Functional architecture (platform independent models): where legacy components are put in shared context.
  • Enterprise architecture (computation independent models): where legacy components are put into organizational context.
Modernization & MBSE Workshops

Those workshops would be used to manage the outcomes of the modernization workflow:

  1. Collect and organize legacy code; translate into source files.
  2. Document legacy components.
  3. Build PSMs according to basic architectural functional patterns.
  4. Map to PIMs of system functional architecture.
  5. Consolidate enterprise architecture.
Modernization Workflow
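The sequencing of those outcomes can be made concrete with a toy pipeline; the functions below are hypothetical placeholders, one per workflow step:

```python
# Placeholder steps for the five workflow outcomes; each consumes the
# previous step's result (the bodies are stand-ins, not actual tooling).
def collect_and_translate(binaries):          # 1. legacy code -> source files
    return [f"{b}.src" for b in binaries]

def document(sources):                        # 2. document legacy components
    return {s: {"dependencies": []} for s in sources}

def build_psms(components):                   # 3. architecture functional patterns
    return {"psm": sorted(components)}

def map_to_pims(psms):                        # 4. system functional architecture
    return {"pim": psms["psm"]}

def consolidate_ea(pims):                     # 5. enterprise architecture
    return {"cim": pims["pim"]}

outcome = consolidate_ea(map_to_pims(build_psms(document(
    collect_and_translate(["ACCT010", "BILL020"])))))
print(outcome)  # {'cim': ['ACCT010.src', 'BILL020.src']}
```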

With the relevant workflows defined in terms of model-based systems engineering, modernization can be integrated with enterprise architecture.

Enterprise Governance & Knowledge

Knowledgeable Processes

While turf wars may play a part, the debate about Enterprise and Systems governance is rooted in a more serious argument, namely, how the divide between enterprise and systems architectures may affect decision-making.

Informed Decision-making (Eleanor Antin)

The answer to that question can be boiled down to four guidelines respectively for capabilities, functionalities, visibility, and uncertainty.

Architecture Capabilities

From an architecture perspective, enterprises are made of human agents, devices, and symbolic (aka information) systems. From a business perspective, processes combine three kinds of tasks:

  • Authority: deciding how to perform processes and make commitments in the name of the enterprise. That can only be done by human agents, individually or collectively.
  • Execution: processing physical or symbolic flows between the enterprise and its context. Any of those can be done by human agents, individually or collectively, or by devices and software systems, subject to compatibility qualifications.
  • Control: recording and checking the actual effects of processes execution. Any of those can be done by human agents, individually or collectively, some by software systems subject to qualifications, and none by devices (see the sketch below).
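Those constraints lend themselves to a simple capability check; the sketch below is illustrative, with the category names assumed for the purpose:

```python
# Illustrative capability check: which kind of constituent may perform
# which kind of task (categories taken from the list above).
PERFORMERS = {
    "authority": {"human"},                        # commitments need human agents
    "execution": {"human", "device", "software"},  # subject to qualifications
    "control":   {"human", "software"},            # devices cannot check outcomes
}

def may_perform(task: str, constituent: str) -> bool:
    return constituent in PERFORMERS[task]

assert may_perform("authority", "human")
assert not may_perform("control", "device")
```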

Hence, and whatever the solutions, the divide between enterprise and systems will have to be aligned with those constraints:

  • Platforms and technology affect operational concerns, i.e physical accesses to systems and the where and when of processes execution.
  • Enterprise organization determines the way business is conducted: who is authorized to do what (business objects), and how (business logic).
  • System functionalities set the part played by systems in support of business processes.
Enterprise Architecture Capabilities

That gives the first guideline of systems governance:

Guideline #1 (capabilities): Objectives and roles must be set at enterprise level, technical constraints about deployment and access must be defined at platform level, and the functional architecture must be designed so as to get the most out of the former subject to the latter’s constraints.

Informed Decisions: The Will to Know

At its core, enterprise governance is about decision-making, and on that basis the purpose of systems is to feed processes with the relevant information so that agents can put it to use as knowledge.

Those flows can be neatly described by crossing the origin of data (enterprise, systems, platforms) with the processes using the information (business, software engineering, services management):

  • Information processing begins with data, which is no more than registered facts: texts, numbers, sounds, visuals, etc. Those facts are collected by systems through the execution of business, engineering, and servicing processes; they reflect the state of business contexts, enterprise, and platforms.
  • Data becomes information when comprehensively and consistently anchored to identified constituents (objects, activities, events,…) of contexts, organization, and resources.
  • Information becomes knowledge when put to use by agents with regard to their purpose: business, engineering, services.
Information processing: from data to knowledge and back
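The progression can be sketched with illustrative classes (not a reference model): information as data anchored to an identified constituent, knowledge as information put to use for a purpose:

```python
from dataclasses import dataclass

# Illustrative classes for the data/information/knowledge distinction above.
@dataclass
class Data:                     # registered facts, not yet anchored
    payload: str

@dataclass
class Information(Data):        # data anchored to an identified constituent
    anchor: str                 # object, activity, or event it describes

@dataclass
class Knowledge(Information):   # information put to use
    purpose: str                # business, engineering, or services

fact = Data("37.2")
info = Information("37.2", anchor="sensor:line-3/temperature")
used = Knowledge("37.2", anchor="sensor:line-3/temperature", purpose="services")
```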

Along that perspective, capabilities can be further refined with regard to decision-making.

  • Starting with business logic one should factor out decision points and associated information. That will determine the structure of symbolic representations and functional units.
  • Then, one may derive decision-making roles, together with implicit authorizations and access constraints. That will determine the structure of I/O flows and the logic of interactions.
  • Finally, the functional architecture will have to take into account synchronization and deployment constraints on events notification to and from processes.
Who should know What, Where, and When

That can be condensed into the second guideline of system governance:

Guideline #2 (functionalities): With regard to enterprise governance, the role of systems is to collect data and process it into information organized along enterprise concerns and objectives, enabling decision makers to select and pull relevant information and translate it into knowledge.

Qualified Information: The Veils of Ignorance

Ideally, decision-making should be neatly organized with regard to contexts and concerns:

  • Contexts are set by architecture layers: enterprise organization, system functionalities, platforms technology.
  • Concerns are best identified through processes: business, engineering, or supporting services.
Qualified Information Flows across Architectures and Processes

Actually, decision scopes overlap and their outcomes are interwoven.

While distinctions with regard to contexts are supposedly built in at the source (enterprise, systems, platforms), that’s not the case for concerns, whose distinction usually calls for reasoned choices supported by different layers of governance:

  • Assets: shared decisions whose outcome bears upon several business domains and cycles. Those decisions may affect all architecture layers: enterprise (organization), systems (services), or platforms (purchased software packages).
  • Users’ Value: streamlined decisions governed by well identified business units providing for straight dependencies from enterprise (business requirements), to systems (functional requirements) and platforms (users’ entry points).
  • Non-functional: shared decisions about scale and performance affecting users’ experience (organization), engineering (technical requirements), or resources (operational requirements).
Qualified Information and Decision Making

As epitomized by non-functional requirements, those layers of governance don’t necessarily coincide with the distinction between business, engineering, and servicing concerns. Yet, one should expect the nature of decisions to be set prior to actual decision-making, and decision makers to be presented with only the relevant information; for instance:

  • Functional requirements should be decided given business requirements and services architecture.
  • Scalability (operational requirements) should be decided with regard to enterprise’s objectives and organization.

Hence the third guideline of system governance:

Guideline #3 (visibility): Systems must feed processes with qualified information according to the contexts (business, organization, platforms) and governance levels (assets, users’ value, operations) of decision makers.

Risks & Opportunities: Mining Beyond Visibility

Long secure behind organizational and technical fences, enterprises must now navigate through open digitized business environments and markets. For business processes it means a seamless integration with supporting applications; for corporate governance it means keeping track of risks and opportunities in changing business contexts while assessing the capability of organizations and systems to cope, adjust, and improve.

On one hand risks and opportunities take root beyond the horizon and are not supposed to square with established information models; on the other hand deep learning technologies are revolutionizing data analytics. Ontologies could then be used to bring the modeling of architectures and environments into a common paradigm.

Hence the fourth guideline of system governance:

Guideline #4 (uncertainty): assuming that a business edge is built on unqualified information about risks or opportunities, explicit and implicit knowledge should be processed within shared conceptual frames; that can be achieved with ontologies.

Enterprise Architectures & Separation of Concerns

Systems are more than Software

As long as information was just data files and systems just computers, the respective roles of enterprise and IT architectures could be overlooked; but that has become a hazardous neglect with distributed systems pervading every corner of enterprises, monitoring every motion, and sending axons and dendrites all around to colonize environments.

Enterprise Governance and Separation of Concerns (R. Magritte)

Yet, the overlapping of enterprise and systems footprints doesn’t mean they should be merged; as a matter of fact, quite the opposite. When the divide between business and technology concerns was clearly set, casual governance was of no consequence; now that turfs are more and more entwined, dedicated policies are required lest decisions be blurred and entropy escalate as a consequence. The need for better focus is best illustrated by the sundry meanings given to “system”, from computers running software to enterprises running businesses, and even schools of thought.

Concerns in Perspectives

As far as enterprises are concerned, systems combine human beings, devices, and software components.

From a functional perspective, their capabilities are best defined by their interactions with their environment as well as between their constituents:

  • Users are supposed to be actual agents granted organizational status and responsibilities, and possessing symbolic communication capabilities.
  • Software components and actual devices cannot be granted organizational status or responsibilities; the former come with symbolic communication capabilities, the latter without.

Assuming that interactions are governed by information, the objective is to understand the contribution of each type of component with regard to information processing and decision-making.

System = Agents + Devices + Symbolic representations

From an engineering perspective, the building of systems is all too often reduced to the development of their software constituents. As a consequence, the complexity of the forest is masked by the singularity of the trees:

  • The business value of applications is assessed locally instead of being driven by enterprise objectives and organizational constructs.
  • System models are confused with the programs used to produce software components.
  • Requirements life cycles, governed by the time-span of business contexts and objectives, are confused with the cycles of reuse of architecture assets.

Since engineering agendas are supposed to support business objectives, their decision-making processes must be aligned yet managed independently. That reasoning also applies to services management, whose role is to adjust resources and software releases to operational needs.

Enterprise architectures can then be described as a cross between architecture assets (business objects, organization, technical architecture) on one hand, and core processes for business, engineering, and services management on the other hand.

EA Artifacts at crossroads between Architectures and Activities

Information is to provide the glue between architecture assets and supported processes.

Information and Architectures Levels

As understood by Cybernetics (see Stafford Beer, “Diagnosing the System for Organizations“), enterprises are viable systems whose success depends on their capacity to countermand entropy, i.e the progressive downgrading of the information used to govern interactions between systems and their environment. And that puts knowledge management at the core of systems capabilities.

Knowledge is best defined as information put to use, with information obtained by adding references and sense to data. That blueprint is supposed to be repeated at all architecture levels:

  • At enterprise level facts pertaining to the conduct of business are captured from environments before being organized into information meant to support enterprise governance.
  • System level deals with symbolic descriptions of functionalities (information). At this level data is irrelevant, the objective being the consolidation and reuse of shared representations and patterns (knowledge).
  • The technical level is in charge of operational governance: software components, platforms capabilities, technical resources, communication mechanisms, etc. On one hand contexts are to be monitored and data translated into information; on the other hand operational decisions have to be made (knowledge) and executed, which means information translated into data.
Architecture Layers and Processing Capabilities

If architecture levels can be characterized by processing capabilities, their engineering must be aligned to the corresponding objectives and time frames.

System Engineering and Separation of Concerns

As demonstrated time and again by blame games around projects failures, enterprise and technical concerns are poor engineering bedfellows, and with good reasons: different contexts, concerns, skills, and time scales. Faced with the challenge of bringing enterprise and development perspectives under a single governance, engineering approaches generally follow one of two basic options:

  • Phased projects (P) give precedence to software development, with requirements supposed to be set at inception and acceptance performed at completion.
  • Agile projects (A) give precedence to business requirements, with development iterations combining specifications, programming, and acceptance.
Phased (P) and Agile (A) development models

While each approach has its merits (agile for complex but well circumscribed projects, phased for large projects with external organizational dependencies), the difficulties each may encounter point to the crux of the matter, namely the separation of concerns between business goals and information technology, bypassed by agile approaches and misplaced by phased ones:

  • Agile development models are driven by users’ value and based on the implicit assumption that business and technology concerns can be dealt with continuously and simultaneously. That may be difficult when external dependencies cannot be avoided and shared ownership cannot be guaranteed.
  • Phased development models take a mirror position by assuming that business concerns can be settled upfront, which is clearly a very hazardous policy.

Those pitfalls may be overcome if engineering processes take into account the distinction between knowledge management on one hand, and software development on the other:

  • Knowledge management (KM) encompasses all information pertaining to the conduct of business: environment, markets, objectives, organization, and projects. With regard to engineering projects, its role is to define and consolidate the descriptions of symbolic representations to be supported by information systems.
  • Software development (SD) starts with symbolic descriptions and proceeds with the definition, building and acceptance of the corresponding software artifacts.
  • Service management (SM) provides the bridge between engineering and operational processes.

Capture of information from data and legacy code can also be achieved respectively by data mining and reverse engineering.

System Engineering = Knowledge (KM) + Software (SD) + Service (SM)

Depending on organizational or technical dependencies, knowledge management and software development will be carried out within integrated development cycles (agile processes), or will have to be phased in order to consolidate the different concerns.

That engineering distinction neatly coincides with the functional divide of architectures: knowledge management supporting enterprise architecture, software development supporting system architecture.

Architectures and Engineering Processes

Business processes are governed by collective representations mixing goals, models, and rules, not necessarily formally defined, and subject to change with opportunities. Engineering processes, for their part, have to be materialized through work units, models, and products, all of them explicitly defined, with limited room for change, and set along constraining schedules. Given that systems’ fate hangs on the hinge between business and engineering concerns, the corresponding perspectives must be properly aligned.

That can be achieved if workshops are associated with architecture levels, and work units defined according to model layers and development flows:

  • Knowledge management takes charge of symbolic descriptions pertaining to enterprise concerns. Its objective is to map business and operational requirements with the functionalities of supporting systems. Expressed in MDA parlance, the former would be described by computation independent models (CIMs), the latter by platform independent models (PIMs).
  • Software development deals with software artifacts. That encompasses the consolidation of symbolic descriptions into functional architectures, the design of software components according to platform specifics (PSMs), and the production of code according to deployment targets (DPMs).
  • IT Service Management is the counterpart of knowledge management for actual operations and resources. Its objective is to synchronize business and development time-frames and align operational requirements with releases and resources.
Knowledge Management (KM), Software Development (SD), Services Management (SM)

That congruence between architecture divides (enterprise, systems, technology), models layers (CIMs, PIMs, PSMs, DPMs), and engineering concerns (facts or legacy, information, knowledge), provides a reasoned and comprehensive framework for enterprise architectures.

Architecture Capabilities & Separation of Concerns

Architectures describe stocks of shared assets, processes describe flows of changes. Given a hierarchized description of architectures, the objective is to ensure the traceability of concerns and decisions across levels and processes.

The first step is to anchor requirements to architecture capabilities (a sketch follows the list):

  • Who: enterprise roles, system users, platform entry points.
  • What: business objects, symbolic representations, objects implementation.
  • How: business logic, system applications, software components.
  • When: processes synchronization, communication architecture, communication mechanisms.
  • Where: business sites, systems locations, platform resources.
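A minimal sketch of that grid, with entries copied from the list above (the dict layout itself is only illustrative):

```python
# Capability x layer grid; entries taken from the anchoring list above.
CAPABILITIES = {
    "who":   ("enterprise roles", "system users", "platform entry points"),
    "what":  ("business objects", "symbolic representations", "objects implementation"),
    "how":   ("business logic", "system applications", "software components"),
    "when":  ("processes synchronization", "communication architecture", "communication mechanisms"),
    "where": ("business sites", "systems locations", "platform resources"),
}
LAYERS = ("enterprise", "system", "platform")

def anchor(capability: str, layer: str) -> str:
    """Return the architecture anchor of a requirement."""
    return CAPABILITIES[capability][LAYERS.index(layer)]

print(anchor("what", "system"))  # -> symbolic representations
```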

The rationale of objectives and decisions (the “Why” of Zachman framework) is expressed by dependency links according to the nature of primary factors:

  • Deontic dependencies are set by external factors, e.g regulatory context, communication technologies, or legacy systems.
  • Alethic (aka modal) dependencies are set by internal policies, based on the assessment of options regarding objectives as well as solutions.
Architecture Capabilities and Processes, with deontic (basic) and alethic (dashed) dependencies.

This distinction between dependencies is critical for decision-making and consequently for enterprise governance. Set within time-frames and decorated with time related features, those dependencies can then be consolidated into differentiated strategies for business, engineering and operational processes.

Given that dependencies are usually interwoven, governance must be aligned with the footprints of associated decisions:

  • Assets: shared decisions affecting different business processes (organization), applications (services), or platforms (purchased software packages). 
  • Users’ Value: streamlined decisions governed by well identified business units providing for straight dependencies from enterprise (business requirements), to systems (functional requirements) and platforms (users’ entry points).
  • Non-functional: shared decisions about scale and performance affecting users’ experience (organization), engineering (technical requirements), or resources (operational requirements).
Separation of Concerns and Requirements Taxonomy

That classification can be used to choose a development model:

  • Agile approaches should be the option of choice for requirements neatly anchored to users’ value.
  • Phased approaches should be preferred for projects targeting shared assets across architecture levels. When shared assets are within the same level, epic-like variants of agile may also be considered.

Architectures, Processes, and Governance

While there is some consensus about the scope and concerns of Enterprise Architecture as a discipline, some debate remains about the relationship between enterprise and IT governance.

Beyond turf quarrels, arguments are essentially rooted in the distinction between information and supporting systems or, more generally, between organization and IT.

To some extent, those arguments can be ironed out if governance is set with regard to scope, actual or symbolic:

  • Actual scope deals with current or planned business objects, assets, and processes.
  • Symbolic scope deals with the design of the corresponding software components.
What is to be governed: actual and symbolic scopes

That distinction matches the divide between enterprise and systems architectures: one set of models deals with enterprise objectives, assets, and organization, the other one deals with system components.

With regard to processes, governance should distinguish between business and engineering:

  • Business processes are clearly designed at enterprise level, both for their organization and the symbolic description of system representations.
  • Software engineering ones are under the responsibility of IT governance, from system and software architecture to release and deployment.
Process and architecture perspectives for Governance

While governance could be shared, there is no reason to assume an immediate and continuous alignment of perspectives; that could only be achieved through the architecture perspective:

  • Actual architectures encompass business organization (locations, agents, devices) and systems deployments.
  • Symbolic architectures include systems functionalities and software development.

And, as a mirror achievement, processes take charge of transitions between actual and symbolic architectures:

Processes carry out Architectures alignment (Pb,Pe), Architectures support Processes consistency (As, Ap).
  • Symbolic architectures provide the mediation between business requirements and software development  (As).
  • Physical architectures do the same between software development and services management (Ap).
  • Business processes take responsibility for the mapping of enterprise architecture into symbolic representations (Pb).
  • Engineering processes do the same for the alignment of software and deployment architectures (Pe).

Enterprise and IT governance can then be defined in terms of Knowledge management, with engineering and business processes gathering data from their respective realms, sharing the information through architectures, and putting it to use according to their respective concerns.

Architectures of Knowledge

“Nothing that is worth knowing can be taught” 

Oscar Wilde

Objective

As illustrated by the Symbolic Systems Program (SSP) at Stanford University, advances in computing and communication technologies bring information and knowledge systems under a single functional roof, namely the processing of symbolic representations.

Information and Knowledge:  Acquisition, Use and Reuse (R. Doisneau)

Within that understanding, one would expect Knowledge Management to shadow systems architectures and concerns: business contexts and objectives, enterprise organization and operations, systems functionalities and technologies. On the other hand, knowledge being by nature a shared resource of reusable assets, its organization should support the needs of its different users independently of the origin and nature of information. Knowledge Management should therefore bind knowledge of architectures with architecture of knowledge.

Knowledge Representation

In their pivotal article “What Is a Knowledge Representation?”, Davis, Shrobe, and Szolovits set five principles for knowledge representation:

  1. Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
  2. Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or of what can be done with them.
  4. Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: one of the KR prerequisites is to improve the communication between specific domain experts on one hand, and generic knowledge managers on the other hand.
Surrogates without Ontological Commitment

The only difference is about coupling: contrary to knowledge systems, information and control ones play a role in their context, and operations on surrogates are not neutral.

Knowledge Archeology

Knowledge constructs are empty boxes that must be properly filled with facts. But facts are not given: they must be observed, which necessarily entails some observer, set on task if not with vested interests, and some apparatus, natural or made on purpose. And if they are to be recorded, even “pure” facts observed through the naked eyes of innocent children will have to be translated into some symbolic representation. Taking wind as an example, wind socks support immediate observation of facts, free of any symbolic meaning. In order to make sense of wind behaviors, vanes and anemometers are necessary, respectively for azimuth and speed; but that also requires symbolic frameworks for directions and metrics. Finally, knowledge about the risks of strong winds can be added when such risks must be considered.

Fact, Information, Knowledge

As far as enterprises are concerned, knowledge boxes are to be filled with facts about their business context and processes, organization and applications, and technical platforms. Some of them will be produced internally, others obtained from external sources, but all should be managed independently of specific purposes. Whatever their nature (business, organization or systems), information produced by the enterprises themselves is, from inception, ready to use, i.e organized around identified objects or processes, with defined structures and semantics. That’s not necessarily the case with data reflecting external contexts (markets, regulations, technology, etc), which must be mapped to enterprise concerns and objectives before being of any use. That translation of data into information may be done immediately, by mapping data semantics to identified objects and processes; it may also be delayed, with rough data managed as such until being used at a later stage to build information.

From Data to Knowledge

From Data to Information

Information is meaningful, data is not. Even “facts” are not manna from heaven but have to be shaped from phenomena into data and then information, as epitomized by binary, fragmented, or “big” data.

  • Binary data are direct recordings of physical phenomena, e.g sounds or images; even when indexed with key words they remain useless until associated, as non-symbolic features, with identified objects or activities.
  • Contrary to binary data, fragmented data comes in symbolic guise, but as floating nuggets with sub-level granularity; and like their binary cousins, those fine-grained descriptions are meaningless until attached to identified objects or activities.
  • “Big” data is usually understood in terms of scalability, as it refers to lumps too large to be processed individually. It can also be defined as a generalization of fragmented data, with identified targets regrouped into more meaningful aggregates, moving the targeted granularity up the scale to some “overwhelming” level.

Since knowledge can only be built from symbolic descriptions, data must be first translated into information made of identified and structured units with associated semantics. Faced with “rough” (aka unprocessed) data, knowledge managers can choose between two policies: information can be “mined” from data using statistical means, or the information stage simply bypassed and data directly used (aka interpreted) by “knowledgeable” agents according to their context and concerns.

Non symbolic data can be interpreted by “knowledgeable” agents according to their concerns.

As a matter of fact, both policies rely on knowledgeable agents, the question being who the “miners” are and what they should know. Theoretically, miners could be fully automated tools able to extract patterns of relevant information from rough data without any prior information; practically, such tools will have to be fed with some prior “intelligence” regarding what should be looked for, e.g samples for neural networks, or variables for statistical regression. Hence the need for some kind of formats, blueprints, or templates that will help to frame rough data into information.

Information Properties

Knowledge must be built from accurate and up-to-date information regarding external and internal state of affairs, and for that purpose information items must be managed according to their source, nature, life-cycle, and relevancy:

  • Source: Government and administrations, NGO, corporate media, social media, enterprises, systems, etc.
  • Nature: events, decisions, data, opinions, assessments, etc.
  • Type of anchor: individual, institution, time, space, etc.
  • Life-cycle: instant, time-related, final.
  • Relevancy: traceability with regard to business objectives, business operations, organization and systems management.
Information must be timely, understandable, and relevant
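Managing items along those attributes amounts to attaching a metadata record to each information item; a minimal sketch, with illustrative field values:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative metadata record; the fields follow the taxonomy above,
# the values are assumptions made up for the example.
@dataclass
class InformationItem:
    content: str
    source: str        # government, NGO, media, enterprise, system, ...
    nature: str        # event, decision, data, opinion, assessment, ...
    anchor: str        # individual, institution, time, space, ...
    life_cycle: str    # instant, time-related, final
    relevancy: str     # objectives, operations, organization, systems
    recorded: date     # basis for assessing obsolescence

item = InformationItem("Q3 sales up 4%", source="enterprise",
                       nature="assessment", anchor="institution",
                       life_cycle="time-related",
                       relevancy="business objectives",
                       recorded=date(2015, 10, 1))
```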

On that basis, knowledge management will have to map knowledge to its information footprint in terms of reliability (source, accuracy, consistency, obsolescence, etc) and risks.

From Information to Knowledge

Information is meaningful, knowledge is also useful. As information models, knowledge representations must first be anchored to persistency and execution units in order to support the consistency and continuity of surrogates’ identities (principle #1). Those anchors are to be assigned to domains managed by single organizational units in charge of ontological commitments, and enriched with structures, features, and associations (principle #2). Depending on their scope, structure or feature, semantics are to be managed respectively by persistent or application domains. Likewise, ontologies may target objects or aspects, the former being associated with structural sub-types, the latter with functional ones.

Differences between information models and knowledge representations appear with rules and constraints. While the objective of information and control systems is to manage business objects and activities, the purpose of knowledge systems is to manage symbolic contents independently of their actual counterparts (principle #3). Standard rules used in system modelling describe allowed operations on objects, activities, and associated information; they can be expressed forward or backward:

  • Forward (aka push) rules are conditions on when and how operations are to be performed.
  • Backward (aka pull) rules are constraints on the consistency of symbolic representations or on the execution of operations.
Standard Rules
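A minimal sketch of the distinction, assuming a hypothetical bank account: the forward rule governs when the operation may proceed, the backward rule checks the consistency of the resulting state:

```python
# Hypothetical account with a forward (push) and a backward (pull) rule.
account = {"balance": 100.0, "overdraft": 50.0}

def withdraw(amount: float) -> None:
    # Forward rule: when and how the operation is to be performed.
    if amount <= account["balance"] + account["overdraft"]:
        account["balance"] -= amount
    # Backward rule: constraint on the consistency of the representation.
    assert account["balance"] >= -account["overdraft"], "inconsistent state"

withdraw(120.0)  # allowed by the forward rule, passes the backward check
```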

Assuming a continuity between information and knowledge representations, the inflection point would be marked by the introduction of modalities used to qualify truth values, e.g according to temporal and fuzzy logic:

  • Temporal extensions put time stamps on the truth values of information.
  • Fuzzy logic puts confidence levels on the truth values of information.

That is where knowledge systems depart from information and control ones as they introduce a new theory of intelligent reasoning, one based upon the fluidity and volatility of knowledge.
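Such qualified truth values can be sketched as statements carrying both a time stamp and a confidence level (names and figures are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime

# A truth value qualified by modalities: a time stamp (temporal logic)
# and a confidence level (fuzzy logic).
@dataclass
class QualifiedFact:
    statement: str
    holds: bool
    as_of: datetime     # temporal qualification
    confidence: float   # fuzzy qualification, between 0 and 1

fact = QualifiedFact("customer #42 is solvent", True,
                     as_of=datetime(2016, 6, 1), confidence=0.8)
```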

Meanings are in the Hands of Beholders

Seen in a corporate context, knowledge can be understood as information framed by contexts and driven by purposes: how to run a business, how to develop applications, how to manage systems. Hence the dual perspective: on one hand information is governed by enterprise concerns, systems functionalities, and platforms technology; on the other hand knowledge is driven by business processes, systems engineering, and services management.

Knowledge of Architectures, Architecture of Knowledge.

That provides a clear and comprehensive taxonomy of artifacts, to be used to build knowledge from lower layers of information and data:

  • Business analysts have to know about business domains and activities, organization and applications, and quality of service.
  • System engineers have to know about projects, systems functionalities and platform implementations.
  • System managers have to know about locations and operations, services, and platform deployments.

The dual perspective also points to the dynamics of knowledge, with information being pushed by its sources, and knowledge being pulled by its users.

A Time for Every Purpose

As understood by Cybernetics, enterprises are viable systems whose success depends on their capacity to countermand entropy, i.e the progressive downgrading of the information used to govern interactions both within the organization itself and with its environment. Compared to architecture knowledge, which is organized according to information contents, knowledge architecture is organized according to functional concerns and information lifespan, and its objective is to keep internal and external information in sync:

  • Planning of business objectives and requirements (internal) relative to markets evolutions and opportunities (external).
  • Assessment of organizational units and procedures (internal) in line with regulatory and contractual environments (external).
  • Monitoring of operations and projects (internal) together with sales and supply chains (external).
Knowledge Architecture and Shearing Layers: strategy at leisure, time for plans, real-time operations.

That puts meanings (that would be knowledge) in the hands of decision makers, respectively for corporate strategy, organization, and operations. Moreover, enterprises being living entities, lifespans and functional sustainability are meant to coalesce into consistent and homogeneous layers:

  • Enterprise (aka business, aka strategic) time-scales are defined by environments, objectives, and investment decisions.
  • Organization (aka functional) time-scales are set by availability, versatility, and adaptability of resources.
  • Operational time-scales are determined by process features and constraints.

Such a congruence of time-scales, architectures and purposes into Shearing Layers is arguably a key success factor of Knowledge management.

Search and Stretch

As already noted, knowledge is driven by purposes, and purposes, not being confined to domains or preserves, are bound to stretch knowledge across business contexts and organizational boundaries. That can be achieved through search, logic, and classification.

  • Searches collect the information relevant to users concerns (1). That may satisfy all the knowledge needs, or provide a backbone for further extension.
  • Searches can be combined with ontologies (aka classifications) that put the same information under new lights (1b).
  • Truth-preserving operations using mathematics or formal languages can be applied to produce derived information (2).
  • Finally, new information with reduced confidence levels can be produced through statistical processing (3,4).

For instance, observed traffic at toll roads (1) is used for accounting purposes (2), to forecast traffic evolution (3), to analyze seasonal trends (1b) and simulate seasonal and variable tolls (4).

Observed facts (1), deductions (2), projections (3), transposition (1b) and hypothesis (4).
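With made-up figures, the toll-road example can be played out end to end; note how the deduction (2) preserves confidence while the projection (3) and the simulation (4) reduce it:

```python
# Made-up toll-road figures; the numbering follows the example above.
daily_traffic = [12000, 12500, 11800, 13000]           # (1) observed facts
toll = 2.5

revenue = sum(daily_traffic) * toll                    # (2) truth-preserving deduction

average = sum(daily_traffic) / len(daily_traffic)
forecast = average * 1.03                              # (3) projection, reduced confidence

seasons = {"summer": daily_traffic[:2],
           "winter": daily_traffic[2:]}                # (1b) reclassification
simulated = {s: (sum(t) / len(t)) * 3.0                # (4) hypothesis: variable toll
             for s, t in seasons.items()}
```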

Those operations entail clear consequences for knowledge management. As long as computational distances don’t affect confidence levels, truth-preserving operations are neutral with regard to KM. Classifications are symbolic tools designed on purpose; as a consequence, all knowledge associated with a classification should remain under the responsibility of its designer. Challenges arise when confidence levels are affected, either directly or through obsolescence. And since decision-making is essentially about risk management, dealing with partial or unreliable information cannot be avoided. Hence the importance of managing knowledge along shearing layers, each with its own information life-cycle, confidence requirements, and decision-making rules.

From Knowledge Architecture to Architecture Capability

Knowledge architecture is the corporate central nervous system, and as such it plays a primary role in the support of operational and managerial processes. That point is partially addressed by frameworks like Zachman’s, whose matrix organizes Information System Architecture (ISA) along capabilities and design levels. Yet, as illustrated by the design levels, the focus remains on information technology, without explicitly addressing the distinction between enterprise, systems, and platforms.

Capabilities can be defined across architecture layers with regard to business, engineering, and operational processes

That distinction is pivotal because it governs the distinction between the corresponding processes, namely business processes, systems engineering, and services management. And once the distinction is properly established, knowledge architecture can be aligned with processes assessment.

Yet that will not be enough now that digital environments are invading enterprise systems, blurring the distinction between managed information assets and the continuous flows of big data.

The outer range anchors enterprise architecture and objectives to business environments

That puts the focus on two structural flaws of enterprise architectures:

  • The confusion between data, information, and knowledge.
  • The intrinsic discrepancy between systems and knowledge architectures.

Both can be overcome by merging system and knowledge architectures along the Pagoda blueprint:

Pagoda Architecture Blueprint

The alignment of platforms, systems functionalities, and enterprise organization respectively with data (environments), information (symbolic representations), and knowledge (business intelligence) would greatly enhance the traceability of transformations induced by the immersion of enterprises in digital environments.

Knowledge Representation & Profiled Ontologies

Faced with digital business environments, enterprises must sort relevant and accurate information out of continuous and massive inflows of data. As modeling methods cannot cope with the open range of contexts, concerns, semantics, and formats, looser schemes are needed; that’s precisely what ontologies are meant to do:

  • Thesaurus: ontologies covering terms and concepts.
  • Documents: ontologies covering documents with regard to topics.
  • Business: ontologies of relevant enterprise organization and business objects and activities.
  • Engineering: symbolic representation of organization and business objects and activities.

Ontologies: Purposes & Targets

Profiled ontologies can then be designed by combining that taxonomy of concerns with contexts, e.g:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accords.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).
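A profile can thus be sketched as a pairing of concern and context, each carrying its change policy; the class below is an illustration, not part of the CaKe kernel:

```python
from dataclasses import dataclass

# Illustrative pairing of a concern with a context; the categories are
# taken from the two lists above, the class itself is an assumption.
@dataclass(frozen=True)
class OntologyProfile:
    concern: str        # thesaurus, documents, business, engineering
    context: str        # institutional, professional, corporate, social, personal
    change_policy: str

regulatory = OntologyProfile("business", "institutional",
                             "changes subject to established procedures")
```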

Last but not least, external (regulatory, businesses, …) and internal (i.e enterprise architecture) ontologies could be integrated, for instance with the Zachman framework:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

Using profiled ontologies to manage enterprise architecture and corporate knowledge will help to align knowledge management with EA governance by setting apart ontologies defined externally (e.g regulations) from the ones set through decision-making, whether strategic (e.g platforms) or tactical (e.g partnerships).

An ontological kernel has been developed as a Proof of Concept using Protégé/OWL 2; a beta version is available for comments on the Stanford/Protégé portal with the link: Caminao Ontological Kernel (CaKe).

From Data Analysis to Deep Learning

Set between the all-inclusive onslaught of data on one side and pervasive smart bots on the other, information systems could lose their identity and purpose. And there is a good reason for that, namely the confusion between data, information, and knowledge.

Knowledge is the ability to make differences

As it happens, ontologies were thought up aeons ago to deal with that very issue.

The Economics of Reuse

Objectives

Reusing artifacts means using them in contexts that are different from their native ones. That may come by design, when specifications can anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on.

Economics of Reuse (Sergio Silva)

Planned or opportunistic, reuse brings benefits in terms of costs, quality, and continuity:

  • Cost benefits are most easily achieved for component engineering but may also be obtained upstream with model reuse and patterns.
  • Quality benefits are first and foremost rooted at model level, especially when components implementation is supported by automated tools.
  • Continuity benefits are to be found both along the business (semantics and business rules) and engineering (functional architecture and platform implementations) perspectives.

Reuse policies may also bring positive externalities by inducing a comprehensive approach to software design.

Yet, those policies will usually entail costs and may as well bear negative externalities:

  • Artifacts designed for reuse are usually more costly to develop, even if part of the additional costs should be ascribed to quality management.
  • Excessive enforcement policies may significantly hamper projects’ ability to meet business needs in time.
  • Managing reusable assets usually induces overheads.

In order to assess those policies, the economics of reuse must be set across business, engineering, and architecture perspectives:

  • Business perspective: how to factor out and reuse artifacts associated with the knowledge of business domains when system functionalities or platforms are modified.
  • Engineering perspective: how to reuse development artifacts when business contents or system platforms are modified.
  • Architecture perspective: how to use system components independently of changes affecting business contents or development artifacts.

That can be achieved by managing development assets along model driven architecture: CIMs for business and enterprise architecture, PIMs for systems functional architecture, and PSMs for systems technical architecture.

Contexts & Concerns

Whatever their inception, reused artifacts are meant to be used independently of their native context and concerns: opportunistic reuse will map a specific purpose to another one, planned reuse will map a shared concern to a specific purpose. As a corollary, reuse policies must be supported by some traceability mechanism linking concerns and purposes across contexts and architectures.

From the enterprise perspective, the problem is to reuse the knowledge of business domains and processes. For that purpose different mechanisms can be considered:

  • The simplest solution is to reuse generic components, with the business knowledge directly transferred through parameters.
  • Similarly, business rules can be separately edited and managed in business contexts before being executed by system components.
  • One step further, business semantics and rules can be fenced off with domains and applied to different objects and applications.
  • Finally, models of business objects and processes can be capitalized and managed as reusable assets.

Reuse Policies with Model Driven Architecture
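The first two mechanisms can be illustrated with a hypothetical pricing component: business knowledge enters through parameters, and the rule set is edited and owned in a business context, separately from the executing code:

```python
# Hypothetical generic component: business knowledge comes in through
# parameters instead of being hard-coded.
def apply_discount(order_total: float, rules: dict) -> float:
    rate = rules["discount_rate"] if order_total >= rules["threshold"] else 0.0
    return order_total * (1 - rate)

# Rule set edited and managed in a business context, outside the component.
SUMMER_RULES = {"threshold": 100.0, "discount_rate": 0.05}

print(apply_discount(120.0, SUMMER_RULES))  # -> 114.0
```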

Once business knowledge is duly capitalized as functional assets, it can be reused along the engineering perspective:

  • System functionalities: functional patterns are (re)used to map functional requirements to functional architecture, and services are (re)used to support business processes.
  • Software architecture: Object and aspect oriented designs, using inheritance and polymorphism.
  • Software implementations: Component-based development and information hiding.

Along the architecture perspective information hiding is generalized to systems, and reuse is masked by the definition of services.

It must be noted that, contrary to misleading similarities, refactoring is the opposite of reuse: instead of building from well understood and safe artifacts, it tries to extract some reusable chunks from opaque and unsafe components.

Knowledge Reuse

Enterprise and business knowledge may affect the full scope of system functionalities: boundaries (e.g users authorizations), controls (e.g accounting rules), and persistency (e.g consistency constraints). Whereas there isn’t much to argue about the benefits of reusing enterprise and business knowledge, costs may significantly diverge depending on the way corresponding assets are managed:

  • Domain specific knowledge is rooted at requirements level. That’s typically achieved when use cases are introduced to describe how systems are meant to support business processes. With different use cases targeting different aspects of the same business objects and processes, overlaps must be identified and factored out in order to be reused across processes.
  • Business knowledge may also be global, i.e shared at enterprise architecture level, defining objectives, assets and organization associated with the continuity of corporate identity and business capabilities within a regulatory and market environment.

Use cases access respectively shared (a,b) and specific (c,d) knowledge for objects and application domains

In any case, the challenge is to map business knowledge to system models, more precisely to embed reused descriptions of business objects and process to corresponding development artifacts. At architecture level the mapping should target objects or processes identities, at domain level the focus will be on aspects and views.

As epitomized by service oriented architectures, business architectures can be mapped to system ones through delegation, either directly (business processes calling on services), or indirectly (collaboration between services). That will establish a clear distinction between shared (aka global) and domain specific knowledge, and consequently between respective economics of reuse.

  • Given that shared knowledge must be reused across domains and applications, there can be no argument about its benefits. That will be achieved by a messaging model built on a need-to-know basis. And since such a model is an intrinsic feature of the functional architecture, it incurs no additional overheads.
  • Specific knowledge, for its part, is managed at domain level and therefore masked behind services interfaces. Whatever reuse occurs there remains local and an intrinsic part of domain design.

Knowledge Reuse through services (a), boundaries (b), controls (c), and entities (d).

Things are not so clear when business knowledge is not managed by services but distributed across domains, mixing specific and global knowledge. Managing reusable assets would be easy were the distinction between business and functional requirements to coincide with the one between shared and specific knowledge; unfortunately that’s seldom the case, and requirements, functional or business, will have to be sorted out at architecture or design level.

Whereas Service Oriented Architectures (SOA) put functionalities in the driving seat, Enterprise Application Integration (EAI) gives the lead to applications for which it provides adapters. As maintenance of integration adapters is a very poor substitute for knowledge management, reuse is mostly limited to legacy applications.

Maintaining adapters across layers of applications induces significant overheads

At design level, knowledge is woven into canonical data models (entities), functional architectures (controls), and user interface frameworks (boundaries).

Mixed concerns: business requirements can be specific, functional ones can be global.

On one hand, tracing reuse to requirements may be problematic as they are by nature concrete and unstructured, hence not the best support for generalization or the factoring out of shared features. Assuming business analysts can nonetheless separate reusable wheat from specific chaff, knowledge management at this level will require a dedicated framework supporting comprehensive and differentiated traceability. Additional overheads will have to be taken into account and compared to potential benefits.

On the other hand, canonical data models and functional architectures are meant to provide unified views of shared objects and semantic domains. Yet, canonical models are by nature unwieldy as they carve structures, features, and connections of business objects, without clear mechanisms to combine shared and specific knowledge. As a corollary, their use may reduce flexibility, and their management usually induces significant overheads.

Artifacts Reuse

With enterprise and business knowledge capitalized as development assets, the engineering case for reuse may appear indisputable, but business cases are often much more controversial due to large overheads and fleeting returns. Taking cues from Barry Boehm (“Managing Software Productivity and Reuse”), here are some of the main pitfalls of artifacts reuse:

  • Repository delusion: knowledge being by nature contextual, its reuse is driven by circumstances and purposes; as a consequence the availability of large repositories of development assets will probably be ignored without clear pointers rooted in contexts and concerns.
  • Confusion between components (or structures) and functionalities (or interfaces): under the influence of the object oriented paradigm, the distinction between objects and aspects is all too often forgotten. That’s unfortunate as this difference is congruent with the one between business objects on one hand, business operations on the other hand.
  • Over generalization: reuse is usually achieved by factoring out useful aspects or factoring off useless ones. In both cases the temptation is to repeat the operation until nothing can be added to the scope. Such “flight for abstraction” will inevitably overtake the proper level of reuse and beget models void of any anchor to business relevancy.
  • Scalability: while reuse is about separation of concerns and complexity management, those two criteria don’t have to pull in the same direction. When they don’t, variants will be dispersed across artifacts and their processing will suffer a combinatorial explosion if the system has to be scaled up.
  • Obsolescence: shelf lives of development assets can be set by business relevancy, technical relevancy, or both. Whether those spans coincide or are managed independently, they should be explicitly taken into account before any reuse.

Those obstacles can be managed provided models sort out reuse concerns, as illustrated below with differentiated inheritance.

Sorting out reuse concerns with differentiated inheritance.
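
In object-oriented terms, a minimal sketch of differentiated inheritance (hypothetical names): aspects are inherited through interfaces and structures through classes, so each reuse concern can be traced on its own.

    // Aspect: a reusable functionality, inherited through interfaces.
    interface Taxable { double taxRate(); }

    // Structure: shared identity and state, inherited through classes.
    abstract class Product {
        final String id;
        Product(String id) { this.id = id; }
    }

    // A concrete artifact combines one structural line with any number of aspects.
    class Book extends Product implements Taxable {
        Book(String id) { super(id); }
        public double taxRate() { return 0.055; } // example value only
    }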

Economics of Reuse and Sustainable Development

Sustainable system development is the ability to meet present business requirements while enhancing the system's capability to support future ones. Clearly reuse is not the only factor of sustainability, with architectures, returns, and risk management being pivotal; but the economics of reuse encompass most of the other factors.

Architectures are clearly first to be considered, as epitomized by MDA:

  • Reuse of development assets rooted in enterprise architecture is not an option: system functionalities are meant to support business processes as they are (a).
  • At the other end of the development process, reuse of software designs and components across technical architectures should bring benefits in quality and costs (c).
  • In between, reuse of system functionalities is necessary to guarantee the robustness and continuity of functional architectures; it should also leverage the benefits of reusing enterprise and development assets (b), as sketched below.

Reuse of models is at the core of the MDA framework
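
As a sketch of that layering, assuming hypothetical names: a platform-independent contract (PIM level) is reused as is across platform-specific implementations (PSM level), so functional assets survive changes of technical architecture.

    // PIM level: a functional contract with no reference to any technology.
    interface Notifier { void send(String recipient, String message); }

    // PSM level: the same functional asset mapped onto two platforms.
    class SmtpNotifier implements Notifier {
        public void send(String r, String m) { System.out.println("SMTP to " + r + ": " + m); }
    }
    class QueueNotifier implements Notifier {
        public void send(String r, String m) { System.out.println("queued for " + r + ": " + m); }
    }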

Regarding returns, reuse through generic components, rules engines, or semantic domains can be directly supported by development tools, bypassing explicit models of functional architectures. That makes their costs/benefits analysis both simpler and well circumscribed. That is not the case for system functionalities which stand at the hub of perspectives. As a consequence, costs/benefits should be analyzed as a whole:

  • Regarding business assets, a clear distinction must be maintained between specific and shared knowledge, reuse being considered for the latter only.
  • Regarding the reuse of business assets as functional ones, services clearly offer the best returns. Otherwise costs/benefits are to be assessed, from reuse of vocabulary and semantic domains (straightforward, limited overheads) to canonical models and enterprise application integration (contingent, significant overheads).
  • Economics of reuse will ultimately be set by traceability mechanisms linking enterprise and business knowledge on one hand, component designs on the other hand. Even for services (c), if to a lesser degree, the business case for reuse will be decided by leveraged benefits and non-cumulative costs. Hence the importance of maintaining the distinction between identified structures and associated aspects from business (a) and functional (b) requirements to component interfaces (d) and structures (e).

Maintaining the distinction between structures and aspects, from business (a) and functional (b) requirements to component interfaces (d) and structures (e).

Finally, reuse may also play a significant role in risk management, especially when risks are managed according to their source:

  • Changes in business contexts usually occur along two frequency waves: short, for market opportunities, and long, for an organization's continuity. Associated risks could be better managed if the corresponding knowledge were managed accordingly.
  • System architectures are meant to evolve in sync with organizational continuity; were technological environments or corporate structures subject to unexpected changes, reusable functional assets would be of great help.
  • Given that enterprise IT can no longer be self-contained and operate in isolation, reusable designs may provide buffers against technological risks and help exploit unexpected business opportunities.

Models, Architectures, Perspectives (MAPs)

What You See Is Not What You Get

Models are representations and as such they are necessarily set in perspective and marked out by concerns.

Model, Perspective, Concern (R. Doisneau).
  • Depending on perspective, models will encompass whole contexts (symbolic, mechanical, and human components), information systems (functional components), or software (component implementations).
  • Depending on concerns, models will take into account responsibilities (enterprise architecture), functionalities (functional architecture), and operations (technical architecture).

While congruence may be a sensible aim, perspectives and concerns do not necessarily coincide: responsibilities or functionalities may cross perspectives (e.g. support units), and perspectives may mix concerns (e.g. legacies and migrations). That conundrum may be resolved by a clear distinction between descriptive and prescriptive models, the former dealing with the problem at hand, the latter with the corresponding solutions, respectively for business, system functionalities, and system implementation.

Models as Knowledge

Assuming that systems are built to manage symbolic representations of business domains and operations, models are best understood as knowledge, as defined by the pivotal article of Davis, Shrobe, and Szolovits (“What Is a Knowledge Representation?”):

  1. Surrogate: models provide the description of symbolic objects standing as counterparts of managed business objects and activities.
  2. Ontological commitments: models include statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: models include statements about what things can do or what can be done with them.
  4. Medium for efficient computation: models must be organized so that computers can process them efficiently; making models understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: models are meant to improve the communication between specific domain experts on the one hand and generic knowledge managers on the other.

Surrogates without Ontological Commitment
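
Those five roles can be hinted at with a small sketch (hypothetical names): the element is a surrogate (1), its category makes an ontological commitment (2), its behavior states a fragment of reasoning (3), and the whole is both computable (4) and readable by domain experts (5).

    // Ontological commitment: the categories of things assumed to exist.
    enum Category { AGENT, RESOURCE, EVENT }

    // Surrogate: symbolic counterpart of a business object or activity.
    record ModelElement(String name, Category category) {
        // Fragmentary theory: a statement of what the thing can do.
        boolean canInitiateActivity() { return category == Category.AGENT; }
    }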

What You Think Is What You Get

Whereas conventional engineering has to deal with physical artifacts, software engineering has only symbolic ones to consider. As a consequence, design models can be processed into products without any physical impediments: “What You Think Is What You Get.”

Products and Usage are two different things

Yet even well designed products are not necessarily used as expected, especially if organizational and business contexts have changed since requirements capture.

Models and Architectures

Models are partial or complete descriptions of existing or intended systems. Given that systems will eventually be implemented by software components, models and programs may overlap, or even be congruent in the case of systems made exclusively of software. Moreover, legacy systems are likely to coexist with models and software components. Such cohabitation calls for some common roof, supported by shared architectures:

  • Enterprise architecture deals with the continuity of business concerns.
  • System architecture deals with the continuity of systems functionalities.
  • Technical architecture deals with the continuity of systems implementations.

That distinction can also be applied to engineering problems and solutions: business (>enterprise), organization (supporting systems), and development (implementations).

Problems and solutions must be set along architecture layers

On that basis the aim of analysis is to define the relationship between business processes and supporting systems, and the aim of design is to do the same between system functionalities and components implementation.

Whatever the terminology (layers or levels), what is at stake is the alignment of two basic scales:

  • Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
  • Models: conceptual (business context and organization), analysis (symbolic representations), design (physical implementation).

Dial M for Models

If systems could be developed along a “fire and forget” procedure, models would be used only once. Since that's not usually the case, bridges between business contexts and supporting systems cannot be burned; models must be built and maintained for both business and system architectures, and the semantics of modeling languages defined accordingly.

Languages, Concerns, Perspectives

Apart from trivial or standalone applications, engineering processes will involve several parties whose collaboration over time will call for sound languages. Programming languages are meant to be executed by symbolic devices, business languages (e.g. BPM) are meant to describe business processes, and modeling languages (e.g. UML) stand somewhere in between.

As far as system engineering is concerned, modeling languages have two main purposes: (1) describe what is expected from the system under consideration, and (2) specify how it should be built. Clearly, the former belongs to the business perspective and must be expressed with its specific words, while the latter can use some “unified” language common to system designers.

The Unified Modeling Language (UML) is the outcome of the collaboration between James Rumbaugh, with his Object Modeling Technique (OMT), Grady Booch, with his eponymous method, and Ivar Jacobson, creator of the Object-Oriented Software Engineering (OOSE) method.

Whereas UML has been accepted as the primary modeling standard since 1995, its scope remains limited and its use shallow. Moreover, when UML is effectively used, it is often for the implementation of Domain Specific Languages based upon its stereotype and profile extensions. Given the broadly recognized merits of core UML constructs, and the lack of alternative solutions, such scant diffusion cannot be fully or even readily explained by subordinate factors. A more likely pivotal factor is the way UML is used, in particular the confusion between perspectives and concerns.

Perspectives and Concerns: business, functionalities, implementation

Languages are useless without pragmatics, which for modeling languages means some methodology defining what is to be modeled, how, by whom, and when. Like pragmatics, methods are diverse, each bringing its own priorities and background, be it modeling concepts (e.g. OOA/D), procedures (e.g. RUP), or agile collaboration principles (e.g. Scrum). As it happens, none deals explicitly with the pivotal challenges of the modeling process, namely perspective (what is modeled) and concern (whose purpose).

In order to meet those challenges, the objective of the Caminao framework is to provide a compass and signposts for road maps using stereotyped UML constructs.

Models, Architectures, Perspectives (MAPs)

From a general perspective, and beyond lexical controversies, models and architectures should be defined along two parallel scales:

  • Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
  • Models: conceptual (business context and organization), analysis (symbolic representations), design (physical implementation).

Caminao maps add perspectives:

  • Models set the stages, where targeted artifacts are defined depending on concerns.
  • Topography puts objects into perspective as set by stakeholders' situations: business objectives, system functionalities, system implementation.
  • Concerns and perspectives must be put into context as defined by enterprise, functional or technical architectures.

The aim of those maps is to support project planning and process assessment:

  • Perspective and concerns: what is at stake, who’s in charge.
  • Milestones: expectations and commitments are set across different organizational units.
  • Planning: development flows are defined between milestones and work units set accordingly.
  • Task traceability to outcomes, together with objective functional metrics, provides for sound project assessment.
  • Processes can be designed, assessed, and improved by matching development patterns with development strategies.

Matching Concerns and Perspectives

As famously explained by Douglas Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid, models cannot be proven true, only shown consistent or disproved.

Depending on language, internal consistency can be checked through reviews (natural language) or using automated tools (formal languages).

Refutation for its part entails checks on external consistency, in other words matching models and concerns across perspectives. For that purpose models must target well-defined sets of identified objects or phenomena and use clear, unambiguous semantics. A simplified (yet versatile) modeling cycle could therefore be exemplified as follows:

  1. Identify a milestone relative to perspective, concern, and architecture.
  2. Select anchors (objects or activities).
  3. Add connectors and features.
  4. Check model for internal consistency.
  5. Check model for external consistency, e.g. refutation by counter-examples.
  6. Iterate from 2.
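
That cycle can be sketched as a tiny harness (hypothetical names): anchors and connectors make up the model, internal consistency checks well-formedness, and external consistency submits the model to refutation by counter-examples.

    import java.util.*;
    import java.util.function.Predicate;

    class Model {
        final Set<String> anchors = new HashSet<>();             // objects or activities (step 2)
        final Map<String, String> connectors = new HashMap<>();  // source -> target (step 3)

        // Internal consistency (step 4): every connector joins known anchors.
        boolean internallyConsistent() {
            return connectors.entrySet().stream()
                .allMatch(e -> anchors.contains(e.getKey()) && anchors.contains(e.getValue()));
        }

        // External consistency (step 5): the model survives all counter-examples.
        boolean survives(List<Predicate<Model>> counterExamples) {
            return counterExamples.stream().noneMatch(c -> c.test(this));
        }
    }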
