EA: Legacy & Latency

 “For things to remain the same, everything must change”
Lampedusa, “The Leopard”

Preamble

Whatever the understanding of the discipline, most EA schemes implicitly assume that enterprise architectures, like their physical cousins, can be built from blueprints. But they cannot be, because enterprises have no “Pause” or “Reset” button: business cannot be put on stand-by and must be carried on while work is in progress.

Refactored Legacy (E. Lusito)

Systems & Enterprises

Systems are variously defined as:

  • “A regularly interacting or interdependent group of items forming a unified whole” (Merriam-Webster).
  • “A set of connected things or devices that operate together” (Cambridge Dictionary).
  • “A way of working, organizing, or doing something which follows a fixed plan or set of rules” (Collins Dictionary).
  • “A collection of components organized to accomplish a specific function or set of functions” (TOGAF, from ISO/IEC 42010:2007).

While differing in focus, most understandings mention items and rules, purpose, and the ability to interact; none explicitly mention social structures or interactions with humans. That suggests where the line should be drawn between systems and enterprises, and consequently between corresponding architectures.

Architectures & Changes

Enterprises are live social entities made of corporate culture, organization, and supporting systems; their ultimate purpose is to maintain their identity and integrity while interacting with environments. As a corollary, changes cannot be carried out as if architectures were just apparel, but must ensure the continuity and consistency of enterprises’ structures and behaviors.

That cannot be achieved by off-soil schemes made of blueprints and step-by-step processes detached from actual organization, systems, and processes. Instead, enterprise architectures must be grown bottom up from actual legacies whatever their nature: technical, functional, organizational, business, or cultural.

EA’s Legacy

Insofar as enterprise architectures are concerned, legacies are usually taken into account through one of three implicit assumptions:

No-legacy assumptions ignore the issue, as if the case of start-ups could be generalized. These assumptions are logically flawed: enterprises without a legacy are like embryos growing their own inherent architecture, and in that case there would be no need for architects.

En bloc legacy assumptions take for granted that architectures as a whole could be replaced through some Big Bang operation without significant impact on business activities. These assumptions are empirically deceptive because, even when limited to software architectures, Big Bang solutions cannot cope with the functional and generational heterogeneity of software components that characterizes large organizations. Not to mention that enterprise architectures are much more than software and IT.

Piecemeal legacy assumptions can be seen as the default, based on the belief that architectures can be refactored or modernized step by step. While that assumption may be empirically valid, it may also miss the point: assuming that all legacies can be dealt with piecemeal rubs out the distinction drawn above between systems and enterprises.

So the question remains: what is to be changed, and how?

EA as a Work In Progress

As with the leopard’s spots and identity, the first step would be to set apart what is to change (architectures) from what is to carry on (the enterprise).

Maps and territories do provide an overview of spots’ arrangement, but they are static views of architectures, whereas enterprises are dynamic entities that rely on architectures to interact with their environment. So, for maps and territories to serve that purpose they should enable continuous updates and adjustments without impairing enterprises’ awareness and ability to compete.

That shift from system architecture to enterprise behavior implies that:

  • The scope of changes cannot be fully defined up-front, if only because the whole enterprise, including its organization and business model, could possibly be of concern.
  • Fixed schedules are to be avoided, lest each and every unit, business or otherwise, be shackled into a web of hopeless reciprocal commitments.
  • Different stakeholders may come as interested parties, some more equal than others, possibly with overlapped prerogatives.

So, instead of procedural and phased approaches supposed to start from blank pages, EA ventures must be carried out iteratively through the planning, monitoring, assessment, and adjustment of changes across enterprises’ businesses, organizations, and systems. That can be represented as an extension of the OODA (Observation, Orientation, Decision, Action) loop, sketched below:

  • Actual observations from operations (a).
  • Data analysis with regard to architectures as currently documented (b).
  • Changes in business processes (c).
  • Changes in architectures (d).
EA decision-making as an extension of the OODA loop
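To make the loop concrete, here is a minimal Python sketch of such an iterative scheme; all names (ArchitectureMaps, observe_operations, etc.) and figures are hypothetical placeholders, not references to any actual EA tooling.

from dataclasses import dataclass, field

@dataclass
class ArchitectureMaps:
    """Documented views of the enterprise: expected service levels (the maps)
    and pending architecture change requests."""
    expected: dict = field(default_factory=lambda: {"orders_latency_ms": 500})
    change_backlog: list = field(default_factory=list)

def observe_operations():
    """(a) Collect feedback from running operations (stubbed figures)."""
    return {"orders_latency_ms": 850, "failed_payments": 12}

def analyze(observations, maps):
    """(b) Compare observations with architectures as currently documented."""
    return [k for k, v in observations.items()
            if v > maps.expected.get(k, float("inf"))]

def adjust_processes(gaps, observations, maps):
    """(c) Revise business objectives and processes for the flagged items."""
    for item in gaps:
        maps.expected[item] = observations[item]

def adjust_architectures(gaps, maps):
    """(d) Schedule architecture changes supporting the revised processes."""
    maps.change_backlog.extend(f"review capability for {item}" for item in gaps)

def ea_loop(maps, cycles=3):
    """Continuous EA decision-making: no final state, only iterations."""
    for _ in range(cycles):
        observations = observe_operations()          # a
        gaps = analyze(observations, maps)           # b
        adjust_processes(gaps, observations, maps)   # c
        adjust_architectures(gaps, maps)             # d
    return maps.change_backlog

print(ea_loop(ArchitectureMaps()))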

Moreover, due to the generalization of digital flows between enterprises and their environment, decision-making processes that used to be set along separate time-frames (operational, tactical, strategic, …) must now be woven together along a common time-scale encompassing internal (symbolic) as well as external (actual) events.

It ensues that EA processes must not only be continuous but must also deal with latency constraints.

Changes & Latency

Architectures are by nature shared across organizational units (enterprise level) and business processes (system level). As a corollary, architecture changes are bound to introduce mismatches and frictions across business-specific applications. Hence the need to sort out the factors affecting the alignment of maps and territories:

  • Elapsed time between changes in territories and map updates (a>b) depends on data analytics and operational architecture.
  • Elapsed time between changes in maps and revised objectives (b>c) depends on business analysis and organization.
  • Elapsed time between changes in objectives and their implementation (c>d) depends on engineering processes and systems architecture.
  • Elapsed time between changes in systems and changes in territories (d>a) depends on application deployment and technical architectures.

Latency constraints can then be associated with systems engineering tasks and workshops.

EA changes & Latency

On that basis it’s possible to define four critical lags, whose cumulative effect is sketched after the list below:

  • Operational: data analytics can be impeded by delayed, partial, or inaccurate feedback from processes.
  • Mapping: business analysis can be impeded by delays or discrepancies in data analytics.
  • Engineering: development of applications can be impeded by delays or discrepancies in business analysis.
  • Processes: deployment of business processes can be impeded by delays in the delivery of supporting applications.
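As a rough, back-of-the-envelope sketch of how these lags compound, with purely illustrative figures (in days) for the four segments of the loop:

# Hypothetical figures (days) for the four lags; illustrative only.
lags_days = {
    "operational (a>b)": 2,    # feedback from processes to data analytics
    "mapping (b>c)": 10,       # data analytics to business analysis
    "engineering (c>d)": 45,   # business analysis to application delivery
    "processes (d>a)": 7,      # deployment of supporting applications
}

full_cycle = sum(lags_days.values())
bottleneck = max(lags_days, key=lags_days.get)
print(f"maps and territories realign at best every {full_cycle} days,")
print(f"the {bottleneck} lag dominating the cycle")

The alignment of maps and territories can never be better than the full cycle, so improvement efforts are best directed at the dominant lag.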

These lags condition the whole of EA undertakings because legacy structures, mechanisms, and organizations are to be continuously morphed into architectures without introducing misrepresentations that would shackle activities and lead decision-making astray.

EA Latency & Augmented Reality

Insofar as architectural changes are concerned, discrepancies and frictions are rooted in latency, i.e the elapsed time between actual changes in territories and the updating of relevant maps.

As noted above, these lags have to be weighted according to time-frames, from operational days to strategic years, so that the different agents can be presented with the relevant and up-to-date views befitting each context and concern.

EA views must be set according to contexts and concerns, with relevant lags weighted appropriately.

That could be achieved if enterprise architectures were presented through augmented reality technologies.

Compared to virtual reality (VR), which overlooks the whole issue of reality and operates only on similes and avatars, augmented reality (AR) brings together virtual and physical realms, operating on apparatuses that weave actual substrates, observations, and interventions with made-up descriptive, predictive, or prescriptive layers.

On that basis, users would be presented with actual territories (EA legacy) augmented with maps and prospective territories.

Augmented EA: Actual territory (left), Map (center), Prospective territory (right)

Composition and dynamics of maps and territories (actual and prospective) could be set and edited appropriately, subject to latency constraints.

Further Reading

 

Collaborative Systems Engineering: From Models to Ontologies

Given the digitization of enterprise environments, engineering processes have to be entwined with business ones while being kept in sync with enterprise architectures. That calls for new threads of collaboration taking into account the integration of business and engineering processes as well as the extension to business environments.

Collaboration can be personal and direct, or collective and mediated (Wang Qingsong)

Whereas models are meant to support communication, traditional approaches already strain when used beyond software generation, that is, beyond collaboration between humans and CASE tools. Ontologies, which can be seen as a higher form of models, could enable a qualitative leap for collaborative systems engineering at enterprise level.

Systems Engineering: Contexts & Concerns

To begin with contents, collaborations should be defined along three axes:

  1. Requirements: business objectives, enterprise organization, and processes, with regard to systems functionalities.
  2. Feasibility: business requirements with regard to architectures capabilities.
  3. Architectures: supporting functionalities with regard to architecture capabilities.
Engineering Collaborations at Enterprise Level

Since these axes are usually governed by different organizational structures and set along different time-frames, collaborations must be supported by documentation, especially models.

Shared Models

In order to support collaborations across organizational units and time-frames, models have to bring together perspectives which are by nature orthogonal:

  • Contexts, concerns, and languages: business vs engineering.
  • Time-frames and life-cycle: business opportunities vs architecture stability.
Harnessing MBSE to EA

That could be done if engineering models were harnessed to enterprise ones for contexts and concerns, which calls for the integration of processes.

 Processes Integration

As already noted, the integration of business and engineering processes is becoming a key success factor.

Processes integration

For that purpose collaborations would have to take into account the different time-frames governing changes in business processes (driven by business value) and engineering ones (governed by assets life-cycles):

  • Business requirements engineering is synchronic: changes must be kept in line with architectures capabilities (full line).
  • Software engineering is diachronic: developments can be carried out along their own time-frame (dashed line).
Synchronic (full) vs diachronic (dashed) processes.

Application-driven projects usually focus on users’ value and just-in-time delivery; that is best achieved with personal collaboration within teams. Architecture-driven projects usually affect assets and non-functional features, and therefore require collaboration between organizational units.

Collaboration: Direct or Mediated

Collaboration can be achieved directly or through some mediation, the former being a default option for applications, the latter a necessary one for architectures.


Both can be defined according to basic cognitive and organizational mechanisms and supported by a mix of physical and virtual spaces to be dynamically redefined depending on activities, projects, locations, and organization.

Direct collaborations are carried out between individuals with or without documentation:

  • Immediate and personal: direct collaboration between 5 and 15 participants with shared objectives and responsibilities. That would correspond to agile project teams (a).
  • Delayed and personal: direct collaboration across teams with shared knowledge but different objectives and responsibilities. That would tally with social network circles (c).
Collaborations

Mediated collaborations are carried out between organizational units through unspecified individual members, hence the need for documentation, models or otherwise:

  • Code generation from platform- or domain-specific models (b).
  • Model transformation across architecture layers and business domains (d).

Depending on scope and mediation, three basic types of collaboration can be defined for applications, architecture, and business intelligence projects.

Projects & Collaborations

As it happens, collaboration archetypes can be associated with these profiles.

Collaboration Mechanisms

The agile development model (under various guises) is the option of choice whenever shared ownership and continuous delivery are possible. Application projects can thus be carried out autonomously, with collaborations circumscribed to team members and relying on the backlog mechanism.

The OODA (Observation, Orientation, Decision, Action) loop (and its avatars) can epitomize projects combining operations, data analytics, and decision-making.

Collaboration archetypes

Projects set across enterprise architectures cannot be carried out without taking phasing constraints into account. While ill-fated Waterfall methods have demonstrated the pitfalls of procedural solutions, phasing constraints can be dealt with through a roundabout mechanism combining iterative and declarative schemes.

Engineering vs Business Driven Collaborations

With collaborative engineering upgraded to enterprise level, the main challenge is to iron out frictions between application and architecture projects and to ensure the continuity, consistency, and effectiveness of enterprise activities. That can be achieved with roundabouts used as a collaboration mechanism between projects, whatever their nature:

  • Shared models are managed at roundabout level.
  • Phasing dependencies are set in terms of assertions on shared models.
  • Depending on constraints, projects are carried out directly (1,3) or enter roundabouts (2), with exits conditioned by the availability of models (as sketched below).
Engineering driven collaboration: roundabout and backlogs
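A minimal sketch of such a roundabout, assuming nothing more than a registry of shared models; class, project, and model names are illustrative, not references to an actual method or tool.

class Roundabout:
    """Shared models are managed at roundabout level; projects exit only
    once the models they depend on are available."""
    def __init__(self):
        self.shared_models = set()   # models published so far
        self.waiting = []            # (project, missing models) pairs

    def submit(self, project, required):
        """Projects with unsatisfied phasing assertions enter the roundabout
        (2); the others are carried out directly (1, 3)."""
        missing = set(required) - self.shared_models
        if missing:
            self.waiting.append((project, missing))
            return f"{project}: circling, waiting for {sorted(missing)}"
        return f"{project}: carried out directly"

    def publish(self, model):
        """A project delivers a shared model; waiting projects may now exit."""
        self.shared_models.add(model)
        released, still_waiting = [], []
        for project, missing in self.waiting:
            missing = missing - self.shared_models
            (still_waiting if missing else released).append((project, missing))
        self.waiting = still_waiting
        return [project for project, _ in released]

hub = Roundabout()
print(hub.submit("payment app", required=["customer model"]))   # enters (2)
print(hub.submit("reporting app", required=[]))                 # direct (1)
print(hub.publish("customer model"))                            # exits: ['payment app']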

Moreover, with engineering embedded in business processes, collaborations must also bring together operational analytics, decision-making, and business intelligence. Here again, shared models are to play a critical role:

  • Enterprise descriptive and prescriptive models for information maps and objectives.
  • Environment predictive models for data and business understanding.
Business driven collaboration: operations and business intelligence

Whereas both engineering- and business-driven collaborations depend on sharing information and knowledge, the latter have to deal with open and heterogeneous semantics. As a consequence, collaborations must be supported by shared representations and proficient communication languages.

Ontologies & Representations

Ontologies are best understood as models’ backbones, to be fleshed out or detailed according to context and objectives, e.g:

  • Thesaurus, with a focus on terms and documents.
  • Systems modeling, with a focus on integration, e.g Zachman Framework.
  • Classifications, with a focus on range, e.g Dewey Decimal System.
  • Meta-models, with a focus on model based engineering, e.g model transformation.
  • Conceptual models, with a focus on understanding, e.g legislation.
  • Knowledge management, with a focus on reasoning, e.g semantic web.

As such they can provide the pillars supporting the representation of the whole range of enterprise concerns:


Taking a leaf from Zachman’s matrix, ontologies can also be used to differentiate concerns with regard to architecture layers: enterprise, systems, platforms.

Last but not least, ontologies can be profiled with regard to the nature of external contexts, e.g:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to established procedures.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).
Cross profiles: capabilities, enterprise architectures, and contexts.

Ontologies & Communication

If collaborations have to cover engineering as well as business descriptions, communication channels and interfaces will have to combine the homogeneous and well-defined syntax and semantics of the former with the heterogeneous and ambiguous ones of the latter.

With ontologies represented as RDF (Resource Description Framework) graphs, the first step would be to sort out truth-preserving syntax (applied independently of domains) from domain specific semantics.

RDF graphs (top) support formal (bottom left) and domain specific (bottom right) semantics.

On that basis it would be possible to separate representation syntax from contents semantics, and to design communication channels and interfaces accordingly.
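To illustrate that separation, here is a plain-Python stand-in for RDF triples (an actual implementation would rely on an RDF store and SPARQL); the truth-preserving part is a generic closure rule applied whatever the domain, while domain-specific semantics are supplied as a separate vocabulary. All terms are illustrative.

# Plain tuples stand in for RDF triples: (subject, predicate, object).
triples = {
    ("PremiumCustomer", "subClassOf", "Customer"),
    ("Customer", "subClassOf", "Party"),
    ("john", "type", "PremiumCustomer"),
}

def entail(facts):
    """Truth-preserving closure (subclass transitivity, type propagation),
    applied independently of what the terms mean."""
    inferred = set(facts)
    while True:
        new = set()
        for (a, p, b) in inferred:
            for (c, q, d) in inferred:
                if b == c and q == "subClassOf" and p in ("subClassOf", "type"):
                    new.add((a, p, d))
        if new <= inferred:
            return inferred
        inferred |= new

# Domain-specific semantics are kept apart, e.g business definitions per term.
vocabulary = {"PremiumCustomer": "customer with yearly revenue above threshold"}

print(("john", "type", "Party") in entail(triples))   # True, by syntax alone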

That would greatly facilitate collaborations across externally defined ontologies as well as their mapping to enterprise architecture models.

Conclusion

To summarize, the benefits of ontological frames for collaborative engineering can be articulated around four points:

  1. A clear-cut distinction between representation semantics and truth-preserving syntax.
  2. A common functional architecture for all user interfaces, human or otherwise.
  3. Modular functionalities for specific semantics on one hand, generic truth-preserving and cognitive operations on the other hand.
  4. Profiled ontologies according to concerns and contexts.
Clear-cut distinction (1), unified interfaces architecture (2), functional alignment (3), crossed profiles (4).

A critical fifth benefit could be added with regard to business intelligence: combined with deep learning capabilities, ontologies would extend the scope of collaboration to explicit as well as implicit knowledge, the former already framed by languages, the latter still open to interpretation and discovery.

P.S.

Knowledge graphs, which have become a key component of knowledge management, are best understood as a reincarnation of ontologies.

Further Reading

 

Focus: Requirements Reuse

Preamble

Requirements are what feed engineering processes. As such they come in a wide range of forms, and nothing should be assumed upfront about their form or semantics.

 

What is to be reused: Sketches or Models? (John Devlin)

Answering the question of reuse therefore depends on what is to be reused, and for what purpose.

Documentation vs Reuse

Until some analysis can be carried out, requirements are best seen as documents; whether such documents are to be ephemeral or managed would be decided depending on method (agile or phased), contents (business, supporting systems, implementation, or quality of service), or purpose (e.g governance, regulations, etc.).

What is to be reused.

Setting apart external conditions, requirements documentation could be justified by:

  • Traceability of decision-making linking initial requests with actual implementation.
  • Acceptance.
  • Maintenance of deliverables during their life-cycle.

Depending on development approaches, documentation could be limited to archives (agile development models) or managed as intermediate products (phased development models). In the latter case reuse would entail some formatting of requirements.

The Cases for Requirements Reuse

Assuming that requirements have been properly formatted, e.g as analysis models (with technical ones managed internally at system level), reuse could be justified by changes in business, functional, or quality of service requirements:

  • Business processes are meant to change with opportunities. With requirements available as analysis models, changes would be more easily managed (a) if they could be fine-grained. Business rules are a clear example, but that could also be the case for new features added to business objects.
  • Functional requirements may change even without change of business ones, e.g if new channels and users are introduced addressing existing business functions. In that case reusable business requirements (b) would dispense with a repeat of business analysis.
  • Finally, quality of service could be affected by operational changes like localization, number of users, volumes, or frequency. Adjusting architecture capabilities would be much easier with functional (c) and business (d) requirements properly documented as analysis models.
Cases for Reuse

Along that perspective, requirements reuse appears to revolve around two pivots, documents and analysis models. Ontologies could be used to bind them.

Requirements & Ontologies

Reusing artifacts means using them in contexts or for purposes different from their native ones. That may come by design, when specifications can anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on. In any case, reuse policies have to overcome a twofold difficulty:

  • Visibility: business and functional analysts must be made aware of potential reuse without having to spend too much time on research.
  • Overheads: ensuring transparency, traceability, and consistency checks on requirements (documents or analysis models) cannot be achieved without costs.

Ontologies could help to achieve greater visibility with acceptable overheads by framing requirements with regard to nature (documents or models) and context:

With regard to nature, the critical distinction is between document management and model based engineering systems. When framed as ontologies, the former are to be implemented as thesauruses targeting terms and documents, the latter as ontologies targeting categories specific to organizations and business domains.

Documents, models, and capabilities should be managed separately

With regard to context, the objective should be to manage reusable requirements depending on the kind of jurisdiction and stability of categories (a small indexing sketch follows the list), e.g:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accord.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).
Combining contexts of reuse with architecture layers (enterprise, systems, platforms) and capabilities (Who, What, How, Where, When).
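As a small sketch of what such framing could look like, assuming nothing more than an index keyed by nature and context (the requirement entries are illustrative):

from collections import defaultdict

index = defaultdict(list)   # (nature, context) -> reusable requirements

def register(requirement, nature, context):
    """nature: 'document' or 'model'; context: 'institutional', 'professional',
    'corporate', 'social', or 'personal'."""
    index[(nature, context)].append(requirement)

def candidates(nature, context):
    """What business or functional analysts would query before starting anew."""
    return index[(nature, context)]

register("anti-money-laundering checks", "model", "institutional")
register("loyalty discount rules", "model", "corporate")
print(candidates("model", "institutional"))   # ['anti-money-laundering checks']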

Combined with artificial intelligence, ontology archetypes could crucially extend the benefits of requirements reuse, notably through the impact of deep learning on visibility.

On a broader perspective requirements should be seen as a source of knowledge, and their reuse managed accordingly.

Further Reading

Focus: Business Processes & Abstraction

Preamble

Abstractions, and their corollary inheritance, are primarily understood in terms of objects. Yet, since business processes are meant to focus on activities, semantics may have to be refined when abstraction and inheritance are directly applied to behaviors.

How to apply abstraction to processes? (E. Gimenez Velilla)

Considering that the primary purpose of abstractions is to tackle business variants with regard to supporting systems, their representation with use cases provides a good starting point.

Business Variants: Use case’s <extend> & <include>

Taking use cases as a modeling nexus between business and systems realms, <extend> and <include> appear as the default candidates for the initial description of behaviors’ specialization and generalization.

  • <include>: to be compared to composition semantics, with the included behaviors performed by instances identified (#) by the owner UC (a).
  • <extend>: to be compared to aggregation semantics, with the extending behaviors performed by separate instances with reference to the owner ones (b).
Included UCs are meant to be triggered by owners (a); that cannot be clearly established for abstract use cases and generalization (c).

Abstract use cases and generalization have also been mentioned in UML before being curiously overlooked in subsequent versions. Since neither has been explicitly discarded, some confusion remains about their hypothetical semantics. Notionally, abstract UCs would represent behaviors never to be performed on their own (c). Compared to inclusion, used for variants of operations along execution paths, abstract use cases would describe the generic mechanisms to be applied to triggering events at UC inception, independently of the actual business operations carried out along execution paths.

Nonetheless, and more importantly, the mix-up surrounding the generalization of use cases points to a critical fault-line running under UML concepts: since both use cases and classes are defined as classifiers, they are supposed to be similarly subject to generalization and specialization. That is misguided because use cases describe the business behaviors to be supported by systems, not to be confused with the software components that will do the job. The mapping between the former and the latter is to be set by design, and there is no reason to assume a full and direct correspondence between functional requirements and functional architecture.

Use Cases Distilled

As far as use cases are concerned, mapping business behaviors to supporting systems functionalities can be carried out at two levels:

  • Objects: UCs being identified by triggering agents, events, and goals, they are to be matched with corresponding user interfaces and controllers, the former for the description of I/O flows, the latter for the continuity and integrity of interactions.
  • Methods: as it’s safe to assume that use cases are underpinned by shared business functions and system features, a significant part of their operations is to be realized by methods of shared business entities or services.
Setting apart UIs and controllers, no direct mapping should be assumed between use cases and functional classifiers.

The business variants distilled into objects’ or services’ methods can be generalized and specialized according to OOD principles, and the same principles can be applied to specific user interfaces. But since the purely behavioral aspects of UCs can neither be distilled into objects’ methods nor directly translated into controller objects, their abstraction semantics have to be reconsidered.

Inheritance Semantics: Structural vs Functional

As far as software artifacts are concerned, abstraction semantics are set by programming languages, and while they may differ, the object-oriented (OO) paradigm provides some good enough consolidation. Along that perspective, inheritance emerges as a critical issue due to its direct impact on the validity of programs.

Generally speaking, inheritance describes how structural or behavioral traits are passed from ancestors to descendants, either at individual or type level. OO design is more specific and puts the focus on the intrinsic features (attributes and operations) supported by types or classes, from which it ensues that behaviors are not considered as such but through the objects’ methods that realize them:

  • Structural inheritance deals with attributes and operations set for the whole life-cycle of instances. As a consequence, the corresponding inheritance is bound to identities (#) and multiple ascendants (i.e identities) are ruled out.
  • Functional inheritance deals with objects’ behaviors, which may or may not be frozen for whole life-cycles. Features can therefore be inherited from multiple ascendants.

That structural vs functional distinction matches the one between composition and aggregation used to characterize the links between objects and parts which, as noted above, can also be applied to use cases.
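Assuming Python classes as rough stand-ins for OOD types, the distinction can be sketched as follows: structural traits come from a single ascendant carrying identity, while functional traits can be picked up from several ascendants (all names are illustrative).

class Account:                        # structural: attributes set for the whole
    def __init__(self, owner_id):     # life-cycle, bound to identity (#)
        self.owner_id = owner_id
        self.balance = 0

class TextNotification:               # functional: a behavior that may be
    def notify(self, message):        # inherited from multiple ascendants
        print(f"[sms to {self.owner_id}] {message}")

class AuditTrail:                      # another functional trait
    def log(self, event):
        print(f"[audit {self.owner_id}] {event}")

class PremiumAccount(Account, TextNotification, AuditTrail):
    """Single structural ascendant (Account), several functional ones."""
    def credit(self, amount):
        self.balance += amount
        self.log(f"credit {amount}")
        self.notify(f"new balance: {self.balance}")

PremiumAccount("C42").credit(100)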

Use Cases & Abstraction

Assuming that the structural/functional distinction defined for objects can also be applied to behaviors, use cases provide a modeling path from variants in business processes to OOD of controllers:

  • Behaviors included by UCs (a) are to be set along the execution paths triggered by UC primary events (#). Inheritance is structural, from UCs base controllers to corresponding (local) ones, and covers features (e.g views on business objects) and associated states (e.g authorizations) defined by use case triggering circumstances.
  • Behaviors extending UCs (b) are triggered by secondary events generated along execution paths. Inheritance is functional, from extending UCs (e.g text messaging) to UCs primary controllers.

Yet this dual scheme may not be fully satisfactory as it suffers from two limitations:

  • It only considers the relationships between UCs, not the characteristics of the use cases themselves.
  • It ignores the critical difference between the variants of business logic and the variants of triggering conditions.

Both flaws can be patched up if abstract use cases are specifically introduced to factor out triggering circumstances (c):

Use cases provide a principled modeling path from variants in business processes to the OOD of corresponding controllers.
  • Leaving triggering circumstances undefined is the only way to characterize abstraction independently of what happens along execution paths.
  • Abstract use cases can then be used to specify inception mechanisms to be inherited by concrete use cases.

That understanding of abstract use cases comes with clear benefits with regard to security and confidentiality.

What is at Stake

Abstraction can significantly reinforce the bridging role of use cases between business and UML models.

On one side, specialized use cases can be associated with operations and functions directly implemented, e.g by factoring out authentication and authorization:

One standard solution is to define a common use case controlling access for all users, provided they can be identified before being subsequently (i.e during UC execution) qualified and authorized. Apparently, that could be done with <<include>> (a) or <<extend>> (b) connectors.


But the second option would not be possible with the semantic distinction suggested above for UC patterns, which specifies that use cases can only be extended from existing sessions.

A more generic approach (possibly with patterns) could try to “abstract” the Open Session UC, e.g to cover a broader range of actors and identification mechanisms.

Understanding UC abstraction in terms of a partial specification to be <<included>> and run by the current thread would be inconsistent because there would be no concrete actor for the identification mechanisms (c).

By contrast, since inheritance connectors apply to types and not to instances (i.e execution threads), abstracted identification mechanisms are meant to be part of Manage Session and can be applied to triggering actors (d).

Such a clear distinction between the specification of threads (using connectors) and activities (using inheritance) should provide the basis of architecture-based UC patterns.
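A sketch of that distinction under the above reading, with Python classes standing in for UC controllers; the class and method names build on the Open Session / Manage Session example and are otherwise illustrative.

from abc import ABC, abstractmethod

class ManageSession(ABC):
    """Abstract UC: never triggered on its own, only inherited."""
    def open_session(self, credentials):
        user = self.identify(credentials)        # inception mechanism
        self.session = {"user": user, "authorizations": set()}
        return self.session

    @abstractmethod
    def identify(self, credentials):
        """Identification mechanism, specialized by concrete use cases."""

class UpdateCustomerAccount(ManageSession):
    """Concrete UC controller: inherits inception, adds business behavior."""
    def identify(self, credentials):
        return f"agent:{credentials}"

    def execute(self, credentials, changes):
        self.open_session(credentials)            # inherited, not <<included>>
        return f"{self.session['user']} applied {changes}"

print(UpdateCustomerAccount().execute("badge-007", {"address": "new"}))
# ManageSession() would fail: abstract UCs have no concrete triggering actor.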

All in all, that will greatly help to align business cases, business opportunities, and functional architectures.

Further Reading

 

Models Transformation & Agile Development

Models transformation is generally recognized as the basic mechanism of model based systems engineering (MBSE). Yet, the actual scope of transformations is somewhat limited to design-to-code, and their sequential bias puts MBSE at odds with agile development approaches. Could a revisited understanding help to resolve this apparent discrepancy?

Iterative Transformations (Andrea Burger)

Transformation Issues

The traditional transformation paradigm involves ordered sequences of models obtained by applying rules to their immediate predecessor(s). That organizational scheme has three critical consequences, for applicability, economics of reuse, and development processes.

  • Applicability: the effectiveness of transformations is conditioned by (a) an executable language for the description of targets, and (b) a closed and compact set of unambiguous patterns. Those conditions can only be satisfied for the downstream part of the development process.
  • Reuse: given the sequencing constraints, models are to be managed and reused along tree-like structures with duplicates introduced at branching points.
  • Development processes: sequenced models bring forth phased options and leave out agile solutions.

Assuming those issues are not conclusive, they may be overcome by revisiting the nature of transformations.

Transformation vs Inheritance & Composition

Most of the proposed taxonomies (see references below) put the focus on the languages and mechanisms (e.g rules) of sequential transformation without paying enough attention to the nature and semantics of models’ contents. Even when abstraction levels are taken into account, the respective contents of each level remain undefined. As it happens, that issue may be the key to a better understanding of models transformation.

To begin with, rule-based transformation has to be compared to inheritance and composition:

  • Structural inheritance can be used to refine models so as to take into account business scenarios previously ignored; e.g special conditions for good customers.
  • Functional inheritance can be used to introduce new capabilities; e.g new authentication procedures.
  • Functional composition can be used to apply capabilities across different scenarios; e.g customized authentication procedures.
  • Rules-based solutions can be used for any kind of transformation.
A broader understanding of models transformation should include inheritance and composition

That taxonomy implies a clear distinction between operations executed within the same level of abstraction and those targeting artifacts defined at different levels: contrary to rules-based transformations, inheritance and composition can only be applied to artifacts sharing common semantics.
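That distinction can be sketched with a toy representation of artifacts tagged by abstraction level: rules-based transformation may rewrite traits across levels, whereas inheritance and composition are only allowed between artifacts of the same level (names, levels, and rules are illustrative).

from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    level: str            # 'enterprise', 'systems', or 'platforms'
    traits: frozenset

def transform(artifact, rules, target_level):
    """Rules-based: rewrite traits toward a different abstraction level."""
    return Artifact(artifact.name, target_level,
                    frozenset(rules.get(t, t) for t in artifact.traits))

def inherit(child_name, *parents):
    """Inheritance/composition: only within a shared abstraction level."""
    levels = {p.level for p in parents}
    if len(levels) != 1:
        raise ValueError("cannot combine artifacts from different levels")
    merged = frozenset().union(*(p.traits for p in parents))
    return Artifact(child_name, levels.pop(), merged)

customer = Artifact("Customer", "systems", frozenset({"identify", "notify"}))
discounts = Artifact("Discounts", "systems", frozenset({"rebate"}))
premium = inherit("PremiumCustomer", customer, discounts)              # same level
design = transform(premium, {"identify": "login_form"}, "platforms")   # cross-level
print(premium.level, sorted(design.traits))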

Heterogeneous Models

While that would clearly prevent their use across models organized along abstraction levels, semantic pitfalls could be mastered for models built from artifacts belonging to different abstraction levels.

Releasing models from (still to be defined) abstraction levels would bring two critical benefits:

  • Whatever the terminology (abstract, conceptual, functional, etc.), abstraction semantics are much easier to define for artifacts than for models.
  • That would remove a chunk of restrictions on the design of transformation processes.
Heterogeneous models are not bound to abstraction layers.

In that case transformation rules could be turned into combination ones and sequential transformation turned into cross-breeding.

Mendel, Models, Mongrels

Taking a cue from Gregor Mendel’s use of cross-fertilization, the aim of a revisited transformation paradigm would be three-fold:

  1. To refine the granularity of reuse, from models to artifacts.
  2. To substitute combination for sequential transformation whenever possible.
  3. To substitute graphs for trees, with models organized along two basic layers, final (aka mongrels) or reusable (aka blueprints).
Models combination (top) replaces transformation phases (bottom) by a distinction between blueprints (full line) and mongrels (dashed line).

As far as MBSE is concerned, the genetics metaphor helps to clarify the nature of abstraction. Conceptually, it introduces a distinction between artifacts and models:

  • With regard to artifacts, abstraction layers are defined by scope: enterprise, systems, platforms.
  • With regard to models, abstraction layers are defined by capabilities: reusable (stable traits), or final (recessive traits).

That taxonomy is corroborated by its functional counterpart: artifacts transformation is carried out with inheritance and composition, while models transformation relies on combination.

More importantly, that understanding goes a long way toward solving the issues regarding scope, reuse, and development processes.

Scope: Weaving Analysis & Design Traits

Definitions and taxonomies should always be assessed with regard to their applicability. On that account there isn’t much to say for abstraction layers applied to models: they don’t fit because too many traits can be defined across different layers, e.g business rules, authentication, encryption, etc.

That difficulty can be neatly and consistently removed by models built from artifacts defined at different levels.

Models Reuse: Blueprints vs Mongrels

Reuse is all too often seen as a contentious objective with inconclusive ROI. On one hand it requires significant overheads to manage the resources, on the other hand the outcomes can introduce regressive traits. The distinction between sound reusable models and final ones significantly reduces both the costs of the former and the risks of the latter.

Processes Organization: MBSE & Agile

Model based systems engineering and the agile development model are arguably two of the most conclusive approaches to software engineering. Unfortunately they are often seen as difficult bedfellows, principally (but not uniquely) because the former insists on the importance of models with some bias toward phased processes, while the latter is all for iterative processes with models mentioned as an afterthought, if at all. Yet, both approaches could be made complementary on condition that models could be processed iteratively. And that could be achieved if sequenced transformations of homogeneous models were replaced by the combination of heterogeneous ones.

Iterative development mixing new business requirements with existing functionalities (a) and business rules (b).

Within such a framework an agile team could, e.g, iteratively develop new business requirements, taking into account existing functionalities (a) and business rules (b), and generate code (c).

Further readings

External Links

 

2015: A Scraps-Free New Year

As chances are that new years come with the legacies of past ones, this would be a good time to scrub the scraps.

How to scrub last year’s scraps (Subhod Gupta)

Legacies as Forced Reuses

As far as enterprise architectures are concerned, sanguine resolutions or fanciful wishes notwithstanding, new years seldom open the door to brand new perspectives. More often than not, they bring new constraints and further curb the possible courses of action by forcing the reuse of existing assets and past solutions. That will call for a review and assessment of the irrelevancies or redundancies in processes and data structures lest they clog the organization and systems, raise entropy, and degrade governance capability.

Architectures as Chosen Reuses

Broadly defined, architectures combine assets (physical or otherwise) and mechanisms (including organizational ones) supporting activities which, by nature, must be adaptable to changing contexts and objectives. As such, the primary purpose of architectures is to provide some continuity to business processes, across locations and between business units on one hand, along time and business cycles on the other hand. And that is mainly to be achieved through reuse of assets and mechanisms.

Balancing Changes & Reuse

It may be argued that the main challenge of enterprise architects is to maintain the right balance between continuity and agility, the former to maintain corporate identity and operational effectiveness, the latter to exploit opportunities and keep an edge ahead of competitors. That may turn out to be an oxymoron if architects continuously try to discard or change what was designed to be kept and reused. Yet that pitfall can be avoided if planting architectures and pruning offshoots are carried out independently.

Seasons to Plant & to Prune

Enterprise life can be set along three basic time-scales:

  • Operational: for immediate assessment and decision-making based on full information.
  • Tactical: for periodic assessment and decision-making based on partially reliable information. Periodicity is to be governed by production cycles.
  • Strategic: for planned assessment and decision-making based on unknown or unavailable information. Time-frames are to be governed by business models and risks assessment.

Whereas architecture life-cycles are by nature strategic and meant to span an indefinite, but significant, number of production cycles, trimming redundancies can be carried out periodically provided it doesn’t impact process execution. So why not do the house cleaning at the beginning of new years?

Further Reading

Alignment for Dummies

Summary

The emergence of Enterprise Architecture as a discipline of its own has put the spotlight on the necessary distinction between actual (aka business) and software (aka system) realms. Yet, despite a profusion of definitions for layers, tiers, levels, views, and other modeling perspectives, what should be a constitutive premise of systems engineering remains largely ignored, namely: business and systems concerns are worlds apart, and bridging the gap is the main challenge of architects and analysts, whatever their preserve.

Alignment with Dummies (J. Baldessari)

The consequences of that neglect appear clearly when enterprise architects consider the alignment of systems architectures and capabilities on one hand with enterprise organization and business processes on the other hand. Looking into the grey zone in between, some approaches will line up models according to their structure, assuming the same semantics on both sides of the divide; others will climb up the abstraction ladder until everything looks alike. Not surprisingly, with the core interrogation (i.e “what is to be aligned?”) removed from the equation, models will be turned into dummies enabling alignment to be carried out by simple pattern matching.

Models & Views

The abundance of definitions for layers, tiers or levels often masks two different understandings of models:

  • When models are understood as symbolic descriptions of sets of instances, each layer targets a different context with a different concern. That’s the basis of the Model Driven Architecture (MDA) and its distinction between Computation Independent Models (CIMs), Platform Independent Models (PIMs), and Platform Specific Models (PSMs).
  • When models are understood as symbolic descriptions built from different perspectives, all layers target the same context, each with a different concern. Along that understanding each view is associated with a specific aspect or level of abstraction: processes view, functional view, conceptual view, technical view, etc.

As it happens, many alignment schemes use, implicitly or explicitly, the second understanding without clarifying the underlying assumptions regarding the backbone of artifacts. That neglect is unfortunate because, to be of any significance, views will have to be aligned with regard to those artifacts.

What is to be aligned

From a general perspective, and beyond lexical controversies, alignment has to be managed with regard to two basic scales:

  • Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
  • Models: conceptual (business context and organization), analysis (symbolic representations), design (physical implementation).

From a practical point of view, alignment is meant to deal with two main problems: how business processes are supported by systems functionalities, and how those functionalities are to be implemented. Given that the latter can be fully dealt with at system level, the focus can be put on the alignment of business processes and functional architectures.

A naive solution could be to assume services on both the processes and systems sides. Yet, the apparent symmetry covers a tautology: while aiming for service-oriented architectures on the systems side would be legitimate, if not necessarily realistic, taking for granted that business processes also tally with services would presume some prior alignment, in other words that the problem has already been solved.

The pragmatic and logically correct approach is therefore to map business processes to system functionalities using whatever option is available, models (CIMs vs PIMs) or views (processes vs functions). And that is where the distinction between business and software semantics is critical: assuming the divide can be overlooked, some “shallow” alignment could be carried out directly provided the models can be translated into some generic language; but if the divide is acknowledged, a “deep” alignment will have to be supported by a semantics bridge built across.

Shallow Alignment

Just like models are meant to describe sets of instances, meta-models are meant to describe instances of models independently of their respective semantics. Assuming a semantic continuity between business and systems models, meta-models like OMG’s KDM (Knowledge Discovery Meta-model) appear to provide a very practical solution to the alignment problem.

From a practical point of view, one may assume that no model of functional architecture is available because otherwise it would be aligned “by design” and there would be no problem. So something has to be “extracted” from existing software components:

  1. Software (aka design) models are translated into functional architectures.
  2. Models of business processes are made compatible with the generic language used for system models.
  3. Associations are made based on patterns identified on each side.

While the contents of the first and third steps are well defined and understood, that’s not the case for the second step, which takes for granted the availability of some agreed-upon modeling semantics to be applied to both functional architecture and business processes. Unfortunately that assumption is both factually and logically inconsistent:

  • Factually inconsistent: it is denied by the plethora of candidates vying for the role, often with partial, overlapping, ambiguous, or conflicting semantics.
  • Logically inconsistent: it simply dodges the question (what’s the meaning of alignment between business processes and supporting systems) either by lumping together the semantics of the respective contexts and concerns, or by climbing up the ladder of abstraction until all semantic discrepancies are smoothed out.

Alignments built on that basis are necessarily shallow as they deal with artifacts regardless of their contents, like dummies in test plans. As a matter of fact the outcome will add nothing to traceability, which may be enough for trivial or standalone processes and applications, but is meaningless when applied at architecture level.

Deep Alignment

Compared to the shallow one, deep alignment, instead of assuming a wide but shallow commonwealth, tries to identify the minimal set of architectural concepts needed to describe alignment’s stake. Moreover, and contrary to the meta-modelling approach, the objective is not to find some higher level of abstraction encompassing the whole of models, but more reasonably to isolate the core of architecture concepts and constructs with shared and unambiguous meanings to be used by both business and system analysts.

That approach can be directly set along the MDA framework:

Deep alignment distinguishes what is at stake at architecture level (blue) from the specifics of processes or domains (green) and design (brown).
  • Context descriptions (UML, DSL, BPM, etc.) are not meant to distinguish between architectural constructs and specific ones.
  • Computation independent models (CIMs) describe business objects and processes, combining core architectural constructs (using a generic language like UML) with specific business ones. The former can be mapped to the functional architecture, the latter (e.g rules) directly transformed into design artifacts.
  • Platform independent models (PIMs) describe functional architectures using core constructs and framework stereotypes, possibly enriched with specific artifacts managed separately.
  • Platform specific models (PSMs) can be obtained through transformation from PIMs, generated using specific languages, or refactored from legacy code.

Alignment can thus focus on enterprise and systems architectural stakes, leaving the specific concerns to be dealt with separately and making the best of existing languages.
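A minimal sketch of that principle, assuming toy CIM and PIM descriptions: only a small core of shared architectural constructs is used for the mapping, while process- or platform-specific traits are left to be handled separately (all contents are illustrative).

CORE = {"actor", "event", "business_object", "activity"}   # shared constructs

cim = {   # business process described with core constructs plus specific ones
    "order_to_cash": {"actor": "customer", "event": "order placed",
                      "business_object": "order", "rule": "discount policy"},
}
pim = {   # functional units described with core constructs plus stereotypes
    "order_service": {"event": "order placed", "business_object": "order",
                      "stereotype": "entity service"},
}

def align(cim, pim):
    """Match processes to functionalities on shared core constructs only."""
    links = []
    for process, p_desc in cim.items():
        p_core = {k: v for k, v in p_desc.items() if k in CORE}
        for unit, u_desc in pim.items():
            shared = sorted(k for k in p_core if u_desc.get(k) == p_core[k])
            if shared:
                links.append((process, unit, shared))
    return links

print(align(cim, pim))
# [('order_to_cash', 'order_service', ['business_object', 'event'])]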

Alignment & Traceability

As mentioned above, comparing alignment with traceability may help to better understand its meaning and purpose.

  • Traceability is meant to deal with links between development artifacts from requirements to software components. Its main objective is to manage changes in software architecture and support decision-making with regard to maintenance and evolution.
  • Alignment is meant to deal with enterprise objectives and systems capabilities. Its main objective is to manage changes in enterprise architecture and support decision-making with regard to organization and systems architecture.


As a concluding remark, reducing alignment to traceability may counteract its very purpose and make it pointless as a tool for enterprise governance.

Further readings