Event Oriented Analysis & Object Oriented Design

As it’s safe to assume that a primary objective of process analysis is to align business concerns (by nature specific and changing) with enterprise architectures (meant to be shared and stable), events could provide a good starting point.

Event, concerns, processing (F. Handoko)

Business Analysis & Application Design

Given the convincing track record of object oriented approaches for systems architectures and software design, the same principles have been applied to business requirements analysis. While that approach can be credited with significant achievements, success usually depends on some prior alignment of business domains with their system counterparts, in particular on the possibility to uniformly and consistently identify and define business entities as objects, independently of operating processes.

Alternatively, when business entities cannot be readily identified upfront as system objects, analysis may start with organization, entitled agents, and activities, and be carried on with the definition of business flows and associated entities.

So, and whatever the approach, the question is how to ensure that the applications under consideration are designed in accordance with architecture capabilities.

System Architecture & Software Design

Words are worth the difference they make: as long as systems were not much more than an assortment of software modules, architecture and design could be understood as one and the same. But nowadays a distinction may be overdue between, on one hand, the design of software components run within a single system’s address space and time-frame, and, on the other hand, architectures of systems set across different spaces and time-frames.

Architecture vs Design: words are worth the difference they make

Object oriented solutions (e.g Domain Driven Design) are arguably the option of choice for the former, but services oriented approaches may be a better fit for the latter. Not by chance, events provide a sound conceptual hinge between the two approaches.

Event Oriented Analysis vs Object Oriented Design

Object oriented principles can be streamlined around three core topics: (a) information hiding and coupling between structures and methods; (b) inheritance between types; (c) communication through interfaces and polymorphism.

OO principles can be streamlined around three topics: encapsulation (a), inheritance (b), and communication through interfaces (c).

Of these, encapsulation and inheritance are specific to software design, but communication mechanisms are also at the core of services oriented architectures. Considering messages as the logical system counterparts of business events, event-oriented analysis should help to align business processes with systems capabilities.

From a business processes perspective, events signal changes in the states of activities, objects, or expectations. Given that supporting systems are meant to deal with those changes, the analysis of business requirements could proceed from the corresponding events:

  • Business events are defined with regard to time-frames (a) and sources to be authenticated and authorized (b).
  • Triggering changes must be described by messages with regard to their functional (c) and operational (d) scope.
  • Business logic (e) and entities (f) are often shared across applications and therefore better defined independently.
  • Internal changes (same space and time-frame) are hidden.
  • Triggered (external) changes are defined with regard to time-frames (h), processes (d), and devices (g).
A simplified blueprint of Event Oriented Process Analysis

As it happens, those facets can be aligned with OO design ones, with (c) and (d) for communication, and (e) and (f) for encapsulation. From a broader perspective they also fit with the growing focus on event-driven applications and service oriented architectures.
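
To make that alignment more concrete, here is a minimal sketch (Python, with hypothetical names such as BusinessEvent and OrderIntake, none of which are prescribed by the text) of how the facets could be carried by a message and dispatched through an interface, keeping business logic and entities encapsulated:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BusinessEvent:
    """Message standing for a business event (facets a-d above)."""
    occurred_at: datetime    # (a) time-frame of the triggering change
    source: str              # (b) origin to be authenticated and authorized
    functional_scope: str    # (c) functional scope of the change
    operational_scope: str   # (d) operational scope (where/when it must be handled)

class EventHandler(ABC):
    """Interface: communication is specified independently of business logic."""
    @abstractmethod
    def handles(self, event: BusinessEvent) -> bool: ...
    @abstractmethod
    def process(self, event: BusinessEvent) -> None: ...

class OrderIntake(EventHandler):
    """Business logic (e) and entities (f) stay encapsulated behind the interface."""
    def handles(self, event: BusinessEvent) -> bool:
        return event.functional_scope == "order"
    def process(self, event: BusinessEvent) -> None:
        print(f"registering order change notified by {event.source}")

def dispatch(event: BusinessEvent, handlers: list[EventHandler]) -> None:
    """Polymorphic dispatch: the caller knows nothing of the business logic."""
    for handler in handlers:
        if handler.handles(event):
            handler.process(event)

dispatch(BusinessEvent(datetime.now(), "web-portal", "order", "online"),
         [OrderIntake()])
```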

From Event Oriented Process Analysis to Service Oriented Architectures

By moving business logic to the background, event-driven analysis fosters polymorphism at enterprise level with corresponding benefits:

  • With regard to business processes, events come with functional and operational requirements set independently of the business logic that will be carried out: trigger (what has changed), role (who is requesting), and message communication semantics (when the system is supposed to deal with the event).
  • With regard to system capabilities, messages can be used to align business (aka external) events with system (aka internal) ones independently of the business entities and logic (what is to be done and how).
  • With regard to architecture and design, that approach is meant to uphold OO principles by dealing separately with polymorphic requests (interfaces) and business logic (methods).

Those benefits appear clearly when capabilities are realized by services defined with regard to business processes (customers), business objects (messages), business logic (contract), and business operations (policy).
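
As an illustration only (the names below are assumptions, not part of the text), such a service could be characterized along those four facets as follows:

```python
from dataclasses import dataclass

@dataclass
class ServiceDescription:
    """A service characterized by the four facets listed above."""
    customers: list[str]   # business processes entitled to use the service
    messages: list[str]    # business objects exchanged with those processes
    contract: str          # business logic: what the service guarantees
    policy: str            # business operations: how the service is carried out

billing = ServiceDescription(
    customers=["order-to-cash"],
    messages=["Invoice", "Payment"],
    contract="one invoice issued per delivered order",
    policy="asynchronous, consolidated nightly",
)
```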

Environment (bold) vs Services (italic)

It must be remembered that services are part of functional architectures and, as a consequence, cannot be directly addressed by users or devices.

Events & Action Semantics

With events set as modeling anchors, use cases may provide the modeling glue between processes and functional capabilities:

  • Triggering events (a) map changes in business environments (aka external events) to changes in systems objects (aka internal events).
  • Actors (b) map roles in organization to system users.
  • Messages (c) map the semantics of business processes to the semantics of applications (e) and domains (f).
Use cases (orange) provide a comprehensive and consistent mapping from processes (green) to services (blue).

On that basis, the main objective of event-oriented analysis would be to distinguish between communication and business semantics, the former dealing with interactions, the latter with business logic.
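
That modeling glue could be sketched as follows (the names are illustrative assumptions, not prescribed by the text):

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Modeling glue between a business process and functional capabilities."""
    triggering_event: str   # (a) external event mapped to an internal one
    actor: str              # (b) organizational role mapped to a system user
    messages: list[str]     # (c) process semantics mapped to application and domain ones

accept_order = UseCase(
    triggering_event="order received",
    actor="sales clerk",
    messages=["OrderRequest", "OrderConfirmation"],
)
```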

Conceptual Models & Abstraction Scales

Following the recent publication of a new standard for the conceptual modeling of automation systems (Object-Process Methodology, ISO/PAS 19450:2015), it may be interesting to explore how it relates to abstraction and meta-models.

Meta-models are drawn along lean abstraction scales (Oskar Schlemmer)

Models & Meta Models

Just like models are meant to describe sets of actual instances, meta-models are meant to do the same for sets of modeling artifacts independently of their targets. Along that reasoning, conceptual modeling of automation systems could be achieved either with a single language covering all aspects, or with a meta-language dealing with different sets of models, e.g MDA’s computation independent, platform independent, and platform specific models.

Two alternative options for the modeling of automation systems: unified language, or a meta language covering technical (e.g PSMs), functional (e.g PIMs), and business (e.g CIMs) scopes.

Given a model based engineering framework (e.g MDA), meta-models are generally used to support downstream model transformations targeting designs and code. But when upstream conceptual models are concerned, the challenge is to tackle the knowledge-to-systems transition. For that purpose some shared modeling roof is required for the definition of the symbolic footprint of the targeted business in the automation system under consideration.
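
For the downstream case, the role of transformations across MDA layers could be sketched as follows (a bare illustration with made-up names, not an actual MDA tool chain):

```python
from dataclasses import dataclass

# Placeholder model types standing for MDA's abstraction layers.
@dataclass
class CIM:  # computation independent: business objects and processes
    business_entities: list[str]

@dataclass
class PIM:  # platform independent: functional architecture
    services: list[str]

@dataclass
class PSM:  # platform specific: design artifacts
    components: list[str]

def cim_to_pim(cim: CIM) -> PIM:
    # Transformation rules would be driven by the meta-model, not hard-coded.
    return PIM(services=[f"{entity}Service" for entity in cim.business_entities])

def pim_to_psm(pim: PIM) -> PSM:
    return PSM(components=[f"{service}Impl" for service in pim.services])

print(pim_to_psm(cim_to_pim(CIM(business_entities=["Customer", "Order"]))))
```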

Symbolic Footprint

Given that automation systems are meant to manage symbolic objects (aka surrogates), one should expect the distinction between actual instances and their symbolic representations to be the cornerstone of corresponding modeling languages. Along that reasoning, modeling of automation systems should start with the symbolic representation of actual business footprints, namely: the sets of objects, events, and processes, the roles played by agents (aka active objects), and the description of the associated states and rules. Containers would be added for the management of collections.

Automation systems modeling begins with the symbolic representation by systems of actual instances of business related objects and phenomena.
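
A bare-bones sketch of such a symbolic footprint (assumed names; processes and rules would be added along the same lines):

```python
from dataclasses import dataclass, field

@dataclass
class SymbolicObject:
    """Surrogate managed by the system for an actual business object."""
    identity: str            # uniform and consistent identification
    state: str = "initial"   # associated state, subject to rules

@dataclass
class Agent(SymbolicObject):
    """Active object: an agent playing roles in business processes."""
    roles: list[str] = field(default_factory=list)

@dataclass
class SymbolicEvent:
    """Surrogate for an actual change affecting a business object."""
    subject: SymbolicObject
    change: str

@dataclass
class Container:
    """Managed collection of surrogates."""
    members: list[SymbolicObject] = field(default_factory=list)
```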

Next, as illustrated by the Object/Agent hierarchy, business worlds are not flat but built from sundry structures and facets to be represented by multiple levels of descriptions. That’s where abstractions are to be introduced.

Abstraction & Variants

The purpose of abstractions is to manage variants, and as such they can be used in two ways:

  • For partial descriptions of actual instances depending on targeted features. That can be achieved using composition (for structural variants) and partitions (for functional ones).
  • As hierarchies of symbolic descriptions (aka types and sub-types) subsuming variants identified at instances level.

On that basis the challenge is to find the level of detail (targeted actual instances) and abstraction (symbolic footprint) that will best describe supporting systems functionalities. Such a level will have to meet two conditions:

  1. A minimal number of comprehensive and exclusive categories covering the structural variants of the sets of instances to be uniformly, consistently, and continuously identified by both enterprise and supporting systems.
  2. A consistent but adjustable set of types and sub-types anchored to the core structural categories and covering the functional variants.

Climbing up and down abstraction ladders looking for the right levels is arguably the critical part of conceptual modeling, but the search will greatly benefit from the distinction between models and meta-models. Assuming meta-models are meant to ignore domain specific features altogether, they introduce a qualitative gap on abstraction scales, as the respective hierarchies of models and meta-models target different kinds of instances. The modeling of agents and roles epitomizes the benefits of that distinction.

Abstraction & Meta Models

Taking customers for example, a naive approach would use Customer as a modeling type inheriting from a super-type, e.g Party. But then, if parties are to be uniformly identified (#), that would preclude any agent from playing multiple roles, e.g customer and supplier.

A separate description of parties and roles would clearly be a better option as it would unify the identification of the former without introducing unwarranted constraints on the latter which would then be defined and identified as the realization of a relationship played by a party.
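
A sketch of that option (names assumed), with each role identified as the realization of a relationship played by a party:

```python
from dataclasses import dataclass

@dataclass
class Party:
    """Uniformly identified agent (#), independent of the roles it plays."""
    party_id: str

@dataclass
class Role:
    """Realization of a relationship played by a party."""
    name: str        # e.g. "customer", "supplier"
    played_by: Party

acme = Party(party_id="ACME-001")
roles = [Role("customer", acme), Role("supplier", acme)]  # one party, two roles
```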

Not surprisingly, that distinction would also be congruent with the one between models and meta-models:

  • Meta-models will describe generic aspects independently of domain-specific considerations, in particular organizational context (units and roles) and interactions with systems (a).
  • Models will define Staff, Supplier, and Customer according to the semantics of the business considered (b).
Composition, partitions and specialization can be used along two different abstraction scales.

That distinction between abstraction scales can also be applied to the conceptual modeling of automation systems.

Abstraction Scales & Conceptual Models

To begin with definitions, conceptual representations could be used for all mental constructs, whereas symbolic representations would be used only for the subset earmarked for communication purposes. That would mean that, contrary to conceptual representations, which can be detached from business and enterprise practicalities, symbolic representations are necessarily built by design, and should be assessed accordingly. In our case the aim of such representations would be to describe the exchanges between business processes and supporting systems.

That understanding neatly fits the conceptual modeling of automation systems, whose purpose would be to consolidate generic and business specific abstraction scales, the former for the symbolic representation of the exchanges between business and systems, the latter for the symbolic representation of business contents.

At this point it must be noted that the scales are not necessarily aligned in continuity (with meta-models’ being higher and models’ being lower) as their respective ontologies may overlap (Organizational Entity and Party) or cross (Function and Role).

Toward an Ontological Framework for Enterprise Architectures Modeling

Along an analytic perspective, ontologies are meant to determine the categories that can comprehensively and consistently denote the instances of a domain under consideration; applied to enterprise concerns that would entail:

  • Thesaurus: for the whole range of terms and concepts.
  • Documents: for documents with regard to topics.
  • Business: for enterprise organization and business objects and activities.
  • Engineering: for systems surrogates associated to enterprise organization and business objects and activities.
Ontologies provide a common conceptual framework for models and meta-models

That would open the door to a seamless integration of business intelligence, systems engineering, knowledge management, and decision-making.

IoT & Real Time Activities

The world is the totality of facts, not of things.

Ludwig Wittgenstein

As the so-called internet of things (IoT) seems to bring together people, systems and devices, the meaning of real-time activities may have to be reconsidered.

Fact and Broadcast (Lucy Nicholson)

Things, Facts, Events

To begin with, as illustrated by marketed solutions like SIGFOX, the IoT can be described as a fast and stripped-down communication layer carrying not so much things as facts and associated raw (i.e non symbolic) events. That seems to cut across traditional understandings, because the IoT is dedicated to non symbolic devices yet may include symbolic systems, and fast communication may or may not mean real-time. So, when applications’ network requirements are to be considered, the focus should be on the way events are meant to be registered and processed.

Business Environments Cannot be Frozen

Given that time-frames are set according to primary events, real-time activities can be defined as exclusive ongoing events: their start initiates a proprietary time-frame perceived from the outside as being without duration, i.e as if nothing could happen until their completion, with activities targeting the same domain supposed to be frozen.

Contrary to operational timing constraints (left), real-time ones (right) are set against the specific (i.e event driven) time-frames of the targeted domain.

That principle can be understood as a generalization of the ACID (Atomicity, Consistency, Isolation, Durability) scheme used to guarantee that database transactions are processed reliably. Along that understanding a real-time business transaction would require that, whatever its actual duration, no change from other transactions would be accepted to its domain representation until the business transaction is completed and its associated outcomes duly committed. Yet, the hitch is that, contrary to systems transactions, there is no way to freeze actual business ones which will continue to be carried out notwithstanding suspended registrations.

Accesses can be fully synchronized within DB systems (single clock), suspended within functional architectures, consolidated within environment.

In that case the problem is not so much one of locks on DB as one of dynamic alignment of managed representations with the changing state of affairs in their actual counterpart.

Yoking Systems & Environments

As Einstein famously said, “the only reason for time is so that everything doesn’t happen at once”. Along that reasoning coupling constraints for systems can be analyzed with regard to the way events are notified and registered:

  • Input flows: what happens between changes in environment (aka facts) and their recording by applications (a).
  • Processing: could the application be executed fully based on locally available information, or be contingent on some information managed by systems at domain level (b).
  • Output flows: what happens between actions triggered by applications and the corresponding changes in the environment (c).
How to analyze the coupling between environment and system.

It’s important to note that real-time activities are not defined in absolute time units: they can be measured in microseconds as well as in aeons, and carried out by light sensors or by snails.

A Simple Decision Routine

Deciding on real-time requirements can therefore follow a straightforward routine:

  • Should changes in relevant external objects, processes, or expectations be instantly detected at the system’s boundaries? (a)
  • Could the interpretation and processing of associated events be carried out locally, or be contingent on information shared at domain level? (b)
  • Should subsequent actions on relevant external objects, processes, or expectations be carried out instantly? (c)
Coupling with the environment must be synchronous and footprint local or locked.

Positive answers to the three questions entail real-time requirements, as will also be the case if access to shared information is necessary.
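
Read literally, the routine boils down to a small predicate (a sketch of the rule stated above, not a prescription):

```python
def requires_real_time(
    instant_detection: bool,   # (a) changes instantly detected at the system's boundaries?
    local_processing: bool,    # (b) interpretation and processing feasible locally?
    instant_action: bool,      # (c) subsequent external actions carried out instantly?
) -> bool:
    # Positive answers to the three questions entail real-time requirements,
    # as does any necessary access to information shared at domain level (not b).
    return (instant_detection and local_processing and instant_action) or not local_processing
```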

What about IoT?

Strictly speaking, the internet of things is characterized by networked connections between non symbolic things. As it entails asynchronous communication and some symbolic mediation in between, one may assume that the IoT cannot support real-time activities. That assumption can be checked with some business cases given as examples.

Relating to Functions

Preamble

Functions map inputs to outputs as expected from a performing agent, whatever its nature (concrete or abstract) or means (physical, logical, or magical).
They are not to be confused with objectives (which don’t necessarily specify performing agents or detail inputs) nor with activities (which purport to describe concrete execution paths).

Functions and Processes are set along orthogonal dimensions (Simon Fujiwara)

Those distinctions are critical when the aim is to align business processes requirements with systems architecture capabilities.

Functions & Processes

Functions are complete (contrary to objectives) and abstract (contrary to activities) descriptions of what organizations (represented by actors), system architectures (represented by services), or objects (through operations) can do. As such they are akin to interfaces or types, and cannot be instantiated on their own. Processes, on the contrary, describe how activities are executed, i.e instantiated (#).

Business processes describe sets of execution instances (#). Functions describe what can be expected from enterprise or functional architectures. Business logic describes how the flows are to be processed.
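
In software terms, that distinction could be sketched as an interface (a function, with no instances of its own) realized by a process that can be instantiated and executed (the names below are assumptions for illustration):

```python
from abc import ABC, abstractmethod

class InvoicingFunction(ABC):
    """Function: maps inputs to expected outputs; akin to an interface, no instances."""
    @abstractmethod
    def invoice(self, order_id: str) -> str: ...

class NightlyInvoicingProcess(InvoicingFunction):
    """Process: describes how the activity is actually executed (#)."""
    def invoice(self, order_id: str) -> str:
        # business logic set within this specific process
        return f"invoice for {order_id} queued for the nightly batch"

run = NightlyInvoicingProcess()   # processes are instantiated ...
print(run.invoice("ORD-42"))      # ... functions only describe what is expected
```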

That understanding provides for a modular approach to business processes:

  • Business processes can be defined with regard to business functions independently of the way they are supported.
  • Business rules can be managed independently of the way they are applied, by people or systems.
  • Business logic can be factored out in functions (business or systems) or set within specific processes.

Yet that would not be possible without some modeling across enterprise architecture layers.

Functions & Models

Functions are meant to facilitate reuse across enterprise architectures, which entails descriptions that are clearly and easily accessible: context, modus operandi, expected outcome. Whatever the modeling method(s) in use, it’s safe to assume that different stakeholders across enterprise architectures will pursue different objectives, to be defined with different concepts. If they are to communicate they will need some explicit and unambiguous semantics for the links between processes, functions, and activities:

  • Functional flows are used between processes and functions (a) or actors (d), or between actors and functions (e).
  • Composition or aggregates are used to specify where the business logic is to be employed, by functions (b) or by processes (c).
  • Documentation references (f) are used between unspecified actors and business logic, in case it would be performed by people.
Semantics of connectors: functional flows (a,d,e), aggregates (b) and composition (c), and documentation (f).

Finally, the semantics of connectors used between functions will have to be consistent with the one used to connect them to processes and activities.

Combining Functions

Considering that functions are neatly set within the systems modeling realm, one would assume that inheritance and structure connectors can be used to detail and combine them. Yet, since functions cannot be instantiated, some paring down can be applied to their semantics:

  • Traditional structure connectors are set with regard to identification: bound to structure for composition, set independently otherwise. Since functions have no instances, that criterion is irrelevant, and the same reasoning goes for composition.
  • Likewise, since functions have no states to be considered, inheritance of functions can be represented by aggregates.
Functions can be combined at will using only aggregates

As far as functions are concerned, structures as well as inheritance connectors can be fully and soundly replaced by aggregate ones, which could significantly improve the mapping of business processes, activities, and supporting functions.
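
A sketch of such combinations (names assumed), with an aggregate function simply referencing the functions it combines:

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """Description of a function; no instances of its own to identify or bind."""
    name: str
    aggregates: list["Function"] = field(default_factory=list)  # combined functions

payment = Function("process payment")
shipping = Function("arrange shipping")
fulfillment = Function("fulfil order", aggregates=[payment, shipping])
```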

Focus: Capabilities vs Processes

Summary

Enterprise architecture being a nascent discipline, its boundaries and categories of concerns are still in the making. Yet, as blurs on pivotal concepts are bound to jeopardize further advances, clarification is called for regarding the concept of “capability”, whose meaning seems to dither somewhere between architecture, function, and process.

Jumping capability of a four-legged structure (Edgard de Souza)

Hence the benefits of applying definition guidelines to characterize capability with regard to context (architectures) and purpose (alignment between architectures and processes).

Context: Capability & Architecture

Assuming that a capability describes what can be done with a resource, applying the term to architectures would implicitly make them a mix of assets and mechanisms meant to support processes. As a corollary, such understanding would entail a clear distinction between architectures on one hand and supported processes on the other hand; that would, by the way, make an oxymoron of the expression “process architecture”.

On that basis, capabilities could be originally defined independently of business specificity, yet necessarily with regard to architecture context:

  • Business capabilities: what can be achieved given assets (technical, financial, human), organization, and information structures.
  • Systems capabilities: what kind of processes can be supported by systems functionalities.
  • Platforms capabilities: what kind of functionalities can be implemented.
Well established concepts are used to describe architecture capabilities

Taking a leaf from the Zachman framework, five core capabilities can be identified cutting across those architecture contexts:

  • Who: authentication and authorization for agents (human or otherwise) and roles dealing with the enterprise, using system functionalities, or connecting through physical entry points.
  • What: structure and semantics of business objects, symbolic representations, and physical records.
  • How: organization and versatility of business rules.
  • Where: physical location of organizational units, processing units, and physical entry points.
  • When: synchronization of process execution with regard to external events.

Being set with regard to architecture levels, those capabilities are inherently holistic and can only pertain to the enterprise as a whole, e.g for benchmarking. Yet that is not enough if the aim is to assess architecture capabilities with regard to supported processes.

Purpose: Capability vs Process

Given that capabilities describe architectural features, they can be defined independently of processes. Pushing the reasoning to its limit, one could, as illustrated by the table above, figure a capability without even the possibility of a process. Nonetheless, as the purpose of capabilities is to align supporting architectures and supported processes, processes must indeed be introduced, and the relationship addressed and assessed.

First of all, it’s important to note that trying to establish a direct mapping between capabilities and processes will be self-defeating as it would fly in the face of architecture understood as a shared construct of assets and mechanisms. Rather, the mapping of processes to architectures is best understood with regard to architecture level: traceable between requirements and applications, designed at system level, holistic at enterprise level.

Alignment with processes is mediated by architecture complexity.

Assuming a service oriented architecture, capabilities would be used to align enterprise and system architectures with their process counterparts:

  • Holistic capabilities will be aligned with business objectives set at enterprise level.
  • Services will be aligned with business functions and designed with regard to holistic capabilities.
Services are a perfect match for capabilities

Moreover, with or without service oriented architectures, that approach could still be used to map functional and non functional requirements to architectures capabilities.

Functional requirements are defined with regard to business processes, non functional ones with regard to system capabilities.

The alignment of non-functional requirements with architecture capabilities can be seen as a key factor for enterprise architectures, as it draws the line between what can be owned and managed by business units and what must be shared at enterprise level. It must also be noted that non-functional requirements should not be seen as a one-size-fits-all category but be defined by the footprint of business requirements on the technical architecture.

Alignment for Dummies

Summary

The emergence of Enterprise Architecture as a discipline of its own has put the spotlight on the necessary distinction between actual (aka business) and software (aka system) realms. Yet, despite a profusion of definitions for layers, tiers, levels, views, and other modeling perspectives, what should be a constitutive premise of system engineering remains largely ignored, namely: business and systems concerns are worlds apart, and bridging the gap is the main challenge of architects and analysts, whatever their preserve.

Alignment with Dummies (J. Baldessari)

The consequences of that neglect appear clearly when enterprise architects consider the alignment of systems architectures and capabilities on one hand, with enterprise organization and business processes on the other hand. Looking into the grey zone in between, some approaches will line up models according to their structure, assuming the same semantics on both sides of the divide; others will climb up the abstraction ladder until everything looks alike. Not surprisingly, with the core interrogation (i.e “what is to be aligned?”) removed from the equation, models will be turned into dummies enabling alignment to be carried out by simple pattern matching.

Models & Views

The abundance of definitions for layers, tiers or levels often masks two different understandings of models:

  • When models are understood as symbolic descriptions of sets of instances, each layer targets a different context with a different concern. That’s the basis of the Model Driven Architecture (MDA) and its distinction between Computation Independent Models (CIMs), Platform Independent Models (PIMs), and Platform Specific Models (PSMs).
  • When models are understood as symbolic descriptions built from different perspectives, all layers target the same context, each with a different concern. Along that understanding each view is associated with a specific aspect or level of abstraction: processes view, functional view, conceptual view, technical view, etc.

As it happens, many alignment schemes use, implicitly or explicitly, the second understanding without clarifying the underlying assumptions regarding the backbone of artifacts. That neglect is unfortunate because, to be of any significance, views will have to be aligned with regard to those artifacts.

What is to be aligned

From a general perspective, and beyond lexical controversies, alignment has to be managed with regard to two basic scales:

  • Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
  • Models: conceptual (business context and organization), analysis (symbolic representations), design (physical implementation).

From a practical point of view, alignment is meant to deal with two main problems: how business processes are supported by systems functionalities, and how those functionalities are to be implemented. Given that the latter can be fully dealt with at system level, the focus can be put on the alignment of business processes and functional architectures.

A naive solution could be to assume services on both processes and systems sides. Yet, the apparent symmetry covers a tautology: while aiming for services oriented architectures on the systems side would be legitimate, if not necessarily realistic, taking for granted that business processes also tally with services would presume some prior alignment, in other words that the problem has already been solved.

The pragmatic and logically correct approach is therefore to map business processes to system functionalities using whatever option is available, models (CIMs vs PIMs) or views (processes vs functions). And that is where the distinction between business and software semantics is critical: assuming the divide can be overlooked, some “shallow” alignment could be carried out directly, provided the models can be translated into some generic language; but if the divide is acknowledged, a “deep” alignment will have to be supported by a semantics bridge built across.

Shallow Alignment

Just like models are meant to describe sets of instances, meta-models are meant to describe instances of models independently of their respective semantics. Assuming a semantic continuity between business and systems models, meta-models like OMG’s KDM (Knowledge Discovery Meta-model) appear to provide a very practical solution to the alignment problem.

From a practical point of view, one may assume that no model of functional architecture is available because otherwise it would be aligned “by design” and there would be no problem. So something has to be “extracted” from existing software components:

  1. Software (aka design) models are translated into functional architectures.
  2. Models of business processes are made compatible with the generic language used for system models.
  3. Associations are made based on patterns identified on each side.

While the contents of the first and third steps are well defined and understood, that’s not the case for the second step, which takes for granted the availability of some agreed upon modeling semantics to be applied to both functional architecture and business processes. Unfortunately that assumption is both factually and logically inconsistent:

  • Factually inconsistent: it is denied by the plethora of candidates vying for the role, often with partial, overlapping, ambiguous, or conflicting semantics.
  • Logically inconsistent: it simply dodges the question (what’s the meaning of alignment between business processes and supporting systems) either by lumping together the semantics of the respective contexts and concerns, or by climbing up the ladder of abstraction until all semantic discrepancies are smoothed out.

Alignments built on that basis are necessarily shallow as they deal with artifacts regardless of their contents, like dummies in test plans. As a matter of fact the outcome will add nothing to traceability, which may be enough for trivial or standalone processes and applications, but is meaningless when applied at architecture level.

Deep Alignment

Compared to the shallow one, deep alignment, instead of assuming a wide but shallow commonwealth, tries to identify the minimal set of architectural concepts needed to describe alignment’s stake. Moreover, and contrary to the meta-modelling approach, the objective is not to find some higher level of abstraction encompassing the whole of models, but more reasonably to isolate the core of architecture concepts and constructs with shared and unambiguous meanings to be used by both business and system analysts.

That approach can be directly set along the MDA framework:

Deep alignment makes a distinction between what is at stake at architecture level (blue), from the specifics of process or domain (green), and design (brown).
  • Contexts descriptions (UML, DSL, BPM, etc) are not meant to distinguish between architectural constructs and specific ones.
  • Computation independent models (CIMs) describe business objects and processes combining core architectural constructs (using a generic language like UML), with specific business ones. The former can be mapped to functional architecture, the latter (e.g rules) directly transformed into design artifacts.
  • Platform independent models (PIMs) describe functional architectures using core constructs and framework stereotypes, possibly enriched with specific artifacts managed separately.
  • Platform specific models (PSMs) can be obtained through transformation from PIMs, generated using specific languages, or refactored from legacy code.

Alignment can thus focus on enterprise and systems architectural stakes, leaving the specific concerns to be dealt with separately, making the best of existing languages.

Alignment & Traceability

As mentioned above, comparing alignment with traceability may help to better understand its meaning and purpose.

  • Traceability is meant to deal with links between development artifacts from requirements to software components. Its main objective is to manage changes in software architecture and support decision-making with regard to maintenance and evolution.
  • Alignment is meant to deal with enterprise objectives and systems capabilities. Its main objective is to manage changes in enterprise architecture and support decision-making with regard to organization and systems architecture.

As a concluding remark, reducing alignment to traceability may counteract its very purpose and make it pointless as a tool for enterprise governance.

EA Documentation: Taking Words for Systems

In so many words

Given the clear-cut and unambiguous nature of software, how to explain the plethora of “standard” definitions pertaining to systems, not to mention enterprises, architectures?

Documents and Systems: which ones nurture the others (Gilles Barbier).

Tentative answers can be found with reference to the core functions documents are meant to support: instrument of governance, medium of exchange, and content storage.

Instrument of Governance: the letter of the law

The primary role of documents is to support the continuity of corporate identity and activities with regard to their regulatory and business environments. Along that perspective documents are to receive legal tender for the definitions of parties (collective or individuals), roles, and contracts. Such documents are meant to support the letter of the law, whether set at government, industry, or corporate level. When set at corporate level that letter may be used to assess the capability and maturity of architectures, organizations, and processes. Whatever the level, and given their role for legal tender or assessment, those documents have to rely on formal textual definitions, possibly supplemented with models.

Medium of Exchange: the spirit of the law

Independently of their formal role, documents are used as medium of exchange, across corporate entities as well as internally between their organizational units. When freed from legal or governance duties, such documents don’t have to carry authorized or frozen interpretations and assorted meanings can be discussed and consolidated in line with the spirit of the law. That makes room for model-based documents standing on their own, with textual definitions possibly set in the background. Given the importance of direct discussions in the interpretation of their contents, documents used as medium of (immediate) exchange should not be confused with those used as means of storage (exchange along time).

Means of Storage: letter only

Whatever their customary functions, documents can be used to store contents to be reinstated at a later stage. In that case, and contrary to direct (aka immediate) exchange, interpretations cannot be consolidated through discussion but have to stand on the letter of the documents themselves. When set by regulatory or organizational processes, canonical interpretations can be retrieved from primary contexts, concerns, or pragmatics. But things can be more problematic when storage is performed for its own purpose, without formal reference context. That can be illustrated by legacy applications whose binary code may be accompanied by self-documented source code, source with documentation, source with requirements, generated source with models, etc.

Documentation and Enterprise Architecture

Assuming that the governance of structured social organizations must be supported by comprehensive documentation, documents must be seen as a necessary and intrinsic component of enterprise architectures and their design should be aligned on concerns and capabilities.

As noted above, each of the basic functionalities comes with specific constraints; as a consequence a sound documentation policy should not mix functionalities. On that basis, documents should be defined by mapping purposes with users across enterprise architecture layers:

  • With regard to corporate environment, documentation requirements are set by legal constraints, directly (regulations and contracts) or indirectly (customary framework for transactions, traceability and audit).
  • With regard to organization, documents have to meet two different objectives. As a medium of exchange they are meant to support the collaboration between organizational units, both at business level (processes) and across architecture levels. As an instrument of governance they are used to assess architecture capabilities and processes performances. Documents supporting those objectives are best kept separate if negative side effects are to be avoided.
  • With regard to systems functionalities, documents can be introduced for procurements (governance), development (exchange), and change (storage).
  • Within systems, the objective is to support operational deployment and maintenance of software components.
Documents’ purposes and users

The next step will be to integrate documents pertaining to actual environments and organization (brown background) with those targeting symbolic artifacts (blue background).

Models are used to describe actual or symbolic objects and behaviors

That could be achieved with MBE/MDA approaches.
