As it’s safe to assume that a primary objective of process analysis is to align business concerns (by nature specific and changing) with enterprise architectures (meant to be shared and stable), events could provide a good starting point.
Event, concerns, processing (F. Handoko)
Business Analysis & Application Design
Taking a cue from the convincing track record of object oriented approaches for systems architectures and software design, the same principles have been applied to business requirements analysis. While that approach can be credited with significant achievements, success usually depends on some prior alignment of business domains with their system counterpart, in particular on the possibility to uniformly and consistently identify and define business entities as objects independently of operating processes.
Alternatively, when business entities cannot be readily identified upfront as system objects, analysis may start with organization, entitled agents, and activities, and proceed with the definition of business flows and associated entities.
So, and whatever the approach, the question is how to ensure that the applications under consideration are designed in accordance with architecture capabilities.
System Architecture & Software Design
Words are worth the difference they make: as long as systems were not much more than an assortment of software modules, architecture and design could be understood as one and the same. But nowadays a distinction may be overdue between, on one hand the design of software components run within a single system’s address space and time-frame and, on the other hand, architectures of systems set across different spaces and time-frames.
Architecture vs Design: words are worth the difference they make
Object oriented solutions (e.g Domain Driven Design) are arguably the option of choice for the former, but services oriented approaches may be a better fit for the latter. Not by chance, events provide a sound conceptual hinge between the two approaches.
Event Oriented Analysis vs Object Oriented Design
Object oriented principles can be streamlined around three core topics: (a) information hiding and coupling between structures and methods; (b) inheritance between types; (c) communication through interfaces and polymorphism.
OO principles can be streamlined around three topics: encapsulation (a), inheritance (b), and communication through interfaces (c).
Of these, encapsulation and inheritance are specific to software design, but communication mechanisms are also at the core of services oriented architectures. Considering messages as the logical system counterparts of business events, event-oriented analysis should help to align business processes with systems capabilities.
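To make the shared mechanism tangible, here is a minimal sketch (hypothetical names, not a prescribed design) of communication through interfaces: a dispatcher knows only the interface and lets polymorphism select the business logic, which is also what a service contract achieves at architecture level.

    // Minimal sketch with hypothetical names: polymorphic message handling.
    // The dispatcher knows only the interface; business logic stays encapsulated.
    import java.util.List;

    interface MessageHandler {
        boolean accepts(String messageType);
        void handle(String payload);
    }

    class OrderHandler implements MessageHandler {
        public boolean accepts(String messageType) { return "ORDER".equals(messageType); }
        public void handle(String payload) { System.out.println("Order logic on: " + payload); }
    }

    class PaymentHandler implements MessageHandler {
        public boolean accepts(String messageType) { return "PAYMENT".equals(messageType); }
        public void handle(String payload) { System.out.println("Payment logic on: " + payload); }
    }

    class Dispatcher {
        private final List<MessageHandler> handlers;
        Dispatcher(List<MessageHandler> handlers) { this.handlers = handlers; }

        // The same call is resolved by whichever handler accepts the message type.
        void dispatch(String messageType, String payload) {
            handlers.stream().filter(h -> h.accepts(messageType)).forEach(h -> h.handle(payload));
        }
    }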
From a business processes perspective, events signal changes in the states of activities, objects, or expectations. Given that supporting systems are meant to deal with those changes, the analysis of business requirements could proceed from the corresponding events (see the sketch after the blueprint below):
Business events are defined with regard to time-frames (a) and sources to be authenticated and authorized (b).
Triggering changes must be described by messages with regard to their functional (c) and operational (d) scope.
Business logic (e) and entities (f) are often shared across applications and therefore better defined independently.
Internal changes (same space and time-frame) are hidden.
Triggered (external) changes are defined with regard to time-frames (h), processes (d), and devices (g).
A simplified blueprint of Event Oriented Process Analysis
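For illustration only, the facets could be carried by an event descriptor along the following lines (field names and types are assumptions, not a prescribed schema); triggered external changes (g, h) would be conveyed by symmetric outbound messages:

    // Illustrative sketch: one possible shape for a business event record.
    import java.time.Instant;

    record BusinessEvent(
            Instant occurredAt,       // (a) time-frame of the business event
            String source,            // (b) source to be authenticated and authorized
            String functionalScope,   // (c) functional scope of the triggering message
            String operationalScope,  // (d) operational scope of the triggering message
            String payload            // the change itself; business logic (e) and entities (f)
                                      // are defined elsewhere, internal changes stay hidden
    ) {}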
As it happens, those facets can be aligned with OO design ones, with (c) and (d) for communication, (e) and (f) for encapsulation. On a broader perspective they also fit with the growing focus on event-driven applications and service oriented architectures.
From Event Oriented Process Analysis to Service Oriented Architectures
By moving business logic to the background, event-driven analysis fosters polymorphism at enterprise level with corresponding benefits:
With regard to business processes, events come with functional and operational requirements set independently of the business logic that will be carried out: trigger (what has changed), role (who is requesting), and message communication semantics (when the system is supposed to deal with the event).
With regard to system capabilities, messages can be used to align business (aka external) events with system (aka internal) ones, independently of the business entities and logic (what is to be done and how).
With regard to architecture and design, that approach upholds OO principles by dealing separately with polymorphic requests (interfaces) and business logic (methods).
Those benefits appear clearly when capabilities are realized by services defined with regard to business processes (customers), business objects (messages), business logic (contract), and business operations (policy).
Environment (bold) vs Services (italic)
It must be noted that services are part of functional architectures and as a consequence cannot be directly addressed by users or devices.
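As a hedged sketch (illustrative names only), such a service could be characterized along those four facets, keeping in mind that it is exposed to business processes, not directly to users or devices:

    // Illustrative sketch: a service characterized by customers, messages, contract, and policy.
    import java.util.Set;

    interface Contract {                    // business logic the service is bound by
        String outcomeFor(String message);
    }

    record ServiceDescription(
            Set<String> customers,          // business processes entitled to use the service
            Set<String> messageTypes,       // business objects exchanged as messages
            Contract contract,              // business logic (what the service commits to do)
            String policy                   // business operations: availability, priority, etc.
    ) {}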
Events & Action Semantics
With events set as modeling anchors, use cases may provide the modeling glue between processes and functional capabilities:
Triggering events (a) map changes in business environments (aka external events) to changes in systems objects (aka internal events).
Actors (b) map roles in organization to system users.
Messages (c) map the semantics of business processes to the semantics of applications (e) and domains (f).
Use cases (orange) provide a comprehensive and consistent mapping from processes (green) to services (blue).
On that basis, the main objective of event-oriented analysis would be to distinguish between communication and business semantics, the former dealing with interactions, the latter with business logic.
Following the recent publication of a new standard for the conceptual modeling of automation systems (Object-Process Methodology, ISO/PAS 19450:2015), it may be interesting to explore how it relates to abstraction and meta-models.
Meta-models are drawn along lean abstraction scales (Oskar Schlemmer)
Models & Meta Models
Just like models are meant to describe sets of actual instances, meta-models are meant to do the same for sets of modeling artifacts independently of their targets. Along that reasoning, conceptual modeling of automation systems could be achieved either with a single language covering all aspects, or with a meta-language dealing with different sets of models, e.g MDA’s computation independent, platform independent, and platform specific models.
Two alternative options for the modeling of automation systems: unified language, or a meta language covering technical (e.g PSMs), functional (e.g PIMs), and business (e.g CIMs) scopes.
Given a model based engineering framework (e.g. MDA), meta-models are generally used to support downstream model transformations targeting designs and code. But when upstream conceptual models are concerned, the challenge is to tackle the knowledge-to-systems transition. For that purpose some shared modeling roof is required for the definition of the symbolic footprint of the targeted business in the automation system under consideration.
Symbolic Footprint
Given that automation systems are meant to manage symbolic objects (aka surrogates), one should expect the distinction between actual instances and their symbolic representations to be the cornerstone of corresponding modeling languages. Along that reasoning, modeling of automation systems should start with the symbolic representation of actual business footprints, namely: the sets of objects, events, and processes, the roles played by agents (aka active objects), and the description of the associated states and rules. Containers would be added for the management of collections.
Automation systems modeling begins with the symbolic representation by systems of actual instances of business related objects and phenomena.
Next, as illustrated by the Object/Agent hierarchy, business worlds are not flat but built from sundry structures and facets to be represented by multiple levels of descriptions. That’s where abstractions are to be introduced.
Abstraction & Variants
The purpose of abstractions is to manage variants, and as such they can be used in two ways:
For partial descriptions of actual instances depending on targeted features. That can be achieved using composition (for structural variants) and partitions (for functional ones).
As hierarchies of symbolic descriptions (aka types and sub-types) subsuming variants identified at instances level.
On that basis the challenge is to find the level of detail (targeted actual instances) and abstraction (symbolic footprint) that will best describe supporting systems functionalities. Such a level will have to meet two conditions (see the sketch after the list):
A minimal number of comprehensive and exclusive categories covering the structural variants of the sets of instances to be uniformly, consistently, and continuously identified by both enterprise and supporting systems.
A consistent but adjustable set of types and sub-types anchored to the core structural categories and covering the functional variants.
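A hedged sketch of the two conditions, with illustrative names: a minimal structural category identified uniformly on both sides, a structural variant handled by composition, and a functional variant handled by a partition that can be adjusted without affecting identification.

    // Illustrative sketch: structural variants by composition, functional variants by partition.
    import java.util.Optional;

    enum ShipmentMode { STANDARD, EXPRESS, FROZEN }    // functional partition, adjustable

    record CustomsDeclaration(String tariffCode) {}    // structural part, present or not

    record Shipment(
            String shipmentId,                         // uniform, continuous identification
            ShipmentMode mode,                         // functional variant
            Optional<CustomsDeclaration> customs       // structural variant by composition
    ) {}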
Climbing up and down abstraction ladders looking for the right levels is arguably the critical part of conceptual modeling, but the search will greatly benefit from the distinction between models and meta-models. Assuming meta-models are meant to ignore domain specific features altogether, they introduce a qualitative gap on abstraction scales, as the respective hierarchies of models and meta-models are targeting different kinds of instances. The modeling of agents and roles epitomizes the benefits of that distinction.
Abstraction & Meta Models
Taking customers for example, a naive approach would use Customer as a modeling type inheriting from a super-type, e.g. Party. But then, if parties are to be uniformly identified (#), that would preclude any agent from playing multiple roles, e.g. customer and supplier.
A separate description of parties and roles would clearly be a better option as it would unify the identification of the former without introducing unwarranted constraints on the latter, which would then be defined and identified as the realization of a relationship played by a party.
Not surprisingly, that distinction would also be congruent with the one between models and meta-models:
Meta-models will describe generic aspects independently of domain-specific considerations, in particular organizational context (units and roles) and interactions with systems (a).
Models will define Staff, Supplier and Customer according to the semantics of the business considered (b).
Composition, partitions and specialization can be used along two different abstraction scales.
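A hedged sketch of that better option (illustrative names): parties are identified once, and roles are realizations of relationships played by parties rather than sub-types of Party, so the same party can play several roles without identification conflicts.

    // Illustrative sketch: parties identified once (#), roles defined as realizations
    // of relationships played by parties, with business semantics set by models (b).
    import java.util.List;

    record Party(String partyId, String name) {}          // uniform identification (#)

    interface Role {                                       // generic, meta-level description (a)
        Party player();
    }

    record Customer(Party player, String accountRef) implements Role {}   // business-specific (b)
    record Supplier(Party player, String contractRef) implements Role {}  // business-specific (b)

    class PartyRoleDemo {
        public static void main(String[] args) {
            Party acme = new Party("P-001", "Acme Ltd");
            // The same party plays two roles without any identification conflict.
            List<Role> roles = List.of(new Customer(acme, "ACC-42"), new Supplier(acme, "CTR-7"));
            roles.forEach(r -> System.out.println(r.player().name() + " as " + r.getClass().getSimpleName()));
        }
    }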
That distinction between abstraction scales can also be applied to the conceptual modeling of automation systems.
Abstraction Scales & Conceptual Models
To begin with definitions, conceptual representations could be used for all mental constructs, whereas symbolic representations would be used only for the subset earmarked for communication purposes. That would mean that, contrary to conceptual representations, which can be detached from business and enterprise practicalities, symbolic representations are necessarily built by design, and should be assessed accordingly. In our case the aim of such representations would be to describe the exchanges between business processes and supporting systems.
That understanding neatly fits the conceptual modeling of automation systems, whose purpose would be to consolidate generic and business specific abstraction scales, the former for the symbolic representation of the exchanges between business and systems, the latter for the symbolic representation of business contents.
At this point it must be noted that the scales are not necessarily aligned in continuity (with meta-models’ being higher and models’ being lower) as their respective ontologies may overlap (Organizational Entity and Party) or cross (Function and Role).
Toward an Ontological Framework for Enterprise Architectures Modeling
Along an analytic perspective, ontologies are meant to determine the categories that can comprehensively and consistently denote the instances of a domain under consideration; applied to enterprise concerns that would entail:
Thesaurus: for the whole range of terms and concepts.
Documents: for documents with regard to topics.
Business: for enterprise organization and business objects and activities.
Engineering: for systems surrogates associated with enterprise organization and business objects and activities.
Ontologies provide a common conceptual framework for models and meta-models
That would open the door to a seamless integration of business intelligence, systems engineering, knowledge management, and decision-making.
The world is the totality of facts, not of things.
Ludwig Wittgenstein
As the so-called internet of things (IoT) seems to bring together people, systems and devices, the meaning of real-time activities may have to be reconsidered.
Fact and Broadcast (Lucy Nicholson)
Things, Facts, Events
To begin with, as illustrated by marketed solutions like SIGFOX, the IoT can be described as a fast and stripped-down communication layer carrying not so much things as facts and associated raw (i.e. non symbolic) events. That seems to cut across traditional understandings because the IoT is dedicated to non symbolic devices yet may include symbolic systems, and fast communication may or may not mean real-time. So, when applications’ network requirements are to be considered, the focus should be on the way events are meant to be registered and processed.
Business Environments Cannot be Frozen
Given that time-frames are set according to primary events, real-time activities can be defined as exclusive ongoing events: their start initiates a proprietary time-frame perceived from the outside as being without duration, i.e. as if nothing could happen until their completion, with activities targeting the same domain assumed to be frozen.
Contrary to operational timing constraints (left), real-time ones (right) are set against the specific (i.e event driven) time-frames of targeted domain.
That principle can be understood as a generalization of the ACID (Atomicity, Consistency, Isolation, Durability) scheme used to guarantee that database transactions are processed reliably. Along that understanding a real-time business transaction would require that, whatever its actual duration, no change from other transactions be accepted to its domain representation until the business transaction is completed and its associated outcomes duly committed. Yet the hitch is that, contrary to systems transactions, there is no way to freeze actual business ones, which will continue to be carried out notwithstanding suspended registrations.
Accesses can be fully synchronized within DB systems (single clock), suspended within functional architectures, consolidated within environment.
In that case the problem is not so much one of locks on DB as one of dynamic alignment of managed representations with the changing state of affairs in their actual counterpart.
Yoking Systems & Environments
As Einstein famously said, “the only reason for time is so that everything doesn’t happen at once”. Along that reasoning coupling constraints for systems can be analyzed with regard to the way events are notified and registered:
Input flows: what happens between changes in environment (aka facts) and their recording by applications (a).
Processing: could the application be executed fully based on locally available information, or be contingent on some information managed by systems at domain level (b).
Output flows: what happens between actions triggered by applications and the corresponding changes in the environment (c).
How to analyze the coupling between environment and system.
It’s important to keep in mind that real-time activities are not defined in absolute time units: they can be measured in microseconds as well as in aeons, and carried out by light sensors or by snails.
A Simple Decision Routine
Deciding on real-time requirements can therefore follow a straightforward routine:
Should changes in relevant external objects, processes, or expectations be instantly detected at the system’s boundaries? (a)
Could the interpretation and processing of associated events be carried out locally, or be contingent on information shared at domain level? (b)
Should subsequent actions on relevant external objects, processes, or expectations be carried out instantly? (c)
Coupling with the environment must be synchronous and footprint local or locked.
Positive answers to the three questions entail real-time requirements, as will also be the case if access to shared information is necessary.
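One possible reading of the routine, as a hedged sketch (predicate names are assumptions):

    // Hedged sketch of the decision routine: positive answers to (a) and (c) entail
    // real-time requirements, as does any dependency on information shared at domain level (b).
    class RealTimeCheck {

        static boolean requiresRealTime(boolean instantDetection,        // (a)
                                        boolean needsSharedInformation,  // (b)
                                        boolean instantAction) {         // (c)
            return (instantDetection && instantAction) || needsSharedInformation;
        }

        public static void main(String[] args) {
            System.out.println(requiresRealTime(true, false, true));   // synchronous coupling: true
            System.out.println(requiresRealTime(false, false, false)); // asynchronous, local: false
        }
    }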
What about IoT?
Strictly speaking, the internet of things is characterized by networked connections between non symbolic things. As it entails asynchronous communication and some symbolic mediation in between, one may assume that the IoT cannot support real-time activities. That assumption can be checked with some business cases given as examples.
Functions map inputs to outputs as expected from a performing agent, whatever its nature (concrete or abstract) or means (physical, logical, or magical).
They are not to be confused with objectives (which don’t necessarily specify performing agents or detail inputs) nor with activities (which purport to describe concrete execution paths).
Functions and Processes are set along orthogonal dimensions (Simon Fujiwara)
Functions are complete (contrary to objectives) and abstract (contrary to activities) descriptions of what organizations (represented by actors), system architectures (represented by services), or objects (through operations) can do. As such they are akin to interfaces or types, and cannot be instantiated on their own. Processes on the contrary describe how activities are executed, i.e. instantiated (#).
Business processes describe sets of execution instances (#). Functions describe what can be expected from enterprise or functional architectures. Business logic describes how the flows are to be processed.
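In design terms, a hedged sketch (illustrative names) of the distinction: the function is an interface that cannot be instantiated on its own, while the process describes how execution instances are carried out, whatever agent realizes the function.

    // Illustrative sketch: a function as a non-instantiable interface, a process as
    // the description of how execution instances are carried out.
    import java.math.BigDecimal;

    interface CreditScoring {                        // function: what can be done, by any agent
        int scoreOf(String customerId);
    }

    class ManualScoring implements CreditScoring {   // one realization, e.g. people plus documentation
        public int scoreOf(String customerId) { return 600; }
    }

    class LoanApprovalProcess {                      // process: how executions are instantiated
        private final CreditScoring scoring;         // business logic factored out into the function
        LoanApprovalProcess(CreditScoring scoring) { this.scoring = scoring; }

        boolean approve(String customerId, BigDecimal amount) {
            return scoring.scoreOf(customerId) > 650 || amount.intValue() < 1000;
        }
    }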
That understanding provides for a modular approach to business processes:
Business processes can be defined with regard to business functions independently of the way they are supported.
Business rules can be managed independently of the way they are applied, by people or systems.
Business logic can be factored out in functions (business or systems) or set within specific processes.
Yet that would not be possible without some modeling across enterprise architecture layers.
Functions & Models
Functions are meant to facilitate reuse across enterprise architectures, which entails descriptions that are clearly and easily accessible: context, modus operandi, expected outcome. Whatever the modeling method(s) in use, it’s safe to assume that different stakeholders across enterprise architectures will pursue different objectives, to be defined with different concepts. If they are to communicate they will need some explicit and unambiguous semantics for the links between processes, functions, and activities:
Functional flows are used between processes and functions (a) or actors (d), or between actors and functions (e).
Composition or aggregates are used to specify where the business logic is to be employed, by functions (b) or by processes (c).
Documentation references (f) are used between unspecified actors and business logic, in case it would be performed by people.
Semantics of connectors: functional flows (a,d,e), aggregates (b) and composition (c), and documentation (f).
Finally, the semantics of connectors used between functions will have to be consistent with the one used to connect them to processes and activities.
Combining Functions
Considering that functions are neatly set within the systems modeling realm, one would assume that inheritance and structure connectors can be used to detail and combine them. Yet, since functions cannot be instantiated, some paring down can be applied to their semantics:
Traditional structure connectors are set with regard to identification: bound to the structure for composition, set independently otherwise. Since functions have no instances, that criterion is irrelevant, and composition adds nothing that aggregation cannot provide.
Likewise, since functions have no states to be considered, inheritance of functions can be represented by aggregates.
Functions can be combined at will using only aggregates
As far as functions are concerned, structure as well as inheritance connectors can be fully and soundly replaced by aggregates, which could significantly improve the mapping of business processes, activities, and supporting functions.
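A hedged sketch of that paring down (illustrative names): since functions have neither instances nor states, an aggregate function simply references the functions it combines, with no composition or inheritance involved.

    // Illustrative sketch: functions combined with aggregates only.
    import java.util.List;

    interface BusinessFunction {
        String name();
        List<BusinessFunction> aggregates();   // component functions, possibly empty
    }

    record LeafFunction(String name) implements BusinessFunction {
        public List<BusinessFunction> aggregates() { return List.of(); }
    }

    record AggregateFunction(String name, List<BusinessFunction> aggregates)
            implements BusinessFunction {}

    class FunctionDemo {
        public static void main(String[] args) {
            BusinessFunction fulfillment = new AggregateFunction("Order fulfillment",
                    List.of(new LeafFunction("Invoicing"), new LeafFunction("Shipping")));
            System.out.println(fulfillment.name() + " aggregates "
                    + fulfillment.aggregates().size() + " functions");
        }
    }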
Enterprise architecture being a nascent discipline, its boundaries and categories of concerns are still in the making. Yet, as blurred pivotal concepts are bound to jeopardize further advances, clarification is called for regarding the concept of “capability”, whose meaning seems to dither somewhere between architecture, function, and process.
Jumping capability of a four-legged structure (Edgard de Souza)
Hence the benefits of applying definition guidelines to characterize capability with regard to context (architectures) and purpose (alignment between architectures and processes).
Context: Capability & Architecture
Assuming that a capability describes what can be done with a resource, applying the term to architectures would implicitly make them a mix of assets and mechanisms meant to support processes. As a corollary, such understanding would entail a clear distinction between architectures on one hand and supported processes on the other hand; that would, by the way, make an oxymoron of the expression “process architecture”.
On that basis, capabilities could be initially defined independently of business specifics, yet necessarily with regard to architecture context:
Business capabilities: what can be achieved given assets (technical, financial, human), organization, and information structures.
Systems capabilities: what kind of processes can be supported by systems functionalities.
Platforms capabilities: what kind of functionalities can be implemented.
Well established concepts are used to describe architecture capabilities
Taking a leaf from the Zachman framework, five core capabilities can be identified cutting across those architecture contexts (see the sketch after the list):
Who: authentication and authorization for agents (human or otherwise) and roles dealing with the enterprise, using system functionalities, or connecting through physical entry points.
What: structure and semantics of business objects, symbolic representations, and physical records.
How: organization and versatility of business rules.
Where: physical location of organizational units, processing units, and physical entry points.
When: synchronization of process execution with regard to external events.
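For illustration, the resulting grid could be kept as a simple structure crossing the five capabilities with the three architecture contexts (a hedged sketch, not a prescribed taxonomy):

    // Hedged sketch: the five core capabilities crossed with architecture contexts.
    import java.util.EnumMap;
    import java.util.Map;

    enum Capability { WHO, WHAT, HOW, WHERE, WHEN }
    enum ArchitectureContext { ENTERPRISE, SYSTEMS, PLATFORMS }

    class CapabilityGrid {
        // One description per (capability, context) pair, e.g. WHO at SYSTEMS level:
        // authentication and authorization of system users.
        private final Map<Capability, Map<ArchitectureContext, String>> grid =
                new EnumMap<>(Capability.class);

        void describe(Capability c, ArchitectureContext ctx, String description) {
            grid.computeIfAbsent(c, k -> new EnumMap<>(ArchitectureContext.class)).put(ctx, description);
        }

        String descriptionOf(Capability c, ArchitectureContext ctx) {
            return grid.getOrDefault(c, Map.of()).getOrDefault(ctx, "not assessed");
        }
    }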
Being set with regard to architecture levels, those capabilities are inherently holistic and can only pertain to the enterprise as a whole, e.g. for benchmarking. Yet that is not enough if the aim is to assess architecture capabilities with regard to supported processes.
Purpose: Capability vs Process
Given that capabilities describe architectural features, they can be defined independently of processes. Pushing the reasoning to its limit, one could, as illustrated by the table above, figure a capability without even the possibility of a process. Nonetheless, as the purpose of capabilities is to align supporting architectures and supported processes, processes must indeed be introduced, and the relationship addressed and assessed.
First of all, it’s important to note that trying to establish a direct mapping between capabilities and processes will be self-defeating as it would fly in the face of architecture understood as a shared construct of assets and mechanisms. Rather, the mapping of processes to architectures is best understood with regard to architecture level: traceable between requirements and applications, designed at system level, holistic at enterprise level.
Alignment with processes is mediated by architecture complexity.
Assuming a service oriented architecture, capabilities would be used to align enterprise and system architectures with their process counterparts:
Holistic capabilities will be aligned with business objectives set at enterprise level.
Services will be aligned with business functions and designed with regard to holistic capabilities.
Services are a perfect match for capabilities
Moreover, with or without service oriented architectures, that approach could still be used to map functional and non functional requirements to architectures capabilities.
Functional requirements are defined with regard to business processes, non functional ones with regard to system capabilities.
The alignment of non-functional requirements with architectures capabilities can be seen as a key factor for enterprise architectures as it draws the line between what can be owned and managed by business units and what must be shared at enterprise level. It must also be noted that non-functional requirements should not be seen as a one-fits-all category but be defined by the footprint of business requirements on technical architecture.
The emergence of Enterprise Architecture as a discipline of its own has put the spotlight on the necessary distinction between actual (aka business) and software (aka system) realms. Yet, despite a profusion of definitions for layers, tiers, levels, views, and other modeling perspectives, what should be a constitutive premise of system engineering remains largely ignored, namely: business and systems concerns are worlds apart, and bridging the gap is the main challenge of architects and analysts, whatever their preserve.
Alignment with Dummies (J. Baldessari)
The consequences of that neglect appear clearly when enterprise architects consider the alignment of systems architectures and capabilities on one hand, with enterprise organization and business processes on the other hand. Looking into the grey zone in between, some approaches will line up models according to their structure, assuming the same semantics on both sides of the divide; others will climb up the abstraction ladder until everything looks alike. Not surprisingly, with the core interrogation (i.e. “what is to be aligned?”) removed from the equation, models will be turned into dummies enabling alignment to be carried out by simple pattern matching.
Models & Views
The abundance of definitions for layers, tiers or levels often masks two different understandings of models:
When models are understood as symbolic descriptions of sets of instances, each layer targets a different context with a different concern. That’s the basis of the Model Driven Architecture (MDA) and its distinction between Computation Independent Models (CIMs), Platform Independent Models (PIMs), and Platform Specific Models (PSMs).
When models are understood as symbolic descriptions built from different perspectives, all layers target the same context, each with a different concern. Along that understanding each view is associated to a specific aspect or level of abstraction: processes view, functional view, conceptual view, technical view, etc.
As it happens, many alignment schemes use, implicitly or explicitly, the second understanding without clarifying the underlying assumptions regarding the backbone of artifacts. That neglect is unfortunate because, to be of any significance, views will have to be aligned with regard to those artifacts.
What is to be aligned
From a general perspective, and beyond lexical controversies, alignment has to be managed with regard to two basic scales:
Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
From a practical point of view, alignment is meant to deal with two main problems: how business processes are supported by systems functionalities, and how those functionalities are to be implemented. Given that the latter can be fully dealt with at system level, the focus can be put on the alignment of business processes and functional architectures.
A naive solution could be to assume services on both processes and systems sides. Yet, the apparent symmetry covers a tautology: while aiming for services oriented architectures on the systems side would be legitimate, if not necessarily realistic, taking for granted that business processes also tally with services would presume some prior alignment, in other words that the problem has already been solved.
The pragmatic and logically correct approach is therefore to map business processes to system functionalities using whatever option is available, models (CIMs vs PIMs) or views (processes vs functions). And that is where the distinction between business and software semantics is critical: assuming the divide can be overlooked, some “shallow” alignment could be carried out directly, provided the models can be translated into some generic language; but if the divide is acknowledged, a “deep” alignment will have to be supported by a semantic bridge built across.
Shallow Alignment
Just like models are meant to describe sets of instances, meta-models are meant to describe instances of models independently of their respective semantics. Assuming a semantic continuity between business and systems models, meta-models like OMG’s KDM (Knowledge Discovery Meta-model) appear to provide a very practical solution to the alignment problem.
From a practical point of view, one may assume that no model of functional architecture is available because otherwise it would be aligned “by design” and there would be no problem. So something has to be “extracted” from existing software components:
Software (aka design) models are translated into functional architectures.
Models of business processes are made compatible with the generic language used for system models.
Associations are made based on patterns identified on each side.
While the contents of the first and third steps are well defined and understood, that’s not the case for the second step, which takes for granted the availability of some agreed upon modeling semantics to be applied to both functional architecture and business processes. Unfortunately that assumption is both factually and logically inconsistent:
Factually inconsistent: it is denied by the plethora of candidates vying for the role, often with partial, overlapping, ambiguous, or conflicting semantics.
Logically inconsistent: it simply dodges the question (what’s the meaning of alignment between business processes and supporting systems) either by lumping together the semantics of the respective contexts and concerns, or by climbing up the ladder of abstraction until all semantic discrepancies are smoothed out.
Alignments built on that basis are necessarily shallow as they deal with artifacts regardless of their contents, like dummies in test plans. As a matter of fact the outcome will add nothing to traceability, which may be enough for trivial or standalone processes and applications, but is meaningless when applied at architecture level.
Deep Alignment
Instead of assuming a wide but shallow semantic commonwealth, deep alignment tries to identify the minimal set of architectural concepts needed to describe what is at stake. Moreover, and contrary to the meta-modelling approach, the objective is not to find some higher level of abstraction encompassing the whole of models, but more reasonably to isolate the core of architecture concepts and constructs with shared and unambiguous meanings, to be used by both business and system analysts.
That approach can be directly set along the MDA framework:
Deep alignment makes a distinction between what is at stake at architecture level (blue), from the specifics of process or domain (green), and design (brown).
Contexts descriptions (UML, DSL, BPM, etc) are not meant to distinguish between architectural constructs and specific ones.
Computation independent models (CIMs) describe business objects and processes combining core architectural constructs (using a generic language like UML), with specific business ones. The former can be mapped to functional architecture, the latter (e.g rules) directly transformed into design artifacts.
Platform independent models (PIMs) describe functional architectures using core constructs and framework stereotypes, possibly enriched with specific artifacts managed separately.
Platform specific models (PSMs) can be obtained through transformation from PIMs, generated using specific languages, or refactored from legacy code.
Alignment can thus focus on enterprise and systems architectural stakes, leaving specific concerns to be dealt with separately, making the best of existing languages.
Alignment & Traceability
As mentioned above, comparing alignment with traceability may help to better understand its meaning and purpose.
Traceability is meant to deal with links between development artifacts from requirements to software components. Its main objective is to manage changes in software architecture and support decision-making with regard to maintenance and evolution.
Alignment is meant to deal with enterprise objectives and systems capabilities. Its main objective is to manage changes in enterprise architecture and support decision-making with regard to organization and systems architecture.
As a concluding remark, reducing alignment to traceability may counteract its very purpose and make it pointless as a tool for enterprise governance.
Given the clear-cut and unambiguous nature of software, how to explain the plethora of “standard” definitions pertaining to systems architectures, not to mention enterprise ones?
Documents and Systems: which ones nurture the others (Gilles Barbier).
Tentative answers can be found with reference to the core functions documents are meant to support: instrument of governance, medium of exchange, and content storage.
Instrument of Governance: the letter of the law
The primary role of documents is to support the continuity of corporate identity and activities with regard to their regulatory and business environments. Along that perspective documents are to receive legal tender for the definitions of parties (collective or individuals), roles, and contracts. Such documents are meant to support the letter of the law, whether set at government, industry, or corporate level. When set at corporate level that letter may be used to assess the capability and maturity of architectures, organizations, and processes. Whatever the level, and given their role for legal tender or assessment, those documents have to rely on formal textual definitions, possibly supplemented with models.
Medium of Exchange: the spirit of the law
Independently of their formal role, documents are used as medium of exchange, across corporate entities as well as internally between their organizational units. When freed from legal or governance duties, such documents don’t have to carry authorized or frozen interpretations and assorted meanings can be discussed and consolidated in line with the spirit of the law. That makes room for model-based documents standing on their own, with textual definitions possibly set in the background. Given the importance of direct discussions in the interpretation of their contents, documents used as medium of (immediate) exchange should not be confused with those used as means of storage (exchange along time).
Means of Storage: letter only
Whatever their customary functions, documents can be used to store contents to be reinstated at a later stage. In that case, and contrary to direct (aka immediate) exchange, interpretations cannot be consolidated through discussion but have to stand on the letter of the documents themselves. When set by regulatory or organizational processes, canonical interpretations can be retrieved from primary contexts, concerns, or pragmatics. But things can be more problematic when storage is performed for its own purpose, without formal reference context. That can be illustrated by legacy applications whose binary code can be accompanied by self-documented source code, source with documentation, source with requirements, generated source with models, etc.
Documentation and Enterprise Architecture
Assuming that the governance of structured social organizations must be supported by comprehensive documentation, documents must be seen as a necessary and intrinsic component of enterprise architectures and their design should be aligned on concerns and capabilities.
As noted above, each of the basic functionalities comes with specific constraints; as a consequence a sound documentation policy should not mix functionalities. On that basis, documents should be defined by mapping purposes with users across enterprise architecture layers:
With regard to corporate environment, documentation requirements are set by legal constraints, directly (regulations and contracts) or indirectly (customary framework for transactions, traceability and audit).
With regard to organization, documents have to meet two different objectives. As a medium of exchange they are meant to support the collaboration between organizational units, both at business level (processes) and across architecture levels. As an instrument of governance they are used to assess architecture capabilities and process performance. Documents supporting those objectives are best kept separate if negative side effects are to be avoided.
With regard to systems functionalities, documents can be introduced for procurements (governance), development (exchange), and change (storage).
Within systems, the objective is to support operational deployment and maintenance of software components.
Documents’ purposes and users
The next step will be to integrate documents pertaining to actual environments and organization (brown background) with those targeting symbolic artifacts (blue background).
Models are used to describe actual or symbolic objects and behaviors
[Enterprise architects] “…are like sailors who have to rebuild their ship on the open sea, without ever being able to dismantle it in dry dock and reconstruct it from the best components.”
Otto Neurath
Objective
Modeling is all too often a flight for abstraction when analysts should instead get their bearings and look for the proper level of representation, i.e the one best fitting their concerns. As a consequence, many debates that seem baffling when revolving around abstraction levels may suddenly clear up when reset in terms of artifacts and symbolic representations.
Models, artifacts, and the emergence of designs (R. Magritte)
That is especially the case for enterprise architectures which, contrary to system ones, cannot be reduced to planned designs but seem to emerge from a mix of cultural sediments, economic factors, technology constraints, and strategic planning.
Hence the need to understand the relationships between enterprise contexts, organization and processes on one hand, and their symbolic counterparts in systems on the other hand.
Artifacts & Models
When architectures are considered, a distinction should first be made between artifacts (e.g buildings) and models (blueprints), the former being manufactured objects designed and built on purpose, the latter symbolic artifacts reflecting those purposes and how to meet them.
Blueprints are used to design and build physical objects according to purposes.
That distinction between artifacts and symbolic descriptions is easy to make for physical objects built from plans, less so for symbolic objects, which are artifacts in their own right and as such are begotten from symbolic descriptions. In other words symbolic artifacts crop up as designs as well as final products.
Symbolic artifacts have to be designed before being implemented as objects of their own.
Moreover, artifacts being used in contexts, their description must also include modus operandi. For enterprises that would mean business objectives, organization, and processes.
Business process: how to use artifacts and manage associated information.
Two kinds of models can be used to figure out actual contexts and activities with their symbolic counterpart in enterprise systems:
Models of business contexts and processes are descriptive as their aim is to build categories of actual or planned objects, assets, and activities.
Models of systems and software are prescriptive as their aim is to design and build the symbolic artifacts used by systems to represent business objects and processes.
Actual (orange) and symbolic (blue) views correspond to technical and software architectures.
That distinction can lend support to the main challenge of enterprise architects, namely the seamless and dynamic alignment of enterprise objectives, assets, and organization on one hand, with supporting systems on the other hand.
Architecture & Design
Architecture and design may have a number of overlapping features yet they clearly differ with regard to software: contrary to architecture, software design is meant to fully describe how to implement system components. That difference is especially meaningful for enterprise architecture:
At enterprise level models are used to describe objects and activities from a business perspective, independently of their representation by system components. Whatever the nature of targeted objects and activities (physical or symbolic, current or planned), models are meant to describe business units (actual or required) identified and managed at enterprise level.
At system level models are used to describe software components. Given that systems are meant to represent business contexts and support business processes, their architecture has to be aligned on the units managed at enterprise level.
Assuming that functional, persistency, and execution units must be uniquely and consistently identified at both enterprise and systems level, their respective models have to share some common infrastructure.
Architecture models overlap for enterprise and systems, design models are only used for systems.
The overlapping of models with regard to enterprise and systems architectures and their yoking into systems design determine the background of architectures transformations.
Abstractions and Changes
If some continuity is to be maintained across architectures mutations, modeling abstractions are needed to frame and consolidate changes at both enterprise and system levels.
From the enterprise standpoint the primary factor is the continuity and consistency of corporate identity and activities. For that purpose abstractions will have to target functional, persistency, and execution units. Definitions of those abstract units will provide the backbone of enterprise architecture (a). That backbone can then be independently fleshed out with features, provided identified structures of objects and activities are not affected (b).
From the systems standpoint the objective is the alignment of system and enterprise units on one hand, the effectiveness of technical architecture on the other hand. For that purpose abstract architecture units (reflecting enterprise units) are mapped to system units (c), whose design will be carried on independently (d).
Identified enterprise units (a) are detailed (b), (c) to be further designed (d).
That should determine the right level of abstraction, namely when corresponding abstract units can be used to align enterprise and systems ones.
Once securely locked to a common architecture backbone, enterprise and system models can be expanded according to their respective concerns, business and organization for the former, technology and platforms implementation for the latter. On that basis primary changes can be analyzed in terms of specialization and extension.
Specialization will change the local features of enterprise or systems units without affecting their identification or semantics at architecture level:
With regard to enterprise, entry points (a1), features (a2), business rules (a3), and control rules (a4) will be added, modified or removed.
With regard to systems, designs will be modified or new ones introduced in response to changes in enterprise or technological environments.
Basic architectural changes (enterprise level)
Contrary to specialization, architecture extension changes enterprise or systems units in ways affecting their identification, semantics or implementation at architecture level:
With regard to enterprise, entry points locations (b1), semantic domains (b2), business applications (b3), and processes (b4) will be added, modified or removed.
With regard to systems, changes in platforms implementations following new technologies or operational requirements.
Hence, while specialization will not affect the architecture backbone, that’s not the case for extension. More critically, the impact of extensions may not be limited to basic changes to backbones as inheritance may also affect the identification mechanisms and semantics of existing units. That happens when abstract descriptions are introduced for aspects that cannot be identified on their own but only when associated to some identified object or behavior.
That can be illustrated by a banking example of a transition from account-based to customer-based management:
To begin with, let’s assume a single process for accounts, with customers represented as aspects of accounts.
Then, in order to support customers relationship management, customers become entities of their own, identified independently of accounts.
Finally, roles and types are introduced as abstract descriptions (not identified on their own) in order to characterize actual parties (customer, supplier, etc) and accounts (current, savings, insurance, etc).
When architectures grow extension can change identification mechanisms and semantics
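A hedged sketch of the transition (illustrative types), with the first two steps shown as comments and the third introducing abstract descriptions that cannot be identified on their own:

    // Illustrative sketch of the account-based to customer-based transition.

    // (1) Account-based: customers are mere aspects of accounts, e.g.
    //     record Account(String accountNumber, String customerName) {}
    // (2) Customer-based: customers become entities identified independently, e.g.
    //     record Customer(String customerId, String name) {}
    //     record Account(String accountNumber, Customer holder) {}

    // (3) Roles and types are abstract descriptions: they characterize identified
    //     parties and accounts but are never identified on their own.
    abstract class PartyRole { abstract String label(); }       // customer, supplier, ...
    class CustomerRole extends PartyRole { String label() { return "customer"; } }

    abstract class AccountType { abstract double feeRate(); }   // current, savings, insurance, ...
    class SavingsType extends AccountType { double feeRate() { return 0.0; } }

    record Party(String partyId, String name, PartyRole role) {}
    record Account(String accountNumber, Party holder, AccountType type) {}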
That modeling shift from concrete to abstract descriptions can be seen as the hinge connecting changes in systems and enterprise architectures.
Eppur si muove
As enterprises grow and extend, architectures become more complex and have to be supported by symbolic representations of whatever is needed for their management: assets, roles, activities, mechanisms, etc. As a consequence, models of enterprise architectures have to deal with two kinds of targets, actual assets and processes on one hand, their symbolic representation as system objects on the other hand.
This apparent symmetry can be misleading as the former models are meant to reflect a reality but the latter ones are used to produce one. In other words there is no guarantee that their alignment can be comprehensively and continuously maintained. Yet, as Galileo purportedly once said of the Earth circling the Sun despite models of the contrary, it moves. So, what are the primary factors behind moves in enterprise architectures ?
What moves first: actual contexts and processes or enterprise abstractions.
Assuming that enterprise architecture entails some kind of documentation, changes in actual contexts will induce new representations of objects and processes. At this point, the corresponding changes in models directly reflect actual changes, but the reverse isn’t true. For that to happen, i.e. for business objects and processes to be drawn from models, the bonds between actual and symbolic descriptions have to be loosened, giving some latitude for the latter to be modified independently of their actual counterparts. As noted above, specialization will do that for local features, but for changes to architecture units to be carried out from models, abstractions are a prerequisite.
Emerging Architectures and Grey Matter
As already noted, actual-oriented models describe instances of business objects and processes, while symbolic-oriented ones describe representations, both at instances level (aka concrete descriptions) and types level (aka abstract descriptions). As a corollary, changes in actual-oriented models directly reflect changes in contexts and processes (a); that’s not necessarily the case for symbolic-oriented models which can also take into account intended changes (b) to be translated into concrete targets descriptions at a later stage (c).
Emergence of architectural features is best observed when abstractions (italics) are introduced.
Obviously the room left for conjured up architectural changes is bounded by deterministic factors; nonetheless, newly thought-up functional features are bound to appear first, if at all, as abstract descriptions, and that’s where emerging architectures are best to be observed.
At that tipping point, and assuming a comprehensive understanding of objective factors (business logic, data structures, operational constraints, etc), the influence of non deterministic factors upon emerging architectures can be probed from two directions: pushing from the past or pulling from the future.
The past will make its mark through existing organizational structures and roles. Knowledge, power bases, and habits are much less pliable than processes and systems. When forced to change they are bound to bend the options, and not necessarily through informed decision making.
Conversely, the assessment of future events, non deterministic by nature, is the result of decision making processes mixing explicit rationale with more obscure collective biases. Those collective leanings will often leave their mark on the way changes in contexts are anticipated, risks weighted, and business objectives defined.
Those non deterministic influences are rooted in some enterprise psyche that steers individual behaviors and collective decisions. Like the hypothetical dark matter conjectured by astronomers in order to explain the mass of the universe, that grey matter of corporate entities is the shadow counterpart of actual systems, necessary to explain their position with regard to enterprise contexts, objectives, and organization.
Emerging Architectures as Systems Epigenetics
Epigenetics can be used as a metaphor to illustrate the relationships between enterprise architectures and environments.
To begin with, enterprises are compared to organisms, systems to organs and cells, and models (including source code) to the genome coded in DNA.
According to classical genetics, phenotypes (actual forms and capabilities of organisms) inherit through the copy of genotypes and changes between generations can only be carried out through changes in genotypes. Applied to systems, it would entail that changes can only happen after being programmed into the applications supporting enterprise organization and business processes.
Systems Genetics & Epigenetics
The Extended Evolutionary Synthesis considers the impact of non coded (aka epigenetic) factors on the transmission of the genotype between generations. Applying the same principles to systems would introduce new mechanisms:
Enterprise organization and use of systems could be adjusted to changes in environments prior to changes in coded applications.
Enterprise architects could assess those changes, plan systems evolution, and use abstractions to consolidate new designs with legacy applications.
Models would be transformed accordingly.
As for genetics, that understanding of enterprise architectures would put the onus of change on the cells, in that case the plasticity and versatility of applications.
Computation Independent Models (CIMs) describe business objects and activities independently of supporting systems.
Platform Independent Models (PIMs) describe systems functionalities independently of platforms technologies.
Platform Specific Models (PSMs) describe systems components as implemented by specific technologies.
Since those layers can be mapped respectively to enterprise, functional, and technical architectures, the question is how to make heads or tails of the driving: should architectures be set along model layers, or should models be organized according to architecture levels?
A Dog Making Head or Tail (Judy Kensley McKie)
In other words, has some typo reversed the original “architecture driven modeling” (ADM) into “model driven architecture” (MDA) ?
Wrong Spelling, Right Concepts
A confusing spelling should not mask the soundness and relevance of the approach: MDA model layers effectively correspond to a clear hierarchy of problems and solutions:
Computation Independent Models describe how business processes support enterprise objectives.
Platform Independent Models describe how systems functionalities support business processes.
Platform Specific Models describe how platforms implement systems functionalities.
MDA layers correspond to a clear hierarchy of problems and solutions
That should leave no room for ambiguity: regardless of the misleading “MDA” moniker, the modeling of systems is meant to be driven by enterprise concerns and therefore to follow architecture divides.
Architectures & Assets Reuse
As it happens, the “MDA” term is doubly confusing as it also blurs the distinction between architectures and processes. And that’s unfortunate because the reuse of architectural assets by development processes is at the core of the MDA framework:
Business objects and logic (CIM) are defined independently of the functional architectures (PIM) supporting them.
Functional architectures (PIM) are defined independently of implementation platforms (PSM).
Technical architectures (PSM) are defined independently of deployment configurations.
MDA layers coincide with categories of reusable assets
Under that perspective the benefits of the “architecture driven” understanding (as opposed to the “model driven” one) appear clearly for both aspects of enterprise governance:
Systems governance can be explicitly and transparently aligned on enterprise organization and business objectives.
Business and development processes can be defined, assessed, and optimized with regard to the reuse of architectural assets.
With the relationship between architectures and processes straightened out and architecture reinstated as the primary factor, it’s possible to reexamine the contents of models used as hinges between them.
Languages & Model Purposes
While engineering is not driven by models but by architectures, models do describe architectures. And since models are built with languages, one should expect different options depending on the nature of artifacts being described. Broadly speaking, three basic options can be considered:
Versatile and general modeling languages like UML can be tailored to different contexts and purposes, along the development cycle (requirements, analysis, design) as well as across perspectives (objects, activities, etc.) and domains (banking, avionics, etc.).
Non specific business modeling languages like BPM and rules-based languages are meant to be introduced upfront, even if their outcome can be used further down the development cycle.
Domain specific languages, possibly built with UML, are also meant to be introduced early so as to capture domain complexity. Yet, and contrary to languages like BPM, their purpose is to provide an integrated solution covering the whole development cycle.
Languages: general purpose (blue), process or domain specific (green), or design (brown).
As seen above for reuse and enterprise architecture, a revised MDA perspective clarifies the purpose of models and consequently the language options. With developments “driven by models”, code generation is the default option and nothing much is said about what should be shared and reused, and why. But with model contents aligned on architecture levels, purposes become explicit and modeling languages have to be selected accordingly, e.g:
Domain specific languages for integrated developments (PSM-centered).
BPM for business specifications to be implemented by software packages (CIM-centered).
UML for projects set across system functional architecture (PIM-centered).
The revised perspective and reasoned association between languages and architectures can then be used to choose development processes: projects that can be neatly fitted into single boxes can be carried out along a continuous course of action, others will require phased development models.
Enterprise Architecture & Engineering Processes
Systems engineering has to meet different kinds of requirements: business goals, system functionalities, quality of service, and platform implementations. In a perfect (model driven engineering) world there would be one stakeholder, one architecture, and one time-frame. Unfortunately, requirements are usually set by different stakeholders, governed by different rationales, and subject to changes along different time-frames. Hence the importance of setting forth the primary factors governing engineering processes:
Planning: architecture levels (business, systems, platforms) are governed by different time-frames and engineering projects must be orchestrated accordingly.
Communication: collaboration across organizational units requires traceability and transparency.
Governance: decisions across architecture levels and business units cannot be made upfront, so options and policies must be assessed continuously.
Those objectives are best supported when engineering processes are set along architecture levels:
Enterprise Architecture & Processes
Requirements: at enterprise level requirements deal with organization and business processes (CIMs). The enterprise requirements process starts with portfolio management, is carried on with systems functionalities, and completed with platforms’ operational requirements.
Problems Analysis: at enterprise level analysis deals with symbolic representations of the enterprise environment, objectives, and activities (PIMs). The enterprise analysis process starts with the consolidation of symbolic representations for objects (power-types) and activities (scenarios), is carried on with functional architectures, and completed with platforms’ non functional features. Contrary to requirements, which are meant to convey changes and bear adaptation (dashed lines), the aim of analysis at enterprise level is to consolidate symbolic representations and guarantee their consistency and continuity. As a corollary, analysis at system level must be aligned with its enterprise counterpart before functional (continuous lines) requirements are taken into account.
Solutions Design: at enterprise level design deals with operational concerns and resources deployment. The enterprise design process starts with locations and resources, is carried on with systems configurations, and completed with platforms deployments. Part of it is to be supported by systems as designed (PSMs) and implemented as platforms. Yet, as figured by dashed arrows, operational solutions designed at enterprise level bear upon the design of systems architectures and the configuration of their implementation as platforms.
When engineering is driven by architectures, processes can be devised depending on enterprise concerns and engineering contexts. While that could come with various terminologies, the partitioning principles will remain unchanged, e.g.:
Agile processes will combine requirements with development and bypass analysis phases (a).
Projects meant to be implemented by Commercial-Off-The-Shelf Software (COTS) will start with business requirements, possibly using BPM, then carry on directly to platform implementation, bypassing system analysis and design phases (b).
Changes in enterprise architecture capabilities will be rooted in analysis of enterprise objectives, possibly but not necessarily with inputs from business and operational requirements, continue with analysis and design of systems functionalities, and implement the corresponding resources at platform level (c).
Projects dealing with operational concerns will be conducted directly through systems design and platform implementation (d).
Examples of process templates depending on objectives and contexts.
To conclude, when architecture is reinstated as the primary factor, the MDA paradigm becomes a pivotal component of enterprise architecture, as it provides a clear understanding of architecture divides and dependencies on the one hand, and of their relationship with engineering processes on the other.
Even in the thick of perplexing debates, enterprise architects often agree on the meaning of processes and services, the former set from a business perspective, the latter from a system one. Considering the rarity of such a consensus, it could be used to rally the different approaches around a common understanding of some of EA’s objectives.
Process with service (Robert Capa)
A Governing Dilemma
Systems have long been of three different species that communicated but didn't interbreed: information ones were calmly processing business records, industrial ones were tensely controlling physical devices, and embedded ones lived their whole life stowed away within devices. Lately, and contrary to the natural law of evolution, those three species have started to merge into a versatile and powerful new breed keen to colonize the whole of the enterprise ecosystem.
When faced with those pervading systems, enterprises usually waver between two policies, containment or integration, the former struggling to weld and confine all systems within technology boundaries, the latter trying to fragment them and share out the pieces among whichever business units are ready to take charge.
While each approach may provide acceptable compromises in some contexts, both suffer critical flaws:
Centralized solutions constrict business opportunities and innovation by putting all concerns under a single unwieldy lid of technical constraints.
Federated solutions rely on integration mechanisms whose increasing size and complexity put the whole of systems integrity and adaptability on the line.
Service oriented architectures may provide a way out of this dilemma by introducing a functional bridge between enterprise governance and systems architectures.
Separation of Concerns
Since governance is meant to be driven by concerns, one should first consider the respective rationales behind business processes and system functionalities, the former driven by contexts and opportunities, and the latter by functional requirements and platforms implementation.
While business processes usually involve various degrees of collaboration between enterprises, their primary objective is to fulfill each one’s very specific agenda, namely to beat the others and be the first to take advantage of market opportunities. That puts systems at the crossroads of a dual perspective: from a business point of view they are designed to provide a competitive edge, but from an engineering standpoint they aim at standards and open infrastructures. Clearly, there is no reason to assume that those perspectives should coincide, one being driven by changes in competitive environments, the other by continuity and interoperability of systems platforms. That’s where Service Oriented Architectures should help: by introducing a level of indirection between business processes and system functionalities, services naturally allow for the mapping of requirements with architecture capabilities.
Services provide a level of indirection between business and system concerns.
Along that reasoning (and the corresponding requirements taxonomy), the design of services would be assessed in terms of optimization under constraints: given enterprise organization and objectives (business requirements), the problem is to maximize the business value of supporting systems (functional requirements) within the limits set by implementation platforms (non functional requirements).
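A minimal sketch of that level of indirection, with hypothetical names: the business process is written against a service abstraction, so that system functionalities (and their optimization under operational constraints) can vary behind it without affecting the business side.

```typescript
// Hypothetical sketch: a service as a level of indirection between
// a business process and system functionalities.

interface QuoteService {
  // Symbolic content only: what the business process needs to know.
  priceFor(productId: string, quantity: number): Promise<number>;
}

// Business process: written against the service, not against any platform.
async function prepareOffer(quotes: QuoteService, productId: string, quantity: number) {
  const price = await quotes.priceFor(productId, quantity);
  return { productId, quantity, price };
}

// One possible system functionality behind the service; others could be
// substituted as platforms or non functional constraints change.
class FlatRateQuoteService implements QuoteService {
  constructor(private unitPrice: number) {}

  async priceFor(_productId: string, quantity: number): Promise<number> {
    return this.unitPrice * quantity;
  }
}
```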
Services & Capabilities
Architectures and processes are orthogonal descriptions, respectively of enterprise assets and of activities. Looking for the footprint of supporting systems, the first step is to consider how business processes should refer to architecture capabilities:
From a business perspective, i.e. disregarding supporting systems and platforms, processes can be defined in terms of symbolic objects, business logic, and the roles of agents, devices, and systems.
The functional perspective looks at the role of supporting systems; as such, it is governed by business objectives and subject to technical constraints.
From a technical perspective, i.e. disregarding the symbolic contents of interactions between systems and contexts, operational processes are characterized by the nature of interfaces (human, devices, or other systems), locations (centralized or distributed), and synchronization constraints.
Service oriented architectures typify the functional perspective by factoring out the symbolic contents of system functionalities, introducing services as symbolic hinges between enterprise and system architectures. And when defined in terms of customers, messages, contract, policy, and endpoints, services can be directly mapped to architecture capabilities.
Services are a perfect match for capabilities
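For illustration, the five facets could be captured in a single service description; the type and field names below are assumptions meant only to make the mapping to capabilities concrete.

```typescript
// Hypothetical sketch: a service description whose facets map to architecture capabilities.

interface ServiceDescription<Req, Rep> {
  customers: string[];                      // who is entitled to call (roles, not physical agents)
  messages: {                               // symbolic objects exchanged
    request: (raw: unknown) => Req;         // validation / parsing of incoming flows
    reply: (value: Rep) => unknown;         // serialization of outgoing flows
  };
  contract: (request: Req) => Promise<Rep>; // business logic: processing of symbolic flows
  policy: {                                 // operational concerns
    timeoutMs: number;
    retries: number;
  };
  endpoints?: string[];                     // set later, at architecture level, independently of business requirements
}
```

Note that endpoints are left optional here, anticipating the point made below that service locations are set at architecture level rather than by business requirements.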
Moreover, with services defined in terms of architecture capabilities, the divide between business and operational requirements can be drawn explicitly:
Actual (external) entities and their symbolic counterpart: services only deal with symbolic objects (messages).
Actual entities and their roles: services know nothing about physical agents, only about symbolic customers.
Business logic and processes execution: contracts deal with the processing of symbolic flows, policies deal with operational concerns.
External events and system time: service transactions are ACID, i.e. from the customer’s standpoint they appear to be timeless.
Those distinctions are used to factor out the common backbone of enterprise and system architectures, and as such they play a pivotal role in their alignment.
Anchoring Business Requirements to Supporting Systems
Business processes are meant to meet enterprise objectives given contexts and resources. But if the alignment of enterprise and system architectures is to be shielded from changes in business opportunities and platforms implementation, system functionalities will have to support a wide range of shifting business goals while securing the continuity and consistency of shared resources and communication mechanisms. In order to conciliate business changes with system continuity, business processes must be anchored to objects and activities whose identity and semantics are set at enterprise level independently of the part played by supporting systems:
Persistent units (aka business objects): structured information uniquely associated with identified individuals in the business context. The life cycle and integrity of symbolic representations must be managed independently of business processes execution.
Functional and execution units: structured activity triggered by an event identified in the business context, whose execution is bound to a set of business objects. The state of symbolic representations must be managed in isolation for the duration of process execution.
Services can be defined according to persistency and functional units (#)
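A sketch, under assumed names, of the two kinds of units: a persistency unit whose identity and life cycle are managed independently of process execution, and an execution unit triggered by a business event and bound to a set of business objects for the duration of its run.

```typescript
// Hypothetical sketch of persistency and execution units.

// Persistency unit: identity and life cycle managed independently of processes.
interface PersistencyUnit {
  readonly id: string;            // identity set in the business context
  state: Record<string, unknown>; // structured information (symbolic representation)
}

// Execution unit: triggered by a business event, bound to business objects,
// its state isolated for the duration of the execution.
interface ExecutionUnit {
  readonly triggeringEvent: string;
  readonly boundObjects: ReadonlyArray<PersistencyUnit["id"]>;
  run(load: (id: string) => Promise<PersistencyUnit>): Promise<void>;
}
```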
The coupling between business units (persistent or transient) identified at business level and their system counterpart can be secured through services defined with regard to business processes (customers), business objects (messages), business logic (contract), and business operations (policy).
It must be noted that while service specifications for customers, messages, contracts, and policy are identified at business level and completed at functional level, that’s not the case for endpoints, since service locations are set at architecture level independently of business requirements.
Filling out Functional Requirements
Functional requirements are set in two dimensions, symbolic and operational; the former deals with the contents exchanged between business processes and supporting systems with regard to objects, activities and events, or actors; the latter deals with the actual circumstances of the exchanges: locations, interfaces, execution constraints, etc.
Given that services are by nature shared and symbolic, they can only be defined between systems. As a corollary, when functionalities are slated as services, a clear distinction should be maintained between the symbolic contents exchanged between business processes and supporting systems, and the operational circumstances of actual interactions with actors.
Interactions: symbolic and local (a), non symbolic and local (b), symbolic and shared (c).
Depending on the preferred approach for requirements capture, symbolic contents can be specified at system boundaries (e.g use cases), or at business level (e.g users’ stories). Regardless, both approaches can be used to flesh out the symbolic descriptions of functional and persistency units.
From a business process standpoint, users (actors in UML parlance) should not be seen as agents but as the roles agents play in enterprise organization, possibly with constraints regarding the quality of service at entry points. That distinction between agents and roles is critical if functional architectures are to dissociate changes in business processes on the one hand from changes in platform implementation on the other.
Along that understanding, actors triggering use cases (aka primary actors) can represent the performance of human agents as well as devices or systems. Yet, as far as symbolic flows are concerned, only human agents and systems are relevant (devices have no symbolic capabilities of their own). On the receiving end of use cases (aka secondary actors), only systems are to be considered for supporting services.
Mapping Processes to Services (through Use Cases)
Hence, when requirements are expressed through use cases, and assuming they are to be realized (fully or partially) through services (see the sketch after the list):
Persistency and functional units identified by business processes would be mapped to messages and contracts.
Business processes would fit service policy.
Use case containers (aka systems) would be registered as service customers.
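The mapping referred to above could be sketched as follows; every name is hypothetical and the structure is only meant to make the correspondence explicit.

```typescript
// Hypothetical sketch: deriving a service specification from a use case.

interface UseCase {
  container: string;            // the system hosting the use case
  process: string;              // the business process it supports
  persistencyUnits: string[];   // business objects identified by the process
  functionalUnits: string[];    // business logic identified by the process
}

interface ServiceSpec {
  customers: string[];          // use case containers registered as customers
  messages: string[];           // from persistency units
  contracts: string[];          // from functional units
  policy: string;               // from the business process
  endpoints?: string[];         // left open: set at architecture level
}

function fromUseCase(uc: UseCase): ServiceSpec {
  return {
    customers: [uc.container],
    messages: uc.persistencyUnits,
    contracts: uc.functionalUnits,
    policy: uc.process,
  };
}
```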
Alternatively, when requirements are set from users’ stories instead of use cases, persistency and functional units have to be elicited through stories, traced back to business processes, and consolidated into features. Those features will then be mapped to system functionalities, possibly, but not necessarily, implemented as services (see the sketch below).
Mapping Processes to Services (through Users’ Stories)
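By contrast, the users’ stories path could be sketched as follows (again with hypothetical names): units are elicited from stories, traced back to processes, and consolidated into features before any decision about services is made. The single-feature consolidation below is a deliberate simplification.

```typescript
// Hypothetical sketch: consolidating users' stories into a feature.

interface UserStory {
  process: string;              // the business process the story is traced back to
  persistencyUnits: string[];   // business objects elicited from the story
  functionalUnits: string[];    // business logic elicited from the story
}

interface Feature {
  processes: Set<string>;
  persistencyUnits: Set<string>;
  functionalUnits: Set<string>;
}

// Consolidate related stories into one feature; the feature may or may not
// be implemented as a service later on.
function consolidate(stories: UserStory[]): Feature {
  const feature: Feature = {
    processes: new Set(),
    persistencyUnits: new Set(),
    functionalUnits: new Set(),
  };
  for (const story of stories) {
    feature.processes.add(story.process);
    story.persistencyUnits.forEach((u) => feature.persistencyUnits.add(u));
    story.functionalUnits.forEach((u) => feature.functionalUnits.add(u));
  }
  return feature;
}
```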
Hence, while the mapping of business objects and logic respectively to messages and contracts will be similar with use cases and users’ stories, paths will differ for customers and policies:
Given that use cases deal explicitly with interactions at system boundaries, they represent a primary source of requirements for services’ customers and policy. Yet, as services are not supposed to be directly affected by interactions at systems boundaries, those elements would have to be consolidated across use cases.
Users’ stories, for their part, are told from a business process perspective that may take boundaries and actors into account but is not focused on them. From that standpoint, it should be possible to define customer and policy requirements for services independently of the contingencies of local interactions.
In both cases, it would be necessary to factor out the non symbolic (aka non functional) part of requirements.
Non Functional Requirements: Quality of Service and System Boundaries
Non functional requirements are meant to set apart the constraints on systems’ resources and performances that could be dealt with independently of business contents. While some may target specific business applications, and others encompass a broader range, the aim is to separate business from architecture concerns and allocate the responsibilities (specification, development, and acceptance) accordingly.
Assuming an architecture of services aligned on capabilities, the first step would be to sort operational constraints:
Customers: constraints on usability, customization, response time, availability, …
Messages: constraints on scale, confidentiality, compliance with regulations, …
Contracts: constraints on scale, confidentiality, …
Non functional constraints may cut across services and capabilities
Since constraints may cut across services and capabilities, non functional requirements are not a given but the result of explicit decisions (see the sketch after the list) about:
Architecture level: should the constraint be dealt with locally (interfaces), at functional level (services), or at technical level (resources)?
Services: when set at functional level, should the constraint be dealt with by business services (e.g. domain or activity) or by architecture ones (e.g. authorization or orchestration)?
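Those decisions could be recorded, for illustration, as explicit annotations on each constraint; the categories mirror the text above and every name and example value is an assumption.

```typescript
// Hypothetical sketch: non functional constraints recorded as explicit decisions.

type Capability = "customers" | "messages" | "contracts";
type ArchitectureLevel = "local (interfaces)" | "functional (services)" | "technical (resources)";
type ServiceKind = "business (domain or activity)" | "architecture (authorization, orchestration)";

interface NonFunctionalConstraint {
  description: string;
  capability: Capability;   // which capability the constraint cuts across
  level: ArchitectureLevel; // where it is decided to deal with it
  service?: ServiceKind;    // only relevant when dealt with at functional level
}

// Example decisions, traceable because they are aligned on capabilities:
const constraints: NonFunctionalConstraint[] = [
  { description: "response time under two seconds", capability: "customers", level: "local (interfaces)" },
  { description: "confidentiality of customer data", capability: "messages",
    level: "functional (services)", service: "architecture (authorization, orchestration)" },
  { description: "peak volumes at end of quarter", capability: "contracts", level: "technical (resources)" },
];
```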
The alignment of services with architecture capabilities will greatly enhance the traceability and rationality of those decisions.