Functions map inputs to outputs as expected from a performing agent, whatever its nature (concrete or abstract) or means (physical, logical, or magical).
They are not to be confused with objectives (which don’t necessarily specify performing agents or detail inputs), nor with activities (which purport to describe concrete execution paths).
Functions and Processes are set along orthogonal dimensions (Simon Fujiwara)
Functions are complete (contrary to objectives) and abstract (contrary to activities) descriptions of what organizations (represented by actors), system architectures (represented by services), or objects (through operations) can do. As such they are akin to interfaces or types, and cannot be instantiated on their own. Processes on the contrary describe how activities are executed, i.e. instantiated.
Business processes describe sets of execution instances. Functions describe what can be expected from enterprise or functional architectures. Business logic describes how the flows are to be processed.
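The analogy with interfaces and types can be sketched in TypeScript (all names are hypothetical): the function is a contract that cannot be instantiated on its own, while a process supplies a concrete execution path for it.

```typescript
// A business function: an abstract contract mapping inputs to outputs.
// Like an interface, it cannot be instantiated on its own.
interface CreditScoring {
  score(income: number, debt: number): number;
}

// A process supplies a concrete execution path for the function;
// each call is an execution instance.
class ManualScoring implements CreditScoring {
  score(income: number, debt: number): number {
    return income <= 0 ? 0 : Math.max(0, 1 - debt / income);
  }
}

const scoring: CreditScoring = new ManualScoring();
scoring.score(100, 25); // a concrete execution instance
```

Several processes could realize the same function contract, which is precisely what makes functions reusable across architectures.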
That understanding provides for a modular approach to business processes:
Business processes can be defined with regard to business functions independently of the way they are supported.
Business rules can be managed independently of the way they are applied, by people or systems.
Business logic can be factored out in functions (business or systems) or set within specific processes.
Yet that would not be possible without some modeling across enterprise architecture layers.
Functions & Models
Functions are meant to facilitate reuse across enterprise architectures, which entails descriptions that are clearly and easily accessible: context, modus operandi, expected outcome. Whatever the modeling method(s) in use, it’s safe to assume that different stakeholders across enterprise architectures will pursue different objectives, to be defined with different concepts. If they are to communicate they will need some explicit and unambiguous semantics for the links between processes, functions, and activities:
Functional flows are used between processes and functions (a) or actors (d), or between actors and functions (e).
Composition or aggregates are used to specify where the business logic is to be employed, by functions (b) or by processes (c).
Documentation references (f) are used between unspecified actors and business logic, in case it would be performed by people.
Semantics of connectors: functional flows (a,d,e), aggregates (b) and composition (c), and documentation (f).
Finally, the semantics of connectors used between functions will have to be consistent with those used to connect them to processes and activities.
Combining Functions
Considering that functions are neatly set within the systems modeling realm, one would assume that inheritance and structure connectors can be used to detail and combine them. Yet, since functions cannot be instantiated, some paring down can be applied to their semantics:
Traditional structure connectors are set with regard to identification: bound to the structure for composition, set independently otherwise. Since functions have no instances, that criterion is irrelevant, and the same reasoning goes for composition.
Likewise, since functions have no states to be considered, inheritance of functions can be represented by aggregates.
Functions can be combined at will using only aggregates
As far as functions are concerned, structures as well as inheritance connectors can be fully and soundly replaced by aggregate ones, which could significantly improve the mapping of business processes, activities, and supporting functions.
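That substitution can be sketched with TypeScript intersection types (names hypothetical): since function contracts carry neither identity nor state, aggregating them loses nothing that composition or inheritance would add.

```typescript
// Functions declared as contracts, with no instances or states of their own
interface Quote { quote(item: string): number; }
interface Reserve { reserve(item: string): boolean; }

// Aggregation: the combined function is simply the sum of expectations;
// no composite identity, shared state, or life cycle is implied.
type OrderDesk = Quote & Reserve;

// An actor realizing the aggregate (logic is placeholder only)
const desk: OrderDesk = {
  quote: (item) => item.length * 10,   // placeholder pricing logic
  reserve: (item) => item.length > 0,  // placeholder availability logic
};
```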
Whereas financial results are established on an annual basis, events nowadays unfold within a seamless space/time dimension:
Social networks put long-planned strategies at the mercy of consumers’ weekly whims, often unconcerned by borders or products’ intents.
Mining of big data may curtail innovative head-starts to a matter of weeks if not days.
On-line (not to mention high-frequency) trading puts market sanction at investors’ fingertips, and enterprises’ stocks at predators’ claws.
So why should enterprises bother with yearly schedules?
Evolutionary Arms Race Doesn’t Wait for Saint Sylvester
Business is governed by the same rule as nature, namely the survival of the fittest, and as with biological ecosystems, enterprises’ individual fitness depends on their relationships with others. Hence, as suggested by Lewis Carroll’s Red Queen, the survival of enterprises in their evolutionary arms race doesn’t depend on their absolute speed set against some time-frame, but on the relative one set with regard to competitors. In that case Saint Sylvester could be ignored. Or should he? Because seasons do play their part in biological races.
From Race to Game
As it happens, business races are becoming more complicated as extensive and ubiquitous information and communication systems redefine the traditional predator-prey casting. With time and space cut down to symbolic dimensions, collaboration and competition can no longer be safely allocated to time-spans and market segments, which entails that the roles of predator and prey can be upended on the spur of a moment and the turn of a switch.
Introducing this kind of option transforms tracks into playgrounds and races into games because there is no point in running without knowing who to run against and who to run with. Adding to the challenge, these playgrounds are moving ones, and so are the rules of the game: one week a non-zero sum game, the next a winner-gets-all. That is when calendars make a come-back.
Games come with Boards and Time Scales
Contrary to races, games have comprehensive and detailed rules, or regulations in business parlance. As soon as enterprises start to play with roles they have to meet conditions, follow procedures, and take into account specific constraints; all defined with regard to institutional spaces and calendar time, e.g.:
Customers’ behavior is often seasonal.
Regulatory bodies rule within geographical borders and their decisions stand for calendar periods.
M&A have to align with stock markets’ schedules.
So, assuming that institutional time-frames cannot be avoided when strategies are set, Saint Sylvester may provide a practical meter for yearly moves.
When to Board a Shuttle
Enterprises have therefore to set their decision-making with regard to the accelerating pulse of business events on one hand, institutional time-boxes on the other hand. That corresponds to a typical shuttle scheme, with taking a decision being associated with boarding a shuttle.
Along that understanding, and assuming that taking a decision means closing alternative options, boarding a shuttle excludes some of the next one(s), and missing it may be beneficial as well as detrimental. That’s the reasoning behind the Last Responsible Moment principle which, when put to use for periodic decision-making, suggests that there can be as much risk in boarding a shuttle as in missing it.
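The trade-off can be illustrated with a toy model (numbers purely illustrative): the risk of premature commitment decreases as boarding is deferred, while the risk of losing the opportunity increases.

```typescript
// Toy model of the shuttle scheme: t in [0, 1] ranges from the first
// shuttle (0) to the last responsible moment (1). Numbers are made up.
function totalRisk(t: number, commitmentRisk: number, delayRisk: number): number {
  // early boarding forgoes options (commitment risk dominates);
  // late boarding risks losing the opportunity (delay risk dominates)
  return commitmentRisk * (1 - t) + delayRisk * t;
}

totalRisk(0, 0.8, 0.2); // boarding the first shuttle: risk 0.8
totalRisk(1, 0.8, 0.2); // waiting for the last responsible moment: risk 0.2
```

With the weights reversed, the balance tips the other way, which is the point: there is no universally best shuttle, only a best one for a given risk profile.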
Whereas enterprise architecture (EA) is a broadly recognized practical concern, there isn’t much of a consensus about it as a discipline. Hence the interest of figuring it out from practice.
Symbolic Maps vs Actual Territories (Marc Riboud)
Be Specific
Compared to the abundance of advice about EA management, there is a dearth of specifics about what is to be managed, apart from the Zachman framework. So the best approach is to begin with actual practice and try to characterize the specifics of what is actually done.
Separate Structures from Processes
Architecture is about shared assets whose life cycle is not limited to specific activities. Hence the need to set apart processes, which have to change depending on business environments and opportunities, and structures (e.g. organization and systems) whose life cycle is meant to be set along a corporate time frame.
Separate Symbolic from Actual
To be of any use EA has to rely on some consensus regarding what is to be managed, and by whom. That can only be achieved if some distinction is kept between symbolic descriptions (the equivalent of blueprints) of information and processes on one hand, actual objects (e.g. legacy) or activities on the other hand. That distinction between maps and territories can be seen as the cornerstone of EA as a discipline.
Add Time Frames
At the end of the day success will be decided by the fruitful combination of enterprise assets (financial, physical, logical, human) and business context and objectives. Defining their respective life cycles and planning the necessary changes could be seen as the primary responsibility of enterprise architects.
Add Responsibilities
Last and least, allocating responsibilities is probably better carried out on a case-by-case basis depending on each organization and its corporate culture.
That’s All
Those few principles may seem unassuming but they provide a sound basis that takes full advantage of what staff and management know about their enterprise resources and practices. And never forget that continuity is a critical factor of EA success.
Frameworks are meant to abet the design and governance of enterprises’ organization and systems, not to add a methodological layer of complexity. If that entry level is to be attained, preconditions are to be checked for comprehensiveness, modularity, clarity of principle, and consistency.
Modularity & Clarity (Cildo Meireles)
Meeting core EA framework requirements will in turn greatly facilitate declarative and iterative approaches to enterprise architecture.
Comprehensiveness
The primary objective of an enterprise framework is to bring under a common management roof different contexts and concerns (business, technical, organizational), and synchronize their respective time-frames. That can only be achieved through an all-inclusive and unified conceptual perspective.
Suggested check: Variants for core concepts like agents or events must be clearly defined at enterprise and system levels; e.g. people (agents with identity and organizational status), roles (organization and access to systems), and bots (software agents without identity and organizational status).
Modularity
On one hand enterprise frameworks must deal with strategic issues without being sidetracked by enterprises’ idiosyncrasies. On the other hand swift and specific adaptations to changing environments should not be hampered by cumbersome procedures or a steep learning curve. That can only be achieved by lean and versatile frameworks built from a clear and compact set of architecture artifacts, to be readily extended, specialized, or implemented through the enactment of dedicated processes.
Suggested check: how the framework would further the development of a new business, facilitate the merging of organizations, or support the transition to a new architecture (e.g. SOA).
Clarity of Principle
Comprehensiveness and modularity are pointless without a principled backbone supporting incremental changes and a smooth learning curve. For that purpose a clear separation should be maintained between the semantics of the core patterns used to describe architectures and the processes to be carried out for their evolution.
Suggested check: The meaning of primary terms (event, role, activity, etc) should be uniquely and unambiguously defined based on the core framework principles, independently of the processes using them.
Consistency
EA frameworks should be more compass than textbook, drawing clear lines of action before providing details of implementation. Lest architects be lost in compilations of ambiguous or overlapping definitions and rules, core meanings must remain unaffected when put to use across the framework.
Suggested check: Carry out a comprehensive search for a sample of primary terms (e.g event, role, activity, etc.), list and compare the different (if any) definitions, and verify that they can be boiled down to a unique and unambiguous one.
EA & Model Based Systems Engineering
These basic requirements get their full meaning when set in the broader context of EA evolution. Contrary to their IT component, enterprise architectures cannot be reduced to planned designs but grow from a mix of organization, culture, business environments, technology constraints, and strategic planning. As EA evolution is by nature incremental, supporting frameworks should provide for iterative development based on declarative knowledge of their organizational or technical constituents. That could be achieved by combining EA with model based systems engineering.
Given the ubiquity of information and communication technologies on one hand, the falling apart of technical fences between systems, enterprises, and business environments on the other hand, applying the operating system (OS) paradigm to enterprise architectures seems a logical move.
Users and access to services (Queuing at a Post Office in French West Indies)
Borrowing the blueprint of computer operating systems, enterprise operating systems (EOS) would be organized around a kernel managing shared resources (people, hardware, and software) and providing services to business, engineering, or operational processes.
Gerrymandering & Layers
When IT was neatly fenced behind computer screens managers could keep a clear view on organization, roles, and responsibilities. But with physical hedges replaced by clouded walls, the risk is that IT may appear as the primary constituent of enterprise architecture. Given the lack of formal arguments against what may be a misguided understanding, enterprise architects have to rely on pragmatic answers. Yet, they could prop up their arguments by upending the very principles of IT operating systems and restore the right governance footprint.
To begin with, turfs must be reclaimed, and that can be done if the whole of assets and services are layered according to the nature of problems and solutions: business processes (enterprise), supporting functionalities (systems), and technologies (platforms).
EA must separate and federate concerns along architecture layers
Then, reclaiming must also include governance, and for that purpose EOS are to rely on a comprehensive and consistent understanding of assets, people and mechanisms across layers:
Physical assets, including hardware.
Non physical assets, including software.
Agents (identified people with organizational responsibilities) and roles.
Events (changes in the state of objects, processes, or expectations) and activities.
Mimicking traditional OS, that could be achieved with a small and compact conceptual kernel of formal concepts bearing out the definitions of primitives and services for the whole of enterprise processes.
EOS’s Kernel: 12 concepts
A wealth of definitions may be the main barrier to enterprise architecture as a discipline because such profusion necessarily comes with overlaps, ambiguities, and inconsistencies. Hence the benefit of relying on a small set of concepts covering the whole of enterprise systems:
Six for individual elements, actual (objects, events, processes) or symbolic (surrogate objects, activities, roles).
One for actual (locations) or symbolic (package) containers.
One for the partitioning of behaviors (branch) or surrogates (power type).
Four for actual (channels and synchronization) and symbolic (references and flows) connectors.
Governance calls for comprehensive and consistent semantics
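The tally can be sketched as a data structure (the grouping follows the enumeration above; the representation and names are illustrative only):

```typescript
// Twelve kernel concepts: 6 individuals + 1 container + 1 partition + 4 connectors
const kernel = {
  individuals: ["object", "event", "process",       // actual
                "surrogate", "activity", "role"],   // symbolic
  containers:  ["container"],  // actual (location) or symbolic (package)
  partitions:  ["partition"],  // behaviors (branch) or surrogates (power type)
  connectors:  ["channel", "synchronization",       // actual
                "reference", "flow"],               // symbolic
};

const kernelSize = kernel.individuals.length + kernel.containers.length
                 + kernel.partitions.length + kernel.connectors.length; // 12
```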
Considering that nowadays business entities (enterprise), services (systems), and software components (technology) share the same distributed world, these concepts have to keep some semantic consistency across layers whatever their lexical avatars. To mention two critical examples, actors (aka roles) and events must be consistently understood by business and system analysts.
Those concepts are used to describe enterprise systems building blocks which can be combined with a small set of well known syntactic operators:
Two types of connectors depending on target: instances (associations) or types (inheritance).
Three types of connection: nondescript, aggregation, and composition.
Syntactic operators are meant to be applied independently of targets semantics
Again, Occam’s razor should be the rule: just like semantics are consistently defined across architecture layers, the same syntactic operators are to be uniformly applied to artifacts independently of their semantics.
Kernel’s Functions
Continuing with the kernel analogy, based on a comprehensive and consistent description of resources, the traditional OS functions can be reinterpreted with regard to architecture capabilities implemented across layers:
What: memory of business objects and operations (enterprise), database logical entities (systems), database physical records (platforms).
When: business events and processes (enterprise), transaction management (systems), and middleware (platforms).
Where: sites (enterprise), logical processing units (systems), network architecture (platforms).
How: business processes (enterprise), applications (systems), and programs (platforms).
Traceability of Capabilities across architecture layers
That fits with the raison d’être of a kernel which is to combine core functions in order to support the services called by processes.
Services
Still milking the OS analogy, a primary goal of an enterprise kernel is to support a seamless integration of services:
Business driven: the definition of services must be directly and unambiguously associated with business ends and means across enterprise layers.
Traceability: they must ensure the transparency of the tie-ups between organization and processes on one hand, information systems on the other hand.
Plasticity: they must facilitate the alignment of changes in business objectives, organization and supporting systems.
A reasoned way to achieve these objectives is to classify services with regard to the purpose of calling processes:
Business processes deal with the transactions between the enterprise and its environment.
Engineering processes deal with the development of enterprise resources independently of their use.
Operational processes deal with the management of enterprise resources when directly or indirectly used by business processes.
Enterprise Operating System: Layers & Services
That classification can then be crossed with architecture levels:
At enterprise level services are bound to assets to be used by business, engineering, or operational processes.
At systems level services are bound to functions supporting business, engineering, or operational processes.
At platform level services are bound to resources used by business, engineering, or operational processes.
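The crossing of process purposes with architecture levels can be sketched as a small grid (names illustrative):

```typescript
type ProcessKind = "business" | "engineering" | "operational";
type Layer = "enterprise" | "systems" | "platforms";

// What services are bound to at each level, whatever the calling process
const boundTo: Record<Layer, string> = {
  enterprise: "assets",
  systems: "functions",
  platforms: "resources",
};

// Nine combinations in all: three process kinds times three levels
const grid: string[] = [];
for (const layer of ["enterprise", "systems", "platforms"] as Layer[])
  for (const kind of ["business", "engineering", "operational"] as ProcessKind[])
    grid.push(`${kind} process -> ${boundTo[layer]} (${layer})`);
```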
As services will usually rely on different functions across layers, the complexity will be dealt with by kernel primitives and masked behind interfaces.
Services called by processes can combine different functions directly (basic lines) or across layers (dashed lines).
Finally, that organization of services along architecture layers may be aligned with governance levels: strategic for enterprise assets, tactical for systems functionalities, and operational for platforms and resources.
At enterprise level agility can be understood as a mix of versatility and plasticity, the former an attribute of function, the latter of form:
Versatility: enterprise ability to adapt business processes to changing environments without having to change architectures.
Plasticity: enterprise ability to change architectures without affecting business processes.
Agility: Forms & Performances (P. Pénicaud)
Combining versatility and plasticity requires a comprehensive and consistent view of assets (architectures) and modus operandi (processes) organized with regard to change. And that can be achieved with model based systems engineering (MBSE).
MBSE & Change
Agility is all about change, and if enterprise governance is not to be thrown aside decision-making has to be supported by knowledgeable descriptions of enterprise objectives, assets, and organization.
If change management is to be the primary objective, targets must be classified along two main distinctions:
Actual (business context and organization) or symbolic (information systems).
Objects (business entities or system surrogates) or activities (business processes or logic).
Comprehensive and consistent descriptions of actual and symbolic assets (architectures) and modus operandi (processes) with regard to change management.
The two axes determine four settings supporting transparency and traceability:
Dependencies between operational and structural elements.
Dependencies between actual assets and processes and their symbolic representation as systems surrogates.
Versatility and plasticity will be obtained by managing changes and alignments between settings.
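The two axes and their four settings can be sketched as a cross-product (labels taken from the text; the representation is illustrative):

```typescript
type Nature = "actual" | "symbolic";
type Kind = "objects" | "activities";

// Representative content of each of the four settings
const examples: Record<Nature, Record<Kind, string>> = {
  actual:   { objects: "business entities", activities: "business processes" },
  symbolic: { objects: "system surrogates", activities: "business logic" },
};

// The four settings are simply the cross-product of the two axes
const settings: Array<[Nature, Kind]> = [];
for (const n of ["actual", "symbolic"] as Nature[])
  for (const k of ["objects", "activities"] as Kind[])
    settings.push([n, k]);
```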
Changes & Alignments
Looking for versatility, changes in users’ requirements must be rapidly taken into account by applications (changes from actual to symbolic).
Looking for plasticity, changes in business objectives are meant to be supported by enterprise capabilities (changes from operational to structural).
The challenge is to ensure that both threads can be woven together into business functions and realized by services (assuming a service oriented architecture).
With the benefits of MBSE, that could be carried out through a threefold alignment:
At users level the objective is to ensure that applications are consistent with business logic and provide the expected quality of service. That is what requirements traceability is meant to achieve.
At system level the objective is to ensure that business functions and features can be directly mapped to systems functionalities. That is what service oriented architectures (SOA) are meant to achieve.
At enterprise level the objective is to ensure that the enterprise capabilities are congruent with its business objectives, i.e that they support its business processes through an effective use of assets. That is what maturity and capability models are meant to achieve.
Versatility comes from users’ requirements, plasticity from architectures capabilities.
That would make agility a concrete endeavor across enterprise, from business users and applications to business processes and architectures capabilities.
As far as systems engineering is concerned, the aim of a feasibility study is to verify that a business solution can be supported by a system architecture (requirements feasibility) subject to some agreed technical and budgetary constraints (engineering feasibility).
Feasibility is about requirements, capabilities with supporting systems (Urs Fisher)
Where to Begin
A feasibility study is based on the implicit assumption of slack architecture capabilities. But since capabilities are set with regard to several dimensions, architectures’ boundaries cannot be taken for granted and decisions may even entail some arbitration between business requirements and engineering constraints.
Using the well-known distinction between roles (who), activities (how), locations (where), control (when), and contents (what), feasibility should be considered for supporting functionalities (between business processes and systems) and implementation (between functionalities and platforms):
Feasibility with regard to Systems and Platforms
Depending on priorities, feasibility could be considered from three perspectives:
Focusing on system functionalities (e.g. with use cases) implies that system boundaries are already identified and that the business logic will be defined along with users’ interfaces.
Starting with business requirements puts business domains and logic in the driving seat, making room for variations in system functionalities and boundaries.
Operational requirements (physical environments, events, and processes execution) put the emphasis on a mix of business processes and quality of service, thus making software functionalities a dependent variable.
In any case a distinction should be made between requirements and engineering feasibility, the former set with regard to architecture capabilities, the latter with regard to development resources and budget constraints.
Functional capabilities are defined at system boundaries and, if all feasibility options are to be properly explored, architecture capabilities must be understood as a trade-off between five intrinsic factors, e.g.:
Security (entry points) and confidentiality (domains).
Compliance with regulatory constraints (domains) and flexibility (activities).
Reliability (processes) and interoperability (locations).
Feasible options must be set against capabilities
Feasible options could then be figured out by points set within the capabilities pentagon. Given metrics on functional requirements, their feasibility under the non functional constraints could be assessed with regard to cross capabilities. And since the same five core capabilities can be consistently defined across enterprise, systems, and platforms layers, requirements feasibility could be assessed without prejudging architecture decisions.
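A minimal sketch of such an assessment (capacities and scales are made-up numbers, not a prescribed metric): a requirement is feasible when every dimension it needs stays within the capabilities pentagon.

```typescript
// The five capability dimensions (who/how/where/when/what); the capacity
// values are purely hypothetical, standing in for actual metrics.
type Dimension = "who" | "how" | "where" | "when" | "what";
const capacity: Record<Dimension, number> = { who: 3, how: 2, where: 3, when: 5, what: 4 };

// A requirement is feasible if every dimension it needs stays within capacity
function feasible(needs: Partial<Record<Dimension, number>>): boolean {
  return (Object.keys(needs) as Dimension[]).every(d => (needs[d] ?? 0) <= capacity[d]);
}

feasible({ how: 2, when: 4 }); // inside the pentagon
feasible({ how: 3 });          // exceeds the functional capacity
```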
Business Requirements & Architecture Capabilities
One step further, the feasibility of business and operational objectives (the “Why” of the Zachman framework) can be more easily assessed if set on the outer range and mapped to architecture capabilities.
Business Requirements and Architecture Capabilities
Engineering Feasibility & ROI
Finally, the feasibility of business and functional requirements under the constraints set by non functional requirements has to be translated in terms of ROI, and for that purpose the business value has to be compared to the cost of engineering the solution given the resources (people and tools), technical requirements, and budgetary constraints.
ROI assessment mapping business value against functionalities, engineering outlays, and operational costs.
That is where the transparency and traceability of capabilities across layers may be especially useful when alternatives and priorities are to be considered mixing functionalities, engineering outlays, and operational costs.
As can be understood from their theoretical basis (Pi-Calculus, Petri Nets, or State Machines), processes are meant to describe the concurrent execution of activities. Assuming that large enterprises have to keep a level of indirection between operations and business logic, it ensues that activities and business logic should be defined independently of the way they are executed by business processes.
Communication Semantics vs Contexts & Contents (G. Winogrand)
For that purpose two basic modeling approaches are available: BPM (Business Process Modeling) takes the business perspective, UML (Unified Modeling Language) takes the engineering one. Yet, each falls short with regard to a pivotal conceptual distinction: BPM lumps together process execution and business logic, and UML makes no difference between business and software process execution. One way out of the difficulty would be to single out communications between agents (humans or systems), and specify interactions independently of their contents and channels.
Business Process: Communication + Business Logic
That could be achieved if communication semantics were defined independently of domain-specific languages (for information contents) and technical architecture (for communication channels). As it happens, and not by chance, the outcome would neatly coincide with use cases.
Linguistics & Computers Parlance
Business and functional requirements (see Requirements taxonomy) can be expressed with formal or natural languages. Requirements expressed with formal languages, domain-specific or generic, can be directly translated into some executable specifications. But when natural languages are used to describe what business expects from systems, requirements often require some elicitation.
When that challenge is put into a linguistic perspective, two schools of thought can be considered, computational or functional.
The former approach is epitomized by Chomsky’s Generative Grammar and its claim that all languages, natural or otherwise, share an innate universal grammar (UG) supporting syntactic processing independently of their meanings. Nowadays, and notwithstanding its initial favor, that “computer friendly” paradigm hasn’t kept much traction in general linguistics.
Alternatively, the functionalist school of thought considers linguistics as a general cognitive capability deprived of any autonomy. Along that reasoning there is no way to separate domain-specific semantics from linguistic constructs, which means that requirements complexities, linguistic or business specific, have to be dealt with as a lump, with or without the help of knowledgeable machines.
In between, a third approach has emerged that considers language as a functional system uniquely dedicated to communication and the mapping of meanings to symbolic representations. On that basis it should be possible to separate the communication apparatus (functional semantics) from the complexities of business (knowledge representation).
Processes Execution & Action Languages
Assuming the primary objective of business processes is to manage the concurrent execution of activities, their modeling should be driven by events and their consequences for interactions between business and systems. Unfortunately, that approach is not explicitly supported by BPM or UML.
Contrary to the “simplex” mapping of business information into corresponding data models (e.g using Relational Theory), models of business and systems processes (e.g Petri Nets or State Machines) have to be set in a “duplex” configuration as they are meant to operate simultaneously. Neither BPM nor UML are well equipped to deal with the task:
The BPM perspective is governed by business logic independently of interactions with systems.
Executable UML approaches are centered on software process execution, and their extensions to action semantics deal essentially with class instances, feature values, and object states.
Such shortcomings are of no serious consequence for stand-alone applications, i.e. when what happens at architecture level can be ignored; but overlooking the distinction between the respective semantics of business and software process execution may critically hamper the usefulness, and even the validity, of models purporting to represent distributed interactions between users and systems. Communication semantics may help to deal with the difficulty by providing relevant stereotypes and patterns.
Business Process Models
While business process models can (and should) also be used to feed software engineering processes, their primary purpose is to define how concurrent business operations are to be executed. As far as systems engineering is concerned, that will tally with three basic steps:
Dock process interactions (aka sessions) to their business context: references to agents, business objects and time-frames.
Specify interactions: references to context, roles, events, messages, and time related constraints.
Specify information: structures, features, and rules.
Communication with systems: dock to context (1), interactions (2), information (3).
Although modeling languages and tools usually support those tasks, the distinctions remain implicit, leaving users with the choice of semantics. In the absence of explicit guidelines confusion may ensue, e.g between business rules and their use by applications (BPM), or between business and system events (UML). Hence the benefits of introducing functional primitives dedicated to the description of interactions.
Such functional semantics can be illustrated by the well known CRUD primitives for the creation, reading, updating and deletion of objects, a similar approach being also applied to the design of domain specific patterns or primitives supported by functional frameworks. While thick on the ground, most of the corresponding communication frameworks deal with specific domains or technical protocols without factoring out what pertains to communication semantics independently of information contents or technical channels.
Communication semantics should be independent of business specific contents and systems architectures.
But that distinction could be especially productive when applied to business processes, as it would become possible to fully separate the semantics of communications between agents and supporting systems on one hand, from the business logic used to process business flows on the other.
Communication vs Action Semantics
Languages have developed to serve two different purposes: first as a means to communicate (a capability shared between humans and many primates), and then as a means to represent and process information (a capability specific to humans). Taking a leaf from functional approaches to linguistics, it may be possible to characterize messages with regard to action semantics, more precisely the associated activity and attendant changes:
No change: messages relating to passive (objects) or active (performed activity) states.
Change: messages relating to achievement (no activity) or accomplishment (attendant on performed activity).
Basic action semantics for interactions between users (BPM) and systems (UML)
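The two bullets above form a two-by-two classification (change or no change, activity or no activity), which can be made explicit in a few lines; the enum names are illustrative labels for the four categories, not standard terminology:

```python
from enum import Enum

class ActionSemantics(Enum):
    """Four message categories along the change/activity axes."""
    OBJECT_STATE = "no change, passive state of an object"
    ACTIVITY_STATE = "no change, active state of a performed activity"
    ACHIEVEMENT = "change without attendant activity"
    ACCOMPLISHMENT = "change attendant on a performed activity"

def classify(change: bool, activity: bool) -> ActionSemantics:
    """Map a message's action semantics from its two characteristics."""
    if not change:
        return ActionSemantics.ACTIVITY_STATE if activity else ActionSemantics.OBJECT_STATE
    return ActionSemantics.ACCOMPLISHMENT if activity else ActionSemantics.ACHIEVEMENT
```

Making the classification executable is one way to keep the semantics explicit when messages cross the BPM/UML divide.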
Communication semantics can then be fully rounded off by adding changes in agents’ expectations to the ones in the states of objects and activities, all neatly modeled with state machines.
Communication semantics: changes in expectations, objects, and activities.
It must also be noted that factoring out the modeling of agents’ expectations with regard to communications is in line with the principles of service-oriented architectures (SOA).
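Since the text notes that changes in expectations, objects, and activities can all be modeled with state machines, a minimal sketch may help; the states and events below are invented for illustration, tracking only an agent's expectation:

```python
class StateMachine:
    """Minimal state machine: transitions maps (state, event) -> next state."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition for {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# An agent's expectation tracked as a tiny machine (names are illustrative):
expectation = StateMachine("none", {
    ("none", "request_sent"): "pending",
    ("pending", "answer_received"): "fulfilled",
})
```

The same machinery would track the states of objects and activities; the point is that all three kinds of change share one modeling construct.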
Additionally, transitions would have to be characterized by:
Time-frame: single or different.
Address space: single or different.
Mode: information, request for information, or request for action.
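The three characteristics of a transition can be captured as a small value type; the field and method names are assumptions for illustration, including the reading of a shared time-frame as synchronous behavior:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    """The three transition modes listed above."""
    INFORMATION = "information"
    REQUEST_FOR_INFORMATION = "request for information"
    REQUEST_FOR_ACTION = "request for action"

@dataclass(frozen=True)
class Transition:
    """A transition characterized by time-frame, address space, and mode."""
    same_time_frame: bool      # single (True) or different (False) time-frames
    same_address_space: bool   # single (True) or different (False) address spaces
    mode: Mode

    def is_synchronous(self) -> bool:
        # assumption: a single shared time-frame entails synchronous execution
        return self.same_time_frame
```

Keeping these three characteristics explicit is what would let BPM transitions be mapped to their UML counterparts without ambiguity.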
Organizing business processes along those principles would enable the alignment of BPMs with their UML counterparts.
Use Cases as Bridges between BPM & UML
Use cases can be seen as the default entry point for UML modeling, and as such they should provide a bridge from business process models. That can be achieved if use cases are understood as a combination of interactions and activities, the former obtained from communications as defined above, the latter from business logic.
Use Cases are best understood as a combination of interactions and business logic
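That combination can be sketched directly: a use case pairs a list of interactions (from communication semantics) with a business-logic function that processes the flow. All names here are illustrative assumptions, not UML constructs:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class UseCase:
    """A use case as a combination of interactions and business logic."""
    name: str
    interactions: List[str]                      # from communication semantics
    business_logic: Callable[[Dict], Dict]       # from business process models

    def execute(self, flow: Dict) -> Dict:
        # interactions frame the exchange; business logic processes the flow
        return self.business_logic(flow)
```

Because the two halves are independent, the same business logic could be reused behind different sets of interactions, and vice versa.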
One step further, the distinction between communication semantics and business contents could be used to align models respectively to systems and software architectures:
Communication semantic constructs could be mapped to systems architectures, ideally service-oriented ones.
Business-specific contents could be mapped to software architectures, i.e. applications.
Before assessing criteria for long-term commitments, initial selection of a method or framework for systems engineering should consider four basic principles: continuity, duality, parsimony, and artifacts precedence.
Picking a Framework (Spring Garden, Tehran)
Continuity
Modus operandi are built on people’s understandings, practices, and skills, which cannot be changed as easily as tools. In other words, “big bang” solutions should be avoided when considering changes in systems governance and software engineering processes.
Duality
While any solution will necessarily entail collaboration between business and systems analysts, they belong to realms with built-in differences of concerns and culture. Assuming that the divide can be sewn up by canny procedures is tantamount to ignoring the very purpose of the framework.
Parsimony
According to Occam’s Razor, when faced with competing options, the one with the fewest assumptions should be selected. That principle is especially critical when dealing with organizational options that cannot be easily reversed or even adjusted. Hence, when alternative engineering processes are considered, a simple and robust solution should be selected as a default option, and extensions added for specific projects if and when needed.
Artifacts Precedence
Assuming that enterprise architecture entails the continuity, permanence, and reuse of shared descriptions and understandings, symbolic artifacts can be seen as the cornerstone of the whole undertaking. As a corollary, and whatever the framework or methodology, the core of managed artifacts should be clearly defined before considering the processes that will use them.
Enterprise architecture being a nascent discipline, its boundaries and categories of concerns are still in the making. Yet, as blurs on pivotal concepts are bound to jeopardize further advances, clarification is called for regarding the concept of “capability”, whose meaning seems to dither somewhere between architecture, function, and process.
Jumping capability of a four-legged structure (Edgard de Souza)
Hence the benefits of applying definition guidelines to characterize capability with regard to context (architectures) and purpose (alignment between architectures and processes).
Context: Capability & Architecture
Assuming that a capability describes what can be done with a resource, applying the term to architectures would implicitly make them a mix of assets and mechanisms meant to support processes. As a corollary, such an understanding would entail a clear distinction between architectures on one hand and supported processes on the other; that would, by the way, make an oxymoron of the expression “process architecture”.
On that basis, capabilities could initially be defined independently of business specifics, yet necessarily with regard to architecture context:
Business capabilities: what can be achieved given assets (technical, financial, human), organization, and information structures.
Systems capabilities: what kind of processes can be supported by systems functionalities.
Platforms capabilities: what kind of functionalities can be implemented.
Well established concepts are used to describe architecture capabilities
Taking a leaf from the Zachman framework, five core capabilities can be identified, cutting across those architecture contexts:
Who: authentication and authorization for agents (human or otherwise) and roles dealing with the enterprise, using system functionalities, or connecting through physical entry points.
What: structure and semantics of business objects, symbolic representations, and physical records.
How: organization and versatility of business rules.
Where: physical location of organizational units, processing units, and physical entry points.
When: synchronization of process execution with regard to external events.
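Crossing the three architecture contexts with those five questions yields a small capability matrix. The entries below are illustrative examples drawn from the descriptions above, not a standard taxonomy:

```python
# Illustrative capability matrix: architecture levels crossed with the five
# Zachman-inspired questions (all entries are examples, not a standard).
CAPABILITIES = {
    "enterprise": {
        "who": "agents and roles dealing with the enterprise",
        "what": "structure and semantics of business objects",
        "how": "organization of business rules",
        "where": "location of organizational units",
        "when": "synchronization with business events",
    },
    "systems": {
        "who": "users of system functionalities",
        "what": "symbolic representations",
        "how": "versatility of supported rules",
        "where": "location of processing units",
        "when": "synchronization with system events",
    },
    "platforms": {
        "who": "access through physical entry points",
        "what": "physical records",
        "how": "implemented rules",
        "where": "location of physical entry points",
        "when": "synchronization of process execution",
    },
}

def capability(level: str, question: str) -> str:
    """Look up the capability description for a given level and question."""
    return CAPABILITIES[level][question]
```

Such a matrix makes the holistic character of the capabilities concrete: each row only makes sense for the architecture level as a whole.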
Being set with regard to architecture levels, those capabilities are inherently holistic and can only pertain to the enterprise as a whole, e.g. for benchmarking. Yet that is not enough if the aim is to assess architecture capabilities with regard to supported processes.
Purpose: Capability vs Process
Given that capabilities describe architectural features, they can be defined independently of processes. Pushing the reasoning to its limit, one could, as illustrated by the table above, figure a capability without even the possibility of a process. Nonetheless, as the purpose of capabilities is to align supporting architectures and supported processes, processes must indeed be introduced, and the relationship addressed and assessed.
First of all, it’s important to note that trying to establish a direct mapping between capabilities and processes would be self-defeating, as it would fly in the face of architecture understood as a shared construct of assets and mechanisms. Rather, the mapping of processes to architectures is best understood with regard to architecture level: traceable between requirements and applications, designed at system level, holistic at enterprise level.
Alignment with processes is mediated by architecture complexity.
Assuming a service-oriented architecture, capabilities would be used to align enterprise and system architectures with their process counterparts:
Holistic capabilities will be aligned with business objectives set at enterprise level.
Services will be aligned with business functions and designed with regard to holistic capabilities.
Services are a perfect match for capabilities
Moreover, with or without service-oriented architectures, that approach could still be used to map functional and non-functional requirements to architecture capabilities.
Functional requirements are defined with regard to business processes, non functional ones with regard to system capabilities.
The alignment of non-functional requirements with architecture capabilities can be seen as a key factor for enterprise architectures, as it draws the line between what can be owned and managed by business units and what must be shared at enterprise level. It must also be noted that non-functional requirements should not be seen as a one-size-fits-all category, but should be defined by the footprint of business requirements on technical architecture.
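The routing implied by that line can be sketched in a few lines; the function and its labels are hypothetical, standing in for whatever governance process actually assigns ownership:

```python
# Hypothetical routing sketch: functional requirements are owned through
# business processes, non-functional ones through architecture capabilities.
def route_requirement(kind: str, target: str) -> str:
    """Return the ownership line for a requirement of the given kind."""
    if kind == "functional":
        return f"business process: {target}"          # owned by business units
    if kind == "non-functional":
        return f"architecture capability: {target}"   # shared at enterprise level
    raise ValueError(f"unknown requirement kind: {kind!r}")
```

Trivial as it is, the sketch shows the dividing line the paragraph describes: the kind of requirement, not its content, decides where it is owned.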