From Processes to Services

Objective

Even in the thick of perplexing debates, enterprise architects often agree on the meaning of processes and services, the former set from a business perspective, the latter from a system one. Considering the rarity of such a consensus, it could be used to rally the different approaches around a common understanding of some of EA’s objectives.

Process with service (Robert Capa)

A Governing Dilemma

Systems have long been of three different species that communicated but didn't interbreed: information systems calmly processed business records, industrial ones tensely controlled physical devices, and embedded ones lived their whole life stowed away within devices. Lately, and contrary to the natural law of evolution, those three species have started to merge into a versatile and powerful new breed keen to colonize the whole of the enterprise ecosystem.

When faced with those pervading systems, enterprises usually waver between two policies, containment or integration: the former strives to weld and confine all systems within technology boundaries, the latter to fragment them and share out the pieces between whichever business units are ready to take charge.

While each approach may provide acceptable compromises in some contexts, both suffer critical flaws:

  • Centralized solutions constrict business opportunities and innovation by putting all concerns under a single unwieldy lid of technical constraints.
  • Federated solutions rely on integration mechanisms whose increasing size and complexity put the whole of systems integrity and adaptability on the line.

Service oriented architectures may provide a way out of this dilemma by introducing a functional bridge between enterprise governance and systems architectures.

Separation of Concerns

Since governance is meant to be driven by concerns, one should first consider the respective rationales behind business processes and system functionalities, the former driven by contexts and opportunities, and the latter by functional requirements and platforms implementation.

While business processes usually involve various degrees of collaboration between enterprises, their primary objective is to fulfill each one's very specific agenda, namely to beat the others and be the first to take advantage of market opportunities. That puts systems at the crossing of two perspectives: from a business point of view they are designed to provide a competitive edge, but from an engineering standpoint they aim at standards and open infrastructures. Clearly, there is no reason to assume that those perspectives should coincide, one being driven by changes in competitive environments, the other by continuity and interoperability of systems platforms. That's where Service Oriented Architectures should help: by introducing a level of indirection between business processes and system functionalities, services naturally allow for the mapping of requirements with architecture capabilities.

Services provide a level of indirection between business and system concerns.

Along that reasoning (and the corresponding requirements taxonomy), the design of services would be assessed in terms of optimization under constraints: given enterprise organization and objectives (business requirements), the problem is to maximize the business value of supporting systems (functional requirements) within the limits set by implementation platforms (non functional requirements).
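
Schematically, that optimization reads as follows; the symbols are ad hoc and only restate the sentence above, not a formulation from any EA standard:

```latex
% F: supported system functionalities (functional requirements)
% V(F): business value, given enterprise organization and objectives
% c_j(F): demand put on platform j; p_j: limit set by its implementation
\max_{F}\; V(F)
\qquad \text{subject to}\qquad
c_j(F) \le p_j \;\; \text{for every platform } j
```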

Services & Capabilities

Architectures and processes are orthogonal descriptions respectively for enterprise assets and activities. Looking for the footprint of supporting systems, the first step is to consider how business processes should refer to architecture capabilities:

  • From a business perspective, i.e disregarding supporting systems and platforms, processes can be defined in terms of symbolic objects, business logic, and the roles of agents, devices, and systems.
  • The functional perspective looks at the role of supporting systems; as such, it is governed by business objectives and subject to technical constraints.
  • From a technical perspective, i.e disregarding the symbolic contents of interactions between systems and contexts, operational processes are characterized by the nature of interfaces (human, devices, or other systems), locations (centralized or distributed), and synchronization constraints.

Service oriented architectures typify the functional perspective by factoring out the symbolic contents of system functionalities, introducing services as symbolic hinges between enterprise and system architectures. And when defined in terms of customers, messages, contract, policy, and endpoints, services can be directly mapped to architecture capabilities.

Services are a perfect match for capabilities
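
As an illustration, those five facets can be captured in a type definition. The following is a minimal sketch in TypeScript; all names are hypothetical, not taken from any standard:

```typescript
// Hypothetical sketch: a service defined by its five facets,
// each facet mirroring an architecture capability.
type SymbolicObject = { id: string; payload: unknown };      // messages: symbolic contents only
type Customer = { processId: string };                       // business processes registered as customers
type Contract = (input: SymbolicObject) => SymbolicObject;   // processing of symbolic flows (business logic)
type Policy = { availability: string; reliability: string }; // operational concerns (execution)
type Endpoint = { location: string };                        // set at architecture level, not by business

interface Service {
  customers: Customer[];
  messages: SymbolicObject[];
  contract: Contract;
  policy: Policy;
  endpoints: Endpoint[];
}
```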

Moreover, with services defined in terms of architecture capabilities, the divide between business and operational requirements can be drawn explicitly:

  • Actual (external) entities and their symbolic counterpart: services only deal with symbolic objects (messages).
  • Actual entities and their roles: services know nothing about physical agents, only about symbolic customers.
  • Business logic and processes execution: contracts deal with the processing of symbolic flows, policies deal with operational concerns.
  • External events and system time: service transactions are ACID, i.e from the customer's standpoint they appear to be timeless.

Those distinctions are used to factor out the common backbone of enterprise and system architectures, and as such they play a pivotal role in their alignment.

Anchoring Business Requirements to Supporting Systems

Business processes are meant to meet enterprise objectives given contexts and resources. But if the alignment of enterprise and system architectures is to be shielded from changes in business opportunities and platforms implementation, system functionalities will have to support a wide range of shifting business goals while securing the continuity and consistency of shared resources and communication mechanisms. In order to conciliate business changes with system continuity, business processes must be anchored to objects and activities whose identity and semantics are set at enterprise level independently of the part played by supporting systems:

  • Persistent units (aka business objects): structured information uniquely associated to identified individuals in business context. Life cycle and integrity of symbolic representations must be managed independently of business processes execution.
  • Functional and execution units: structured activity triggered by an event identified in business context, and whose execution is bound to a set of business objects. State of symbolic representations must be managed in isolation for the duration of process execution.
Services can be defined according to persistency and functional units (#)
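
A minimal sketch of those two kinds of anchors (TypeScript, hypothetical names):

```typescript
// Hypothetical sketch: business anchors set at enterprise level,
// independently of the part played by supporting systems.
interface PersistentUnit {           // aka business object
  identity: string;                  // uniquely associated to an identified individual in context
  state: Record<string, unknown>;    // structured information
  // life cycle and integrity managed independently of process execution
}

interface ExecutionUnit {            // functional/execution unit
  triggeringEvent: string;           // event identified in business context
  boundObjects: PersistentUnit[];    // business objects bound to the execution
  // state managed in isolation for the duration of the execution
}
```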

The coupling between business units (persistent or transient) identified at business level and their system counterpart can be secured through services defined with regard to business processes (customers), business objects (messages), business logic (contract), and business operations (policy).

It must be noted that while services specifications for customers, messages, contracts, and policy are identified at business level and completed at functional level, that’s not the case for endpoints since services locations are set at architecture level independently of business requirements.

Filling out Functional Requirements

Functional requirements are set in two dimensions, symbolic and operational; the former deals with the contents exchanged between business processes and supporting systems with regard to objects, activities and events, or actors; the latter deals with the actual circumstances of the exchanges: locations, interfaces, execution constraints, etc.

Given that services are by nature shared and symbolic, they can only be defined between systems. As a corollary, when functionalities are slated as services, a clear distinction should be maintained between the symbolic contents exchanged between business processes and supporting systems, and the operational circumstances of actual interactions with actors.

Interactions: symbolic and local (a), non symbolic and local (b), symbolic and shared (c).

Depending on the preferred approach for requirements capture, symbolic contents can be specified at system boundaries (e.g use cases), or at business level (e.g users’ stories). Regardless, both approaches can be used to flesh out the symbolic descriptions of functional and persistency units.

From a business process standpoint, users (actors in UML parlance) should not be seen as agents but as the roles agents play in enterprise organization, possibly with constraints regarding the quality of service at entry points. That distinction between agents and roles is critical if functional architectures are to dissociate changes in business processes on one hand from changes in platform implementation on the other.

Along that understanding, actors triggering use cases (aka primary actors) can stand for human agents as well as devices or systems. Yet, as far as symbolic flows are concerned, only human agents and systems are relevant (devices have no symbolic capabilities of their own). On the receiving end of use cases (aka secondary actors), only systems are to be considered for supporting services.

Mapping Processes to Services (through Use Cases)

Hence, when requirements are expressed through use cases, and assuming they are to be realized (fully or partially) through services (a sketch follows the list):

  • Persistency and functional units identified by business processes would be mapped to messages and contracts.
  • Business processes would fit service policy.
  • Use case containers (aka systems) would be registered as service customers.
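
Assuming hypothetical names, that mapping could be sketched as follows; note that endpoints are deliberately left out, since they are set at architecture level:

```typescript
// Hypothetical sketch: deriving service elements from a use case realization.
interface UseCase {
  container: string;          // the system owning the use case
  process: string;            // the supported business process
  persistentUnits: string[];  // business objects referenced
  functionalUnits: string[];  // business logic invoked
}

function toServiceSpec(uc: UseCase) {
  return {
    customers: [uc.container],     // use case containers registered as customers
    messages: uc.persistentUnits,  // persistency units mapped to messages
    contracts: uc.functionalUnits, // functional units mapped to contracts
    policy: uc.process,            // the business process fits the service policy
    // endpoints deliberately omitted: set at architecture level
  };
}
```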

Alternatively, when requirements are set from users’ stories instead of use cases, persistency and functional units have to be elicited through stories, traced back to business processes, and consolidated into features. Those features will be mapped into system functionalities possibly, but not necessarily, implemented as services.

Mapping Processes to Services (through Users’ Stories)

Hence, while the mapping of business objects and logic respectively to messages and contracts will be similar with use cases and users’ stories, paths will differ for customers and policies:

  • Given that use cases deal explicitly with interactions at system boundaries, they represent a primary source of requirements for services’ customers and policy. Yet, as services are not supposed to be directly affected by interactions at systems boundaries, those elements would have to be consolidated across use cases.
  • Users’ stories for their part are told from a business process perspective that may take into account boundaries and actors but are not focused on them. Depending on the standpoint, it should be possible to define customers and policies requirements for services independently of the contingencies of local interactions.

In both cases, it would be necessary to factor out the non symbolic (aka non functional) part of requirements.

Non Functional Requirements: Quality of Service and System Boundaries

Non functional requirements are meant to set apart the constraints on systems’ resources and performances that could be dealt with independently of business contents. While some may target specific business applications, and others encompass a broader range, the aim is to separate business from architecture concerns and allocate the responsibilities (specification, development, and acceptance) accordingly.

Assuming an architecture of services aligned on capabilities, the first step would be to sort operational constraints:

  • Customers: constraints on usability, customization, response time, availability, …
  • Messages: constraints on scale, confidentiality, compliance with regulations, …
  • Contracts: constraints on scale, confidentiality, …
  • Policy: availability, reliability, maintenance, …
  • Endpoints: costs, maintenance, security, interoperability, …
Non functional constraints may cut across services and capabilities
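
A first-cut sorting along those lines might be recorded as follows; the categories come from the list above, the layout is only a sketch:

```typescript
// Hypothetical sketch: operational constraints sorted by service facet.
type Facet = "customers" | "messages" | "contracts" | "policy" | "endpoints";

const constraints: Record<Facet, string[]> = {
  customers: ["usability", "customization", "response time", "availability"],
  messages:  ["scale", "confidentiality", "regulatory compliance"],
  contracts: ["scale", "confidentiality"],
  policy:    ["availability", "reliability", "maintenance"],
  endpoints: ["costs", "maintenance", "security", "interoperability"],
};

// A single concern may cut across facets, e.g confidentiality:
const crossCutting = (Object.keys(constraints) as Facet[])
  .filter(f => constraints[f].includes("confidentiality")); // ["messages", "contracts"]
```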

Since constraints may cut across services and capabilities, non functional requirements are not a given but the result of explicit decisions about:

  • Architecture level: should the constraint be dealt with locally (interfaces), at functional level (services), or at technical level (resources)?
  • Services: when set at functional level, should the constraint be dealt with by business services (e.g domain or activity), or architecture ones (e.g authorization or orchestration)?

The alignment of services with architecture capabilities will greatly enhance the traceability and rationality of those decisions.

A Simple Example

This example is based on the Purchase Order case published with the OMG specifications: http://www.omg.org/spec/SoaML/1.0/Beta2/PDF/

A simple purchase order process analyzed in terms of service customers, messages and entities (#), contracts, and policy (aka choreography)


Modeling Languages: Differences Matter

“Since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on.”

Jonathan Swift, Gulliver’s Travels

Objective

Modeling languages are meant to support the description of contexts and concerns from specific standpoints. And because those different perspectives have to be mapped against shared references, languages must also provide constructs for ironing out variants and factoring out constants.

Ironing out differences in features and behaviors (Elliott Erwitt)

Yet, while most languages generally agree on basic definitions of objects and behaviors, many distinctions are ignored or subject to controversial understanding; such shortcomings might be critical when architecture capabilities are concerned:

  • Actual entities and their symbolic counterpart.
  • Actual entities and their roles.
  • Business logic and business operations.
  • External events and system time.

Architecture Capabilities

As those distinctions set the backbone of functional architectures, languages should be assessed according to their ability to express them unambiguously using the smallest possible set of constructs.

Business objects vs Symbolic Surrogates

As far as symbolic systems are concerned, the primary purpose of models is to distinguish between targets and representations. Hence the first assessment yardstick: modeling languages must support a clear and unambiguous distinction between objects and behaviors of concern on one hand, symbolic system surrogates on the other hand.

A primer on representation: objects of concerns and surrogates

Yet, if objects and behaviors are to be consistently and continuously managed, modeling languages must also provide common constructs mapping identities and structures of actual entities to their symbolic counterpart.

Agents vs Roles

Given that systems are built to support business processes, modeling languages should enable a clear distinction between physical entities able to interact with systems on one hand, roles as defined by enterprise organization on the other hand.

Agents and Roles

In that case the mapping of actual entities to systems representations is not about identities but functionalities: a level of indirection has to be introduced between enterprise organization and system technology because the same roles can be played, simultaneously or successively, by people, devices, or other systems.

Business logic vs Business operations

Just like actual objects are not to be confused with their symbolic description, modeling languages must make a clear distinction between business logic and processes execution, the former defining how to process symbolic objects and flows, the latter dealing with the coupling between process execution and changes in actual contexts. That distinction is of particular importance if business and organizational decisions are to be made independently of supporting systems technology.

Business logic and operational contingencies

Language constructs must also support the consolidation of functional and operational units, the former being defined by integrity constraints on symbolic flows and roles authorizations on objects and operations, the latter taking into account synchronization constraints between the state of actual contexts and the state of their system symbolic counterpart. And for that purpose languages must support the distinction between external and internal events.

External vs Internal Events

Paraphrasing Albert Einstein, time is what happens between events. As a corollary, external time is what happens between context events, and internal time is what happens between system ones. While both can coincide for single systems and locations, no such assumption should be made for distributed systems set in different locations. In that case modeling languages should support a clear distinction between external events signalling changes set in actual locations, and internal events signalling changes affecting system surrogates.

Time scales are set by events and concerns

Along that perspective synchronization is to be achieved through the consolidation of time scales. For single locations that can be done using system clocks, across distributed locations the consolidation will entail dedicated time frames and mechanisms set in reference to some initial external event.

Conclusion: How to share differences across perspectives

Somewhat paradoxically, multiple modeling languages erase differences by paring down shared descriptions to some uniform lump that can be understood by all. Conversely, agreeing on a set of distinctions that should be supported by every language could provide an antidote to the Babel syndrome.

That approach can be especially effective for the alignment of enterprise and systems architectures as the four distinctions listed above are equally meaningful in both perspectives.


Tests in Driving Seats

Objective

Contrary to its manufacturing cousin, a long-time devotee of preventive policies, software engineering is still ambivalent regarding the benefits of integrating quality management with development itself. That certainly should raise some questions, as one would expect the quality of symbolic artifacts to be much easier to manage than that of their physical counterparts, if for no other reason than the former only has to check symbolic outcomes against symbolic specifications while the latter must also overcome the contingencies of non symbolic artifacts.

Walking Quality Hall (E. Erwitt)

Thanks to agile approaches, lessons from manufacturing are progressively being learned, with lean and just-in-time principles making tentative inroads into software engineering. Taking advantage of the homogeneity of symbolic development flows, agile methods have forsaken phased processes in favor of iterative ones, making a priority of continuous and value driven deliveries to business users. Instead of predefined sequences of dedicated tasks, products are developed through iterations regrouping definition, building, and acceptance into the same cycles. That push has put differentiated documentation and models in back seats, and may also introduce a new paradigm by putting tests in driving ones.

From Phased to Iterative Tests Management

Traditional (aka phased) processes follow a corrective strategy: tests are performed according to a Last In First Out (LIFO) framework, for components (unit tests), system (integration), and business (acceptance). As a consequence, faults in functional architecture risk being identified after components completion, and flaws in organization and business processes may not emerge before the integration of system functionalities. In other words, the faults with the most wide-ranging consequences may be the last to be detected.

Phased and Iterative approaches to tests

Iterative approaches follow a preemptive strategy: the sooner artifacts are tested, the better. The downside is that without differentiated and phased objectives, there is a question mark on the kind of specifications against which software products are to be tested; likewise, the question is how results are to be managed across iteration cycles, especially if changing requirements are to be taken into account.

Looking for answers, one should first consider how requirements taxonomy can support tests management.

Requirements Taxonomy and Tests Management

Whatever the methods or forms (users’ stories, use case, functional specifications, etc), requirements are meant to describe what is expected from systems, and as such they have two main purposes: (1) to serve as a reference for architects and engineers in software design and (2) to serve as a reference for tests and acceptance.

With regard to those purposes, phased development models have been providing clearly defined steps (e.g requirements, analysis, design, implementation) and corresponding responsibilities. But when iterative cycles are applied to progressively refined requirements, those “facilities” are no longer available. Nonetheless, since tests and acceptance are still to be performed, a requirements taxonomy may replace phased steps as a testing framework.

Taxonomies being built on purpose, one supporting iterative tests should consider two criteria, one driven by targeted contents, the other by modus operandi:

With regard to contents, requirements must be classified depending on who’s to decide: business and functional requirements are driven by users’ value and directly contribute to business experience; non functional requirements are driven by technical considerations. Overlapping concerns are usually regrouped as quality of service.

Requirements with regard to Acceptance.

That distinction between business and architecture driven requirements is at the root of portfolio management: projects with specific business stakeholders are best developed with the agile development model, architecture driven projects set across business domains may call for phased schemes.

That requirements taxonomy can be directly used to build its testing counterpart. As developed by D. Leffingwell (see selected readings), tests should also be classified with regard to their modus operandi, the distinction being between those that can be performed continuously along development iterations and those that are only relevant once products are set within their technical or business contexts. As it happens, those requirements and tests classifications are congruent:

  • Unit and component tests (Q1) cover technical requirements and can be performed on development artifacts independently of their functionalities.
  • Functional tests (Q2) deal with system functionalities as expressed by users (e.g with stories or use cases), independently of operational or technical considerations.
  • System acceptance tests (Q3) verify that those functionalities, when performed at enterprise level, effectively support business processes.
  • System qualities tests (Q4) verify that those functionalities, when performed at enterprise level, are supported by architecture capabilities.

Tests Matrix for target and MO (adapted from D. Leffingwell).
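
The matrix can be summed up by treating the two criteria as booleans; the following sketch (hypothetical names) simply indexes the four quadrants:

```typescript
// Hypothetical sketch: the four test quadrants indexed by two criteria.
interface TestKind {
  businessDriven: boolean; // users' value (true) vs technical considerations (false)
  inContext: boolean;      // performed in business/technical context (true)
                           // vs along development iterations (false)
}

function quadrant(t: TestKind): string {
  if (!t.businessDriven && !t.inContext) return "Q1: unit & component tests";
  if (t.businessDriven && !t.inContext)  return "Q2: functional tests";
  if (t.businessDriven && t.inContext)   return "Q3: system acceptance tests";
  return "Q4: system qualities tests";
}

console.log(quadrant({ businessDriven: true, inContext: false })); // "Q2: functional tests"
```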

Besides the specific use of each criterion in deciding who's to handle tests, and when, combining criteria brings additional answers regarding automation: product acceptance should be performed manually at business level, preferably by tools at system level; tests performed along development iterations can be fully automated for units and components (white-box), but only partially for functionalities (black-box).

That tests classification can be used to distinguish between phased and iterative tests: the organization of tests targeting products and systems from business (Q3) or technology (Q4) perspectives is clearly not supposed to be affected by development models, phased or iterative, even if resources used during development may be reused. That’s not the case for the organization of the tests targeting functionalities (Q2) or components (Q1).

Iterative Tests

Contrary to tests aiming at products and systems (Q3 and Q4), those performed on development artifacts cannot be set on fixed and well-defined specifications: being managed within iteration cycles they must deal with moving targets.

Unit and component tests (Q1) are white-box operations meant to verify the implementation of functionalities; as a consequence:

  • They can be performed iteratively on software increments.
  • They must take into account technical requirements.
  • They must be aligned on the implementation of tested functionalities.

Iterative (aka development) tests for technical (Q1) and functional (Q2) requirements.

Hence, if unit and component tests are to be performed iteratively, (1) they must be set against features, and (2) functional tests must be properly documented and available for reuse.

Functional tests (Q2) are black-box operations meant to validate system behavior with regard to users’ expectations; as a consequence:

  • They can be performed iteratively on software increments.
  • They don’t have to take into account technical requirements.
  • They must be aligned on business requirements (e.g users’ stories or use cases).

Assuming (see previous post) a set of stories (a,b,c,d) identified by alternative paths built from features (f1…5), functional tests (Q2) are to be defined and performed for each story, and then reused to test the implementation of associated features (Q1).

Functional tests are set along stories, unit and component tests are set along features.
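
Assuming an illustrative mapping of stories to features (the actual paths of the figure are not specified here), the association could be recorded as:

```typescript
// Hypothetical sketch: stories as alternative paths across features
// (the mapping below is illustrative, not taken from the figure).
const stories: Record<string, string[]> = {
  a: ["f1", "f2"],
  b: ["f1", "f3", "f4"],
  c: ["f2", "f5"],
  d: ["f3", "f5"],
};

// Functional tests (Q2) are defined per story; their unit-level parts (Q1)
// are then reused to test the implementation of the associated features.
const featuresUnderTest = (story: string): string[] => stories[story] ?? [];
console.log(featuresUnderTest("b")); // ["f1", "f3", "f4"]
```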

At that point two questions must be answered:

  • Given that stories can be changed, expanded or refined along development iterations, how to manage the association between requirements and functional tests?
  • Given that backlogs can be rearranged along development cycles according to changing priorities, how to update tests, manage traceability, and prevent regression?

With model-driven approaches no longer available, one should consider a mirror alternative, namely test-driven development.

Test Driven Development

Test driven development can be seen as a mirror image of model driven development, a somewhat logical consequence considering the limited role of models in agile approaches.

The core of agile principles is to put the definition, building and acceptance of software products under shared ownership, direct collaboration, and collective responsibility:

  • Shared ownership: a project team groups users and developers and its first objective is to consolidate their respective concerns.
  • Direct collaboration: decisions are taken by team members, without any organizational mediation or external interference.
  • Collective responsibility: decisions about stories, priorities and refinements are negotiated between team members from both sides of the business/system (or users/developers) divide.

Assuming those principles are effectively put to work, there seems to be little room for organized and persistent documentation, as users’ stories are meant to be developed, and products released, in continuity, and changes introduced as new stories.

With such lean and just-in-time processes, documentation, if any, is by nature transient, falling short as a support of test plans and results, even when problems and corrections are formulated as stories and managed through backlogs. In such circumstances, without specifications or models available as development handrails, could that be achieved by tests?

Given the ephemeral nature of users’ stories, functional tests should take the lead.

To begin with, users’ stories have to be reconsidered. The distinction between functional tests on one hand, unit and component tests on the other hand, reflects the divide between business and technical concerns. While those concerns may be mixed in users’ stories, they are progressively set apart along iteration cycles. It means that users’ stories are, by nature, transitory, and as a consequence cannot be used to support tests management.

The case for features is different. While they cannot be fully defined up-front, features are not transient: being shared by different stories and bound to system functionalities, they are supposed to provide some continuity. Likewise, notwithstanding their changing contents, users' stories should be soundly identified by solution paths across problem spaces.

Paths and Features can be identified consistently along iteration cycles.

That can provide a stable framework supporting the management of development tests (a sketch follows the list):

  • Unit tests are specified from crosses between solution paths (described by stories or scenarii) and features.
  • Functional tests are defined by solution paths and built from unit tests associated to the corresponding features.
  • Component tests are defined by features and built by the consolidation of unit tests defined for each targeted feature according to technical constraints.
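
A minimal sketch of that framework, with unit tests indexed by (path, feature) crosses and the other two kinds derived from them; all names and data are illustrative:

```typescript
// Hypothetical sketch: unit tests indexed by crosses between
// solution paths (stories/scenarii) and features.
type UnitTest = { path: string; feature: string; run: () => boolean };

const unitTests: UnitTest[] = [
  { path: "a", feature: "f1", run: () => true },
  { path: "a", feature: "f2", run: () => true },
  { path: "b", feature: "f1", run: () => true },
];

// Functional tests: the unit tests along a given solution path.
const functionalTest = (path: string): UnitTest[] =>
  unitTests.filter(t => t.path === path);

// Component tests: the unit tests targeting a given feature, consolidated
// across paths (technical constraints would be added here).
const componentTest = (feature: string): UnitTest[] =>
  unitTests.filter(t => t.feature === feature);

console.log(functionalTest("a").length); // 2
console.log(componentTest("f1").length); // 2
```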

The margins of that matrix support the continuous and consistent identification of functional and component tests, whose contents can be extended or updated through changes made to unit tests.

One step further, and tests can even be used to drive iteration cycles: once features and solution paths are soundly identified, there is no need to swell backlogs with detailed stories whose shelf life will be limited. Instead, development processes would get leaner if extensions and refinements could be directly expressed as unit tests.

System Quality and Acceptance Tests

Contrary to development tests, which are applied iteratively to programs, system tests are applied to released products and must take into account requirements that cannot be directly or uniquely attached to users' stories, either because they cannot be expressed from a business perspective or because they are shared concerns best described as features. Tests for those requirements will be consolidated with development ones into system quality and acceptance tests:

  • System Quality Tests deal with performances and resources from the system management perspective. As such they will combine component and functional tests in operational configurations without taking into account their business contents.
  • System Acceptance Tests deal with the quality of service from the business process perspective. As such they will perform functional tests in operational configurations taking into account business contents and users' experience.

Development Tests are to be consolidated into Product and System Acceptance Tests.

Requirements set too early and quality checks performed too late are at the root of phased processes' predicaments, and that can be fixed with a two-pronged policy: a preemptive policy based upon a requirements taxonomy organizing problem spaces according to concerns (business value, system functionalities, components designs, platforms configuration); a corrective policy driven by the exploration of solution paths, with developments and releases driven by quality concerns.

Tests & Framework

Insofar as large and complex enterprise architectures are concerned, it's safe to assume that different development models (agile or phased) and tests policies (unit, system, acceptance, …) will have to cohabit, and that would not be possible without an architecture framework:

  • Development or unit tests are defined at platform level and applied to software components.
  • Integration or system tests are defined at system level and built from tested components.
  • Acceptance tests are defined at enterprise level and built from tested functionalities.

Tests should be aligned on architecture layers

On a broader perspective such a framework is to provide the foundation of enterprise architecture workflows.


Spaces, Paths, Paces (Part 1)

Objective

Development processes start with requirements and wind up in code; in between there isn't much of a consensus among the software engineering community about how to define the scope (spaces), how to sequence the tasks (paths), and how to time deliveries (paces). On one side of the debate, phased approaches hope for fixed spaces and ordered paths but often get entangled in moving lines. On the other side, agile teams try to find their space by increments but risk losing their path while still on their way.

Maze revisited: finding a path while building the space (Ibrahim El-Salahi)

This lack of agreed upon concepts and principles leaves personal skills and best practices as the primary success factors. That could explain the rate of failures for software projects, significantly higher than for "hard" engineering ones; given the quasi absence of physical constraints, the opposite would have been expected, which suggests some critical intrinsic flaw.

With the “benefits” of hindsight and agile assessment of waterfall flaws, the focus has been put on fixed scope and schedule, in particular with regard to requirements and quality management:

  • Fixed requirements set upfront: since there is an inverse relationship between the level of details and the reliability and stability of requirements, staking the whole project on requirements fully defined at such an early time is arguably a very hazardous policy.
  • Quality as an afterthought: given that finding defects is not very gratifying when undertaken in isolation, delegating the task will offer few guarantees if not associated with rewards commensurate with findings; moreover, quality as a detached concern may easily turn into collateral damage when set against mounting costs and scheduling constraints. Alternatively, quality checks may change into a more positive endeavor when conducted as an intrinsic part of development.

The agile answer to those failings has been to conduct specifications, development, and quality assurance as integrated iterations. As a consequence, the definition of scope becomes a byproduct of development cycles, with requirements itemized as features in order to be developed progressively. Moreover, with specifications and schedules managed dynamically, timetables become impracticable and deliveries can only be carried out by shuttles.

The agile "reformation" has opened new perspectives and begotten many fruitful practices, and the objective here is to see how those approaches of scope and schedule can be reformulated within the perspective of architecture layers. This part examines the congruence between the alternate flows of use cases and the backlog of users' stories, and considers their complementarity as path-finders. The second part will focus on the role of time-boxes as pace-makers and the benefits for quality assurance.

Architectures and Projects Scope

Whatever the compass, agile or phased, projects' footprints can be set across three architecture layers:

  • Enterprise architectures describe business environments and objectives, resources and regulatory constraints.
  • System architectures describe enterprises in terms of functional entities combining human agents with physical and software assets.
  • Technical architectures describe the platforms supporting functional entities.

Architecture layers vs Processes

Projects are meant to carry out changes within architectures initiated by business, engineering, or services management processes:

  • Business processes are defined by business environment and objectives. Changes may have to deal with domains and activities, organization and supported operations, and quality of service as experienced by users.
  • Engineering processes focus on the development of software systems supporting business processes: business domains and applications, system functionalities, platform implementations.
  • Services management stands between engineering deliveries and operational concerns: location of assets, access to services, releases deployment, and systems configuration.

While development projects may (and will usually) cross architecture layers, their roots and stakes should nonetheless be clearly positioned if projects are to be planned within the respective time-spans, governed by the relevant authority, and their products accepted by the right stakeholders.

Development Project, from Requirements to Deployments

That puts projects governance at a crossroads between (a) business objectives set by market opportunities, (b) the deployment of features into functional architectures, and (c) the deployment of releases according to changes in technical architecture. With phased developments, scope and schedules are fixed upfront, which means that the business layer forces its time-frame over those of the system and technical layers, which may introduce frictions regarding scope as well as quality:

  • With regard to scope, frictions stem from features and schedules set fully and definitively at enterprise level, independently of any feedback from functional and technical layers.
  • With regard to quality, frictions stem from tests performed at technical layer when scope and schedules can no longer be revised in case of negative outcomes.

The consequences are all too easy to observe, with business needs partially satisfied and software quality sacrificed. Hence the need of a balanced approach that would consolidate the different maps and time-frames in order to minimize frictions between layers.

Mapping Project Scope

Projects scope can be described along two dimensions, one set by business logic, the other by system functionalities:

  • First, one has to circumscribe the business variants to be taken into consideration. For that purpose the project footprint, first introduced as users' stories, will have to be documented by activity or business process diagrams.
  • Then, the project scope will have to mark out the subset of business requirements to be supported by system functionalities. That will usually be done with use cases describing interactions between system and users.

Complementary descriptions of projects footprints: use cases (interactions between users and system) and activity diagrams (business logic).

That makes those descriptions both orthogonal and complementary: orthogonal because use cases cut across activity diagrams, complementary because use cases are meaningless without targeted activities.

More importantly, they are associated with different architecture layers and governed by different concerns:

  • At business level (business processes or activities), the perimeter and granularity of requirements must be congruent with the continuity and consistency constraints of business objects and operations.
  • At functional level (use cases), the span and granularity of interactions between system and users must coincide with execution paths. But the rationale governing users interactions is not the same as the one governing the integrity of business processes. As a consequence, the paths considered for development may pick sequences of operations defined by business processes but should not define them anew based upon interaction constraints.
  • Finally, assuming that use cases see systems as black-boxes, their footprint should not depend on decisions taken at technical level.

Those concerns can be dealt with separately if projects scope is explored iteratively, e.g using activity diagrams for business logic and use case diagrams for users interactions.

Iterative Mapping of Project Footprint

Iterative development is not just about increments but, first and foremost, about exploring development spaces. That is especially useful when projects overlap architecture layers and cannot rely on fully fledged requirements.

Such projects have to deal with two challenges:

  • They must identify and manage work units according to the state of requirements and the nature of dependencies (business, organization, or technology, …).
  • They must carry on with developments based on incomplete specifications while exploring alternatives and deferring decisions until the "last responsible moment", when further delay would limit the options at hand.

Taking a leaf out of the agile book, projects should be driven by users’ value, with requirements first introduced as users’ stories. From that springboard, as informal and incomplete as could be, stories must be fleshed out and organized in order to support the reasoned exploration of project scope.

At inception stories are no more than a user, an objective, and an activity, all set at business level independently of the part played by systems. Scope exploration must therefore begin with the backbone of activities and be furthered with variants, aka scenarii.

Adding execution paths to project scope

Given a set of business scenarii, the candidates for system support must be ordered and mapped to system features, actual or planned. That should provide a blueprint for development paths. Unfortunately, as variants are added to plots, narratives can easily turn messy, mixing features and capabilities across architecture layers. And that’s where the benefits of use cases are to be found.

Development Paths: From Users’ Stories to Use Cases

Whatever the context, iterations are formal constructs defined by invariants, increments, and exit conditions. When applied to development spaces, iterations are defined by the following (a sketch follows the list):

  • Invariants: conditions on architectural assets supporting the scenario under consideration.
  • Increments: features or variants added to scenarii.
  • Exit condition: no more features or variants (empty backlog) or time-out.
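
Expressed as a generic loop, the construct reads as follows (a sketch with hypothetical names):

```typescript
// Hypothetical sketch: an iteration bound by invariants, fed by increments,
// and closed by an exit condition (empty backlog or time-out).
interface Cycle<T> {
  invariantHolds: () => boolean; // architectural assets supporting the scenario
  backlog: T[];                  // features or variants to be added
  apply: (increment: T) => void;
  timeOut: number;               // maximum number of increments
}

function iterate<T>(cycle: Cycle<T>): void {
  let elapsed = 0;
  while (cycle.backlog.length > 0 && elapsed < cycle.timeOut && cycle.invariantHolds()) {
    const increment = cycle.backlog.shift()!;
    cycle.apply(increment);
    elapsed += 1;
  }
}
```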

Applied to architecture layers, invariants provide for reasoned iterations and backlogs:

  1. The enterprise layer comes first: cycles are set for persistency and execution units and bound by domains (identification mechanisms, integrity constraints, and semantics); within cycles, increments target attributes, operations and variants.
  2. The functional layer comes second: cycles are set for interaction units (aka use cases), and bound by the continuity and consistency of business objects and activities; increments target transient attributes and operations.
  3. The technical layer comes last: cycles are set for platforms and bound by functional units.

But there is a catch: while users’ stories and activity (or business process) diagrams are set at enterprise level, development projects are considered at system level; because systems functionalities are not supposed to appear in users’ stories or activity diagrams, there could be a gap between business and functional requirements. As it happens, use cases provide a bridge: on one hand they focus on the interactions between users and systems, on the other hand their basic and alternate flows can be directly mapped to the paths in activity diagrams.

From alternative flows to Development Paths

That provides a clear and sound basis for the definition of development paths: on one hand alternate flows can be ranked according to users' priorities; on the other hand they determine the sequence of use cases that will have to be developed.

Backlogs and Pathfinders

While the objective of users’ stories is to tie up projects in business value, the objective of use cases is to anchor them in the context of system functionalities. That perspective, and the role of models, may be ignored for standalone projects, but it is necessary when project development paths are to be governed both by business and functional dependencies, described respectively by users’ stories and use cases.

In that case the exploration of development paths should be guided by invariants set along MDA model layers: computation independent (business processes), platform independent (systems functionalities), and platform specific (technology platforms).

Projects are rooted in use cases but development paths are governed by users' stories.

When projects are rooted in business activities (a), e.g the possibility of upgrading a customer, stories describe execution paths and are ranked according to business priorities. Iterations will proceed with development and new cycles added for alternative paths to the basic one.

Depending on context dependencies, development projects can be directly initiated from given sequences of activities (b) or conducted in parallel with users' stories. Use cases remain the option of choice when the features supporting users' stories are meant to be shared (e.g checkout). In that case the development paths are governed both by users' value and functional dependencies. When features are deemed specific (e.g upgrade), use cases can be bypassed and development paths explored simultaneously according to users' and development concerns.

Backlog organization is more complex when development paths cross the divide between functional and technical concerns. Ideally, one would expect a clear separation of concerns, with use cases defined independently of technical options, just like business logic doesn’t depend on system functionalities. But alternatives may be blurred due to the dependencies between interactions design and platform capabilities, the risk being to associate technical options with functional variants, e.g specialized use cases.

Entangled development paths: self check out depends on technical platform.

That’s the case when features can only be implemented on specific platforms. If those features are also specific the corresponding development cycle can be managed as a whole. Otherwise the relevant decisions should be factored out. The same principle applies for features supporting different business processes.

Squaring the Circles: From Epics to Releases

Iterations run within boundaries set by invariants, and with regard to projects scope, those invariants are set by architecture capabilities: enterprise on one hand, systems on the other hand.

From the enterprise perspective, development projects (b) are meant to support business objectives (a), not to define them. As a consequence, users’ stories must remain within borders set upfront. That can be achieved by introducing business projects (aka strategies, aka epics) and portfolios of development ones.

Development Paths: Portfolio of business objectives (a), associated backlogs of users' stories (b), targeted features (c), architectural capabilities (d), and releases (e).

From the system perspective, a clear distinction should be maintained between projects supported by platforms capabilities (b), and projects targeting platforms capabilities (d). Eventually, those different levels of explorations will have to be consolidated as releases (e), and that is where one may find the agile answer to waterfall.

A Time for Every Purpose: Time-boxes as Pace-Makers

As Einstein famously said, “The only reason for time is so that everything doesn’t happen at once.”  In other words time is what happens between events, and the use of a single time-frame will put all events under the same rationale.

But architectures are best understood as shearing layers whose events are governed by different rationales, respectively: business opportunities, engineering constraints, or operational needs.

That is arguably the critical flaw of waterfall solutions as they force business, development, and operations under the same set of strictures. And that’s why agile’s solution to components release may be its pivotal innovation as it establishes the autonomy of those three layers and introduces time-boxes as their pace-makers.


Spaces, Paths, Paces (Part 2)

Objective

As previously noted, embarking for sizable and lengthy project on the assumption that detailed scope and schedules can be set upfront is a very hazardous policy. Alternatively, agile development models carry out specifications, development, and quality assurance into integrated iterations, making room for a progressive exploration of problem spaces and solution paths, and consequently for informed decision-making and better risk management.

Paths & Paces (S.Dali)

Yet, whatever the method, at some point scope and features will have to be committed; and with targeted features set dynamically, planned schedules are no longer an option. Hence the need to reconsider the way time is taken into account.

Dependencies: Playing for Time

Paraphrasing Einstein, one may say that the only reason for processes is so that everything doesn’t happen at once. Why is that ?

First, processes are meant to support informed decisions. If problem spaces and solution paths cannot be settled upfront, they must be settled progressively. And that will clearly introduce informational dependencies supporting:

  • Decisions about what is to be done: alternative paths and their priorities
  • Decisions about how it should be done: supported features and their priorities
  • Decisions about how it can be done: quality assurance and acceptance.

In that context the objective of development processes is to carry out as much of the definition, building, and acceptance as possible while delaying decisions, so as to gather the most relevant information, until the "last responsible moment", i.e without preempting any of the initial set of alternative options.

Assuming informed decisions are supported by architecture knowledge, processes must take into account engineering constraints regarding:

  • Technical dependencies associated to the nature of development flows and environments.
  • Functional dependencies associated to products functionalities and supporting systems.
  • Organizational dependencies associated to business units and localization.

Iterating across architecture layers

While those dependencies are defined within architecture layers, their impact on the organization of work units may have to be consolidated when projects are carried out across layers.

Schedules: Running for Time

Whether development cycles stay within or run across architecture layers, there will be no way to decide about deliveries and schedules at project inception. In other words dependencies will have to be sorted out and planning settled along the road.

Project planning means consolidating overlapping time-frames. That may not be a problem for projects set within single architecture layers (e.g migration), but it should definitely be taken into account when heterogeneous time-frames govern changes across architecture layers. With regard to changes managed at project level (endogenous events), the consolidation may be done within the project time-frame; but that will not be possible for changes occurring in non managed environments (exogenous events).

Those events are set in time-frames governed by their own rationale (e.g business, organization, or technology) that cannot be subsumed into the engineering timetable:

  • At enterprise level, time is set by business context. Both business objectives and business process solutions are meant to be decided with regard to actual (aka exogenous) business opportunities (a).
  • At system level, time is set by project planning (endogenous events). Given functional requirements (e.g users’ stories or use cases), architects and designers have to decide about the scheduling of systems functionalities and services, and the release of corresponding applications. Since those decisions are not directly exposed to exogenous events, they can be made according to engineering constraints and resources availability (b).
  • At platform level, time is set by business and operational objectives (endogenous events), and technical contexts (exogenous events). Assuming that risks are evenly set, the problem is to align endogenous (managed by project) with exogenous (anticipated from contexts) events: too early releases may preclude later but more useful ones; too late releases may hamper operations (c).

The texture of time differs across architecture layers (yellow for endogenous, red for exogenous)

Whereas those time-scales are to be synchronized, there is no reason they would be congruent. Hence the need for some mechanism, static (e.g milestones) or dynamic (e.g backlogs), supporting the scheduling of work units.

From Milestones to Backlogs

As considered elsewhere, phased models of development may offer some benefits when dependencies originate from different environments or involve different authorities; yet they may impose still bigger penalties when requirements cannot be fully settled upfront. Conversely, agile approaches are the option of choice when complex problem spaces and solution paths are to be explored and refined progressively; but may prove ineffective in dealing with business, organizational, or technical dependencies set from outside projects. That difference of perspective is reflected by the mechanisms used to manage dependencies: milestones or backlogs.

Sorting out dependencies means consolidating informational and engineering constraints across architecture layers. When problem spaces and solution paths cannot be managed under the same authority (heterogeneous dependencies) phased development models use milestones to consolidate expectations and commitments across layers and time-frames.

Otherwise (homogeneous dependencies) agile development models use backlogs to explore problem spaces and solution paths, whatever the architecture layer and nature of dependencies. With the benefit of shared ownership, business events are propagated all along development paths, governing backlogs of stories (business requirements), features (engineering constraints), and releases (operational requirements).

Backlogs can be used to consolidate organizational, functional, and technical dependencies.

Along that perspective backlogs and milestones can be seen as mirrored solutions of the same problem, namely how to deal with the functional and timing dimensions of dependencies: backlogs deal only with the functional dimension as they consider the dependencies between tasks without taking into account their actual timing; milestones look from the opposite direction, anchoring aggregate tasks to timetables before considering dependencies between items. Depending on the point of view:

  • Backlogs may appear as stacks of stones: the milestones are fragmented and the initial sequence translated into queues.
  • Milestones may appear as flattened backlogs: dependencies are frozen and stories are ironed out before being merged into heaps.

Milestones can be seen as “flattened” backlogs, or, alternatively, backlogs as stacked milestones

Hence, with regard to the ranking of dependencies, the difference between backlogs and milestones is of granularity, finer for the former, coarser for the latter. But with regard to execution, the difference is of nature: contrary to milestones, backlogs don’t fix any timing.
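
The contrast can be made concrete by comparing what each structure records (a sketch; names are illustrative):

```typescript
// Hypothetical sketch: backlogs record dependencies without timing,
// milestones record timing over frozen aggregates.
interface BacklogItem {
  id: string;
  dependsOn: string[]; // functional ranking only: no dates attached
  priority: number;    // re-negotiable at each iteration cycle
}

interface Milestone {
  date: Date;             // time ranking: anchored to a timetable
  deliverables: string[]; // aggregated tasks, dependencies frozen within
}
```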

That distinction between sequence (functional ranking) and schedule (time ranking) may be critical when iterative development has to be combined with heterogeneous dependencies.

Collaboration Levels

As noted above, the planning of work units must take into account three criteria:

  • The dependencies between tasks, engineering or informational, determine the sequencing independently of actual timing. They are derived from technical or organizational constraints.
  • The schedules anchor the start and completion of tasks to time-frames. Some time-frames are set within the enterprise (e.g resources availability), others are set from outside (e.g regulations or business opportunities).
  • Pace determines the elapsed time taken to complete a task. It may be seen as a metronome used to adjust the throughput depending on resources availability on one hand, quality requirements on the other hand.

Milestones and Backlogs Capabilities

At first sight, comparison shows backlogs to outdo milestones on all three accounts: dependencies are managed at finer granularity, which enables dynamic scheduling and releases management; last but not least, due to iteration cycles and time-boxing, development paces can be aligned with resources and quality requirements. Yet, that edge can be misleading if dependencies translate into unmanageable or multiple backlogs.

Collaboration is at the core of iteration cycles, and it can only be achieved through shared and dynamic management of backlogs. Assuming a set of stories, each new cycle has to be preceded by decisions regarding priorities, refinements, and responsibilities; and if informational dependencies have to be taken into account, these decisions must be taken collectively and directly:

  • Collectively: decisions about priorities and refinements must be negotiated between team members from both sides of the business/system (or users/developers) divide.
  • Directly: all decisions must be taken by the team itself, without any organizational mediation or external interference.

But those conditions could be thwarted if different projects have to develop stories with shared features implemented on different platforms.

Shared features and cross implementations could thwart collective and direct decision-making.

In that case consolidation mechanisms could bring back fixed scope/schedule configurations:

  • Different teams, each with its own specific users and developers concerns, will have to commit to agreed features and timetables.
  • By involving independent organizational units, those decisions will entail some procedural mediation carried out independently of iteration cycles.

Yet, that shouldn’t necessarily be the end of the stories, provided some mechanism could be found to synchronize iteration cycles. And that could be achieved with blackboards.

From Backlogs to Blackboards

Blackboards can be understood as shared backlogs stripped of their ranking mechanism so that items can be consulted and dealt with by different project teams. Alternatively, they can also be seen as timetables stripped of their scheduling mechanism. Either way, those views illustrate how blackboards may support shared dependencies without forcing them into time-frames:

  • Shared features are posted on a single blackboard where their status can be consulted and updated by the teams in charge of the stories concerned.
  • Development teams post their releases on blackboards according to targeted platforms.

Blackboards are used to manage shared dependencies and differentiated stories’ contents

As it happens, that approach also answers the recurring question of stories’ nature and granularity: with the benefit of blackboards, fine-grained stories can be indexed as business cases, use cases, or new releases and cohabit in backlogs.

Combining backlogs with blackboards provides a collaboration mechanism supporting external dependencies without impairing teams’ ownership of iteration cycles. Yet it is not a substitute for schedules, as it doesn’t deal with the alignment of cycles with enterprise time-frames.
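As a bare illustration (Python; Blackboard and its operations are invented names, not a reference design), a blackboard keeps shared features’ statuses available for consultation and update by different teams, without any queue or schedule attached:

```python
from collections import defaultdict

class Blackboard:
    """A shared backlog stripped of ranking: items are posted, consulted,
    and updated by different teams, but never queued or scheduled."""
    def __init__(self):
        self._items = {}                    # feature -> status
        self._watchers = defaultdict(set)   # feature -> teams concerned

    def post(self, feature, team, status="pending"):
        self._items[feature] = status
        self._watchers[feature].add(team)

    def update(self, feature, status):
        self._items[feature] = status

    def consult(self, team):
        # each team sees only the shared features it is concerned with
        return {f: s for f, s in self._items.items() if team in self._watchers[f]}

board = Blackboard()
board.post("shared feature X", team="team A")
board.post("shared feature X", team="team B")
board.update("shared feature X", "released")
print(board.consult("team B"))  # -> {'shared feature X': 'released'}
```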

Time boxes as Pacemakers

Milestones and backlogs are synchronization mechanisms. The former, combined with timetables, coordinate teams along time-spans; the latter, combined with time boxes, coordinate tasks between cycles. Hence, while both aim at the same objective, namely that everything doesn’t happen at once, their understanding of time is very different: timetables are bound to an external measure of time, time boxes are just arbitrarily fixed intervals that can be tethered to any time-frame.

As a consequence, time boxes can be applied at different levels. At system level they are associated with project backlogs and used to set the tempo of iteration cycles. At enterprise level they are associated with business objectives and anchored to strategic plans. Assuming those plans are described by epics, it would be possible to use different tempos depending on level (enterprise or systems) or even applications.

Time boxes can be combined with blackboards to synchronize tempos between enterprise level (T) and interrelated projects (T/x and T/y).

Ideally, differentiated tempos should foster an emerging harmony between melody (users’ stories) and accompaniment (supporting systems) converging into the fulfillment of strategic objectives. Practically, the alignment of releases with epics cannot be taken for granted.
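The tempo metaphor can be made concrete with a bare-bones sketch (Python; the function and the figures are purely illustrative): assuming project time boxes are set as divisors of the enterprise time box T, synchronization points are simply the ticks where all cycles coincide.

```python
def sync_points(enterprise_tempo, project_tempos, horizon):
    """Return the ticks (in days) at which project cycles (T/x, T/y, ...)
    line up with the enterprise time box (T)."""
    step = min(project_tempos)
    return [tick for tick in range(0, horizon + 1, step)
            if tick % enterprise_tempo == 0
            and all(tick % t == 0 for t in project_tempos)]

# enterprise time box T = 90 days; projects iterate every 30 (T/x) and 15 (T/y) days
print(sync_points(90, [30, 15], horizon=180))  # -> [0, 90, 180]
```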

Squaring the Circles: Project Planning

Agile approaches are based upon the dynamic exploration of problem spaces and the iterative development of solution paths, the objective being to maximize the value of functional requirements under the constraints of technical ones. Yet, with spaces and paths defined dynamically, standard exploration procedures like breadth- or depth-first traversal are useless because the ranking of paths is part of the solution, which means that priorities must be revised at the end of each cycle.

Nonetheless, explorations cannot be everlasting and there must be some end to the story, when stakeholders get what they expect (or accept what they get) and users can begin to reap the benefits. Fixed scope and schedules being ruled out, the problem is to align iterations outcomes (b) with business objectives (a). Solutions can be positioned between two archetypal approaches, one driven by business objectives, the other by development tasks.

How to align tasks (b) with objectives (a)

The first approach would see development teams take commitments with regard to broadly defined objectives: with problem spaces represented by trees progressively refined and explored, commitments can be made collectively on sets of solution paths within still undefined sub-trees (b1). The team will then take responsibility for the details of iteration cycles and will adjust its throughput so as to align releases with business objectives.

Alternatively, and provided a finer granularity can be obtained and managed, stories could be broken down into tasks for which commitments will be made individually by team members (b2). Assuming that the tasks’ workloads can be assessed, iterations could then be planned and releases scheduled on the basis of time box parameters.

Commitments can be made collectively with regard to objectives (b1) or individually with regard to tasks (b2).

Not surprisingly, the choice of a planning policy is to be conditioned by the granularity of work units:

  • Task based planning requires finer grained stories and comes with a phased flavor by reintroducing analysis. That may stretch the intervals between iterations and increase management overheads.
  • Objective based planning allows for coarser grained stories and is more in line with agile spirit. Yet that may increase the length of iterations and affect their transparency.

All things considered, some feedback loop may be needed when deciding on the size of stories because shorter ones do not necessarily decrease the time to market as they may generate bigger backlogs and exponential overheads in case of complex dependencies.

Balancing Pushes and Pulls: Lean and Just-in-Time Workflows

When push comes to shove project planning turns into conflict management. That may happen with phased development models as well as with agile ones. With the former that will be due to applications forcibly pulled out whatever their value for users and reliability in order to meet unrealistic expectations. With the latter that will be due to requirements forcibly pushed into backlogs disregarding their size and exponential complexity.

The way out of this dilemma is a feedback mechanism between project teams pushing releases and business stakeholders pulling applications. And that is the rationale behind the Kanban development model:

  • Visualize workflow: up-to-date expectations, constraints, commitments, and achievements must be clearly and selectively visible to all concerned; that can be done with backlogs and blackboards.
  • Limit work in progress: that can be obtained by regulating the selection of solution paths through limits on the size of backlogs and the fragmentation of users’ stories according to their architecture footprint.
  • Measure and manage flows: metrics and process design can be significantly improved if flows are differentiated depending on architecture layer (business, systems, platforms).
  • Make process policies explicit: that is already a cornerstone of agile backlog management; those principles should also apply to blackboards.
  • Use models to recognize improvement opportunities: while initially ignored, the benefits of models in agile development are progressively acknowledged. Those benefits have been illustrated in the first part of this article by the use of activity diagrams for the exploration of problem spaces and solution paths.

Applying those principles will bring about lean processes and just-in-time workflows, improving both users’ value and software quality.
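The “limit work in progress” principle in particular can be reduced to a few lines (a sketch in Python; the class and column names are arbitrary): a story enters a column only when capacity is available, so downstream congestion throttles upstream pushes.

```python
class KanbanBoard:
    """Pull-based flow: a story moves forward only if the target column
    has spare capacity under its WIP limit."""
    def __init__(self, limits):
        self.limits = limits                          # e.g {"doing": 2, "review": 1}
        self.columns = {name: [] for name in limits}

    def pull(self, story, column):
        if len(self.columns[column]) >= self.limits[column]:
            return False                              # WIP limit reached: wait, don't push
        self.columns[column].append(story)
        return True

board = KanbanBoard({"doing": 2, "review": 1})
assert board.pull("story A", "doing")
assert board.pull("story B", "doing")
assert not board.pull("story C", "doing")  # stays in the backlog until capacity frees up
```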

Further Reading

External Links

From Stories to Models

Objective

Assuming, for the sake of the argument, that programs are models of implementations, one may also argue that the main challenge of software engineering is to translate requirements into models. But, contrary to programs, nothing can be assumed about requirements apart from being stories told by whoever will need system support for his business process.

Telling Stories with Models

Along that reasoning, one may consider the capture and analysis of requirements under the light of two archetypal motifs of storytelling, the Tower of Babel and the Rashomon effect:

  • While stakeholders and users may express their requirements using their own dialects, supporting applications will have to be developed under the same roof. Hence the need for some lingua franca to communicate with their builders.
  • A shared language doesn’t necessarily mean common understandings; as requirements usually reflect local and time-dependent business opportunities and goals, they may relate to different, if not conflicting, aspects of contexts and concerns that will have to be consolidated, eventually.

From such viewpoints, the alignment of system models to business stories clearly depends on languages and narratives discrepancies.

Business to System Analyst: Your language or mine ?

Stories must be told before being written into models, and that distinction coincides with the one between spoken and written languages or, on a broader perspective, between direct (aka performed) and documented communication.

Direct communication (by voice, signs, or mime) is set by time and location and must convey contexts and concerns instantly; that’s what happens when requirements are first expressed by business analysts with regard to actual and specific goals.

Direct communication requires instant understanding

Written languages and documented communication introduce a mediation, enabling stories to be detached from their native here and now; that’s what happens with requirements when managed independently of their original contexts and concerns.

Documented communication makes room for mediation

The mediation introduced by documented requirements can support two different objectives:

  1. Elicitation: while direct communication calls for instant understanding through a common language, spoken or otherwise, written communication makes room for translation and clarification. As illustrated by Kanji characters, a single written language can support different spoken ones; that would open a communication channel between business and system analysts.
  2. Analysis: since understanding doesn’t mean agreement, mediation is often necessary in order to conciliate, arbitrate or consolidate requirements; for that purpose symbolic representations have to be introduced.

Depending on (1) the languages used to tell the stories and (2) the gamut of concerns behind them, the path from stories to models may be covered in a single step or will have to mark the two steps.

Context and Characters

Direct communication is rooted in actual contexts and points to identified agents, objects or phenomena. Telling a story will therefore begin by introducing characters and objects supposed to retain their identity all along; characters will also be imparted with behavioral capabilities and the concerns supposed to guide them.

Stories start with characters and concerns

With regard to business, stories should therefore be introduced by a role, an activity, and a goal.

  • Every story is supposed to be told from a specific point of view within the organization. That should be materialized by a leading role; and even if other participants are involved, the narrative should reflect this leading view.
  • If a story is to provide a one-lane bridge between past and future business practices, it must focus on a single activity whose contents can be initially overlooked.
  • Goals are meant to set specific stories within a broader enterprise perspective.

After being anchored to roles and goals, activities will have to be set within boundaries.

Casings and Splits

Once introduced between roles (Who) and goals (Why), activities must be circumscribed with regard to objects (What), actions (How), places (Where) and timing (When). For that purpose the best approach is to use Aristotle’s three unities for drama:

  1. Unity of action: story units must have one main thread of action introduced at the beginning. Subplots, if any, must return to the main plot after completion.
  2. Unity of place: story units must be located into a single physical space where all activities can be carried out without depending on the outcome of activities performed elsewhere.
  3. Unity of time: story units must be governed by a single clock under which all happenings can be organized sequentially.

Stories, especially when expressed vocally, should remain short and, if they have to be divided, splits should not cross units boundaries:

  • Action: splits are made to coincide with variants set by agents’ decisions or business rules.
  • Place: splits are made to coincide with variants in physical contexts.
  • Time: splits are made to coincide with variants in execution constraints.

When stories refer to systems, those constraints should become more specific and coincide with interaction units triggered by a single event from a leading actor.
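For illustration purposes only, that casing of stories could be rendered as follows (a Python sketch; StoryUnit and valid_split are hypothetical names): a story is anchored to a role and a goal, circumscribed by the three unities, and a split is accepted only when it leaves role, goal, place, and clock untouched.

```python
from dataclasses import dataclass

@dataclass
class StoryUnit:
    """A story anchored to a leading role and a goal, and circumscribed
    by the three unities (action, place, time)."""
    role: str     # who tells the story (leading role)
    goal: str     # why it matters within the enterprise perspective
    action: str   # single main thread of action
    place: str    # single physical space
    clock: str    # single governing time-frame

def valid_split(parent: StoryUnit, child: StoryUnit) -> bool:
    # splits may introduce action variants (decisions, business rules) but
    # must not cross place or time boundaries, nor change role or goal
    return (child.role, child.goal, child.place, child.clock) == \
           (parent.role, parent.goal, parent.place, parent.clock)

base = StoryUnit("clerk", "faster claims", "register claim", "front office", "business day")
variant = StoryUnit("clerk", "faster claims", "register disputed claim", "front office", "business day")
print(valid_split(base, variant))  # -> True: only the action variant differs
```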

Filling the blanks

If business contexts, objectives, and roles can be identified with straightforward semantics set at corporate level, meanings become more complex when stories are to be fleshed out with details defined by the different business units. That difficulty can be managed through iterative development that will add specifics to stories within the casing invariants:

  • Each story is developed within a single iteration whose invariants are defined by its action, place, and time-scale.
  • Development proceeds by increments whose semantics are defined within the scope set by invariants: operations relative to activities, features relative to objects, events relative to time-scales.

A story is fully documented (i.e an iteration is completed) when no more details can be added without breaking the three units rule or affecting its characters (role and goal) or the semantics of features (attributes and operations).

Iterations: a story is fully fleshed out when nothing can be changed without affecting characters’ features or their semantics.

From Documented Stories to Requirements

Stories must be written down before becoming requirements, further documented by text, model, or code:

  • Text-based documentation uses natural language, usually with hypertext extensions. When analysts are not familiar with modeling languages it is the default option for elicitation and the delivery of comprehensive, unambiguous and consistent requirements.
  • Models use dedicated languages targeting domains (specific) or systems (generic). They are a necessary option when requirements from different sources are to be consolidated before being developed into code.
  • Code (aka execution model) uses dedicated languages targeting execution environments. It is the option of choice when requirements are self-contained (i.e not contingent on external dependencies) and expressed with formal languages supporting automated translation.

Whatever their form (user stories, use cases, hypertext, etc), documented requirements must come out as a list of detached items with clearly defined dependencies. Depending on dependencies, requirements can be directly translated into design (or implementation) models or will have to be first consolidated into analysis models.

Telling Models from Stories

Putting aside deployment, development models can be regrouped into two categories:

  • Analysis models describe problems under scrutiny, the objective being to extract relevant aspects.
  • Design models (including programs) describe solutions artifacts.
Descriptions and specifications look from different perspectives

Seen from the perspective of requirements, the objective of models is therefore to organize the contents of business stories into relevant and useful information, in other words software engineering knowledge.

Following the principles set by Davis, Shrobe, and Szolovits for Knowledge Management (cf readings), such models should meet two groups of criteria, one with regard to communication, the other with regard to symbolic representation.

As already noted, models are introduced to support communication across organizational structures or intervals of time. That includes communication between business and systems analysts as well as development tools. Those aspects are supposed to be supported by development environments.

As for model contents, the ultimate objective is to describe the symbolic representations of the business objects and processes targeted by requirements:

  • Surrogates: models must describe the symbolic counterparts of actual objects, events and relationships.
  • Ontological commitments: models must provide sets of statements about the categories of things that may exist in the domain under consideration.
  • Fragmentary theory of intelligent reasoning: models must define what artifacts can do or what can be done with them.

The main challenge of analysis is therefore to map the space between requirements (concrete stories) and models (symbolic representations), and for that purpose traditional storytelling may offer some useful cues.

From Fictions to Functions

Just like storytellers use cliches and figures of speech to attach symbolic meanings to stories, analysts may use patterns to anchor business stories to systems models.

Cliches are mental constructs with meanings set in collective memory. With regard to requirements, the equivalent would be to anchor activities to primitive operations (e.g CRUD), and roles to functional stereotypes.

Archetypes can be used to anchor stories to shared understandings

While the role of cliches is to introduce basic items, figures of speech are used to extend and enrich their meanings through analogy or metonymy:

  • Analogy is used to identify features or behaviors shared by different stories. That will help to consolidate the description of business objects and activities and points to generalizations.
  • Metonymy is applied when meanings are set by context. That points to aggregate or composite objects or activities.

Primitives, stereotypes, generalization and composition can be employed to map requirements to functional patterns. Those will provide the building blocks of models and help to bridge the gap between business processes and system functionalities.
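By way of illustration, such anchoring could be sketched as a simple lookup (Python; the verb and stereotype tables are invented for the example): story verbs map to primitive operations, and unknown verbs are flagged for analysis rather than forced into a cliche.

```python
# Hypothetical anchoring of story verbs to CRUD primitives, and of
# participants to functional stereotypes (boundary/control/entity).
PRIMITIVES = {
    "register": "create", "record": "create",
    "consult": "read", "check": "read",
    "amend": "update", "settle": "update",
    "cancel": "delete",
}
STEREOTYPES = {"clerk": "boundary", "validation": "control", "claim": "entity"}

def anchor(story_verbs):
    """Map the verbs of a story to primitive operations; unknown verbs
    are flagged for analysis instead of being forced into a cliche."""
    return {verb: PRIMITIVES.get(verb, "to be analyzed") for verb in story_verbs}

print(anchor(["register", "check", "negotiate"]))
# -> {'register': 'create', 'check': 'read', 'negotiate': 'to be analyzed'}
```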

Further Reading

External Readings

UML & Users’ Concerns

Objective

Whereas UML has been brought to existence by very wise men under very propitious skies, the initial enthusiasm and first successes have never been transformed into wider acceptance and customary usage; subsequent updates and extensions didn’t help and may even have triggered some anticlimax. More than fifteen years after its launch, the utilization of UML remains limited, both in breadth (projects developed) and depth (features effectively used).  Moreover, the UML house is deeply divided and there isn’t much consensus among the few that use it comprehensively and consistently, principally to support domain specific languages (DSL).

The Divided House of UML (Anne Desmet)

Certainly, there must have been a wrong turn somewhere, possibly at the UML2 crossing when the OMG committee lost sight of users modeling needs and took the road to meta-models. Considering UML’s shrinking stamp and dwindling relevancy, that road appears more and more like a dead-end; but it may be still possible to get back on track and retrieve the Us of the UML: unified semantics for all and sundry users.

Where to Look

Whether on driving or back seats, respectively for model driven or agile methods, models are widely accepted as a necessary constituent of development processes. Nonetheless, and despite being the only official standard, UML’s standing appears to falter, to the point of already being seen as a cold case. As suggested by Ivar Jacobson (“The road ahead for UML“), one of its main drawbacks would be its lack of modularity with regard to users’ needs. If that flaw is to be fixed, the question is where to look: directly at language level, or at supporting mechanisms.

Given the broad consensus that surrounded the initial project, one should at first look for a sound and stable subset to be used as a backbone and fleshed out according to specific contexts, purposes, or users. As a matter of fact that is what stereotypes and profiles are meant to do, except that without a well-defined backbone of unambiguous constructs, the only possible outcomes are domain specific languages. So, one should first consider how the separation of concerns could be better supported by language constructs.

Language Constructs, Model, and Separation of Concerns

Separation of Concerns

Despite its roots in the Object Oriented paradigm, UML has demonstrated its adaptability to each and every method or domain. Unfortunately, being a Jack of all trades often means being a master of none, and the use of UML is clearly frustrated by its versatility; that translates either into shallow usage of ambiguous semantics, or into extensions targeting specific domains or technologies.

On the ground, three mechanisms can be used to make up for the lack of focus: stereotypes, views, and customization.

  • The UML stereotyping mechanism supports predefined constructs for problem (business objects and processes) or solution (system architecture and object design) spaces. Stereotypes can be grouped into profiles, e.g for specific business domains or technical architectures.
  • Views (or perspectives) organize access to models according to contents: logical, physical, conceptual, pragmatic, etc.
  • Tool customization organizes access to models according to users’ purposes and skills: analyst, architect, designer, developer, etc.

While those approaches have their benefits, they are set independently of language constructs, either as UML extensions (stereotypes and profiles), or defined from outside by development methodologies (views) or projects organization (customization). As a consequence, they have little or no effect on the simplicity or efficiency of UML; they may even add to confusion and complexity when overlapping stereotypes are introduced to support multiple taxonomies, e.g technical architectures and business domains.

Language Constructs (a), Stereotyped Model (b), Combined Views or Profiles (c)

That may point to a clear direction: given the potency of the stereotyping mechanism and its pivotal role in UML utilization, significant benefits could be achieved through a better integration into core language constructs, even if that entails some constraints or limitations. Two straight modifications should be considered:

  • Model layers: language constructs should be re-organized along architectural concerns for enterprise (business processes), system (functionalities), and platforms (components).
  • Stereotypes visibility: language constructs should support the distinction between local taxonomies and “unified” ones, the former set with limited scope and visibility, the latter meant to be applied across layers.

While both modifications can be carried out on their own, their benefits would be boosted if they were set within the broader MDA framework and supported by specific language constructs.
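The visibility modification lends itself to a toy rendering (Python; Stereotype and the layer names follow the article’s taxonomy, the rest is invented): “unified” stereotypes apply across layers, local ones remain confined to theirs.

```python
from dataclasses import dataclass

@dataclass
class Stereotype:
    name: str
    layer: str      # "enterprise", "system", or "platform"
    unified: bool   # True: visible across layers; False: local taxonomy

def visible(stereotype: Stereotype, layer: str) -> bool:
    # unified stereotypes apply across layers, local ones stay put
    return stereotype.unified or stereotype.layer == layer

persistent = Stereotype("persistent", "system", unified=True)
actuary = Stereotype("actuary", "enterprise", unified=False)
print(visible(persistent, "platform"))  # -> True
print(visible(actuary, "platform"))     # -> False
```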

Modular Language Constructs

Given the growing intricacy, ubiquity, and diversity of systems, UML complexity and versatility should clearly be in demand, and the problem is to harness those capabilities according to the needs and skills of the different kinds of users.

That’s arguably a critical flaw of UML, which lumps together essential with secondary constructs, as well as definite with ambivalent semantics. That brings weighty consequences, both for users and models:

  • Steep or even abrupt learning curve: confronted with a wall of mixed constructs, users have to master the whole upfront, whatever their needs and skills.
  • Blurred concerns: describing various specific contents with the same ambivalent constructs will either distort language semantics, or blur concerns specificities.
  • Corrupted transformation: whatever the modeling tools, the bad apples of inputs will usually corrupt the whole of outputs. In other words any advance in model driven development requires a sound backbone of unambiguous language constructs.

As noted above, language constructs can be regrouped along two perspectives, one directly associated with users’ architectural concerns, the other reflecting the scope and visibility of targeted artifacts. While there is no particular reason to match complexity levels with architectural concerns, mapping them to granularity has a clear rationale. Such a “born again” UML would distinguish between two levels of language constructs:

  1. Those pertaining to objects and activities identified by architectures, whatever their nature: enterprise, systems or platforms.
  2. Those used to describe internals of objects and activities independently of their aspects and behaviors at architecture level.

Model Transformation: lumped (b) vs differentiated (a) language constructs.

That re-configuration would bring modularity to the language, enabling a smooth learning curve. More importantly, a clear-cut separation of concerns would enable some kind of Just-In-Time model transformation: instead of cumulative noise (b), one would get separate transformations for models’ architectural backbone on one hand, contingent specificities on the other hand (a). And that could be a real game-changer for lean and fit models.

While that could be achieved by different means, a simple solution would be to use the stereotyping mechanism to describe supporting structures of enterprise, functional, and technical architectures.

Transformation vs Portability

Model transformation is about changing contents within the same environment, portability is about moving the same contents across different environments; and despite apparent similarities, they deal with different concerns, set by users for the former, by tools vendors for the latter.

Transformation is normally performed under a single corporate roof according to agreed semantics; as a corollary, it is meant to cover the full contents of models. That’s not the case for portability, whose primary objective is the exchange of consolidated contents between heterogeneous environments; while sources and targets may have to share the whole of their models, a sound policy should make room for selective portability of specific or confidential contents.

The Meta-Object Facility (MOF) is the solution of choice for portability. As a meta-language it is used to describe language constructs at source and target environments; mapping rules can then be defined and bridges built between environments. As it is, those bridges usually scale very poorly due to the exponential complexity of rules having to cover each and every model idiosyncrasy; and that’s unfortunate for portability which, instead of focused targets, has to deal with overweight models cluttered with useless contents (b).

Portability between modeling environments: Lumped (b) vs Differentiated (a) constructs.

That situation would be greatly improved (a) if the wheat of consolidated constituents could be separated from the chaff of ambiguous or irrelevant contents. On a broader perspective that would open the way to leaner and fitter models.
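A hedged sketch of such selective portability (Python; the element records and flags are invented for the example) would keep only consolidated, non-confidential contents pertaining to the targeted layers:

```python
def export_selectively(model, target_layers=("enterprise", "system")):
    """Keep only consolidated elements of the targeted layers; ambiguous
    or confidential contents stay home instead of cluttering the transfer."""
    return [e for e in model
            if e["consolidated"]
            and e["layer"] in target_layers
            and not e.get("confidential", False)]

model = [
    {"name": "Claim", "layer": "enterprise", "consolidated": True},
    {"name": "ClaimDAO", "layer": "platform", "consolidated": True},
    {"name": "PricingRule", "layer": "enterprise", "consolidated": True, "confidential": True},
    {"name": "draft note", "layer": "system", "consolidated": False},
]
print([e["name"] for e in export_selectively(model)])  # -> ['Claim']
```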

One step back may put UML back on track

There is something of a consensus among the software engineering community regarding (1) the benefits of models and (2) the failures of UML. As should be expected, that consensus translates into fragmented modeling practices and, more generally, software engineering methodologies. Obviously there isn’t much of a future for UML along that path, but the case is still open and the trend can be reversed by putting users’ needs back in the UML driving seat.

Further Reading

External Links

Models in Driving Seats

Objective

Model driven software engineering (aka MDA, aka MBSE, aka MDE) has had a great future for quite some time, yet there isn’t much consensus about what that could be and, in particular, about what kind of models should be in the driving seat.


Pending some agreement about model contents (e.g specific ?) and capabilities (e.g executable ?), the driving of software engineering processes will probably remain governed more by practices than by principles.

Shadowing Reality

Models are shadows of reality, with their form and contents set by contexts and concerns. They can be characterized by their purpose and capabilities.

Regarding purpose, models fall in two groups: descriptive models deal with problems at hand, prescriptive models with solutions.

  • Descriptive models are partial and biased representations of actual contexts. Partial because they only deal with relevant objects, activities and features; biased because the selection is made on purpose.
  • Prescriptive models are complete descriptions of artifacts.

Regarding capabilities the distinction is between intensional and extensional languages:

  • Extensional (aka denotative) languages deal with sets of identified instances of objects and activities. As they condone partial or ambiguous statements, they are best used for descriptive models.
  • Intensional (aka connotative) languages deal with the semantics and constraints of symbolic representations. Due to their formal capabilities they are best used for prescriptive models.

Along that reasoning, System Models can be characterized along architecture contexts: business processes (enterprise), functionalities (systems), and platform implementations (technologies):

  • Business models are descriptive and built with extensional languages (business is often said to bloom on discrepancies). They are necessarily partial as they target specific contexts and concerns.
  • Functional (aka analysis) models are prescriptive and built with intensional languages as they must specify the semantics and constraints of symbolic representations. Yet they are not necessarily complete since they don’t have to cover every detail of business processes or implementations (cf traceability).
  • Implementation models (aka design) are prescriptive with no use for extensional capabilities since the relevant physical objects, i.e extensions, are system components directly derived from system specifications. However, they must support complete and formal descriptions of component features.

Business object, analysis and design, implementation.

Models don’t Talk Alone

Models are built with logographic languages that should not be confused with phonetic ones: whereas the latter convey information sequentially, the former build semantics from different sources, which enables models to be read from different perspectives. Contrary to programs, whose semantics are bound to a fixed sequential execution, models don’t talk alone, but must be chatted to. Even when their readings are sequential, the sequences are governed by readers, not by models.

That point is pivotal if model transformation, arguably a pillar of model driven development, is to be supported along different perspectives and according to different concerns.

Besides, it must be noted that while models can be fully translated into (and reversed from) sequential representations (e.g with XMI), transcripts are just projections and should not be confused with models as such.

Models don’t Walk Alone

Like talks, walks are sequential as they advance step by step. Hence, while some models can be walked (aka executed), such walks are only instances of behaviors supported by models.

That should be especially clear for system models which describe architectures combining agents and devices as well as software components, contrary to programs which are limited to software components structure and sequential (or parallel) execution.

Just like XMI transcripts should not be confused with original models, “executable” models should not be confused with fully fledged ones because they simulate the behaviors of agents and devices as if they were software components. While that may be useful for models targeting software components within a given architecture,  ignoring the specificities of software, agents, and devices would be pointless when the objective is to design a system architecture.

Abstraction Levels

Defining models as abstractions may be correct but not very helpful when deciding what kind of abstraction should drive development processes. First, the question is to define how concerns and purposes should be used to define abstractions, in other words to set apart significant from non-significant information. Then, in order to avoid flights for always higher abstractions without burying models into ground details, some principles are needed regarding specialization and generalization.

When systems are considered, abstraction levels are set by enterprise organization, systems functionalities, and platform technologies, with concerns expressed by business, functional, or technical requirements. Given a hierarchy of concerns, models must be set at the proper level of abstraction: below that level part of information will be redundant or irrelevant; above, useful features and classifications may be overlooked as some information will be either wanting or will not discriminate between variants.

Such levels can be identified by selectively applying specialization and generalization to constrained hierarchies:

  1. Inheritance should not cross model layers: hierarchies of business, functional and technical artifacts must be kept separate.
  2. Structures and behaviors pertain to different concerns: abstractions of objects and aspects should be managed independently.
  3. Specialization should be applied when subset of identified objects or behavior are associated with specific features. Generalization should be introduced when models must be consolidated.

Such an approach brings significant benefits for reuse, arguably one of the main objectives of abstraction. And that appears clearly when developments are governed by models, and architecture components and mechanisms are to be shared across model layers.

Reuse of business and functional models along functional tiers, with abstractions for structures (green) and aspects (orange).
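The first rule above lends itself to a simple check; the sketch below (Python; classifier names and layer assignments are invented) flags inheritance links that cross model layers.

```python
# Hypothetical layer assignments for a handful of classifiers.
LAYER = {"Party": "business", "Customer": "business",
         "CustomerRecord": "functional", "CustomerTable": "technical"}

def cross_layer_links(hierarchy):
    """Return the (child, parent) inheritance links crossing model layers,
    i.e the ones breaking the first constraint listed above."""
    return [(child, parent) for child, parent in hierarchy
            if LAYER[child] != LAYER[parent]]

hierarchy = [("Customer", "Party"), ("CustomerRecord", "Customer")]
print(cross_layer_links(hierarchy))  # -> [('CustomerRecord', 'Customer')]
```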

Driving Models and Roadmaps

Systems engineering has to meet three kinds of requirements: business needs, system functionalities, and components implementation. In a perfect world there would be one stakeholder, one architecture, and one time-scale. Unfortunately, projects may involve different stakeholders, target different architectures, and be set along different time-frames. In that case milestones and roadmaps are to be introduced in order to bring all expectations and commitments under a common roof, with models providing the glue between them. That may be achieved with models defined along MDA layers:

  • Computation Independent Models (CIMs) describe business objects and processes independently of supporting systems.
  • Platform Independent Models (PIMs) describe how business processes are supported by systems seen as functional black boxes, i.e disregarding the constraints associated with candidate technologies.
  • Platform Specific Models (PSMs) describe system components as implemented by specific technologies.
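As a minimal rendering of those layers (Python; the naming conventions are invented, not mandated by MDA), each transformation adds its own concerns to model elements without overwriting those of the previous layer:

```python
# A bare-bones MDA chain: CIM -> PIM -> PSM, each step adding concerns.
def to_pim(cim_element):
    # functional view: expose the business object through an interface
    return {**cim_element, "interface": f"I{cim_element['name']}"}

def to_psm(pim_element, platform="JEE"):
    # technical view: bind the functional element to a platform component
    return {**pim_element, "component": f"{pim_element['name']}_{platform}"}

cim = {"name": "Claim", "process": "claims handling"}  # business view only
print(to_psm(to_pim(cim)))
# -> {'name': 'Claim', 'process': 'claims handling',
#     'interface': 'IClaim', 'component': 'Claim_JEE'}
```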

From Models to Roadmaps

Interestingly, both extremities of development processes are textual descriptions, informal at the beginning, formal at the end, with models providing bridges in between. As noted above, those bridges are not always necessary: texts can be directly translated into instructions (as illustrated by voice command devices), or project teams with shared ownership can develop programs according users’ requirements (as promoted by Agile methods). Yet, the question should always be asked, and when textual requirements cannot be directly developed into programs the first task should be to draw the shortest modeling path.

Roadmaps and Meta-models

Model driven tools may appear under different guises, yet most rely on meta-models and the Meta Object Facility (MOF). Given that meta-models are models of models, they are supposed to focus on a subset of relevant features selected on purpose which, for driving models, would be some kind of road signs governing model transformations. What could that be? Two approaches are to be considered:

  • Language translation: as presented by the report of the Dagstuhl Seminar, meta-models can be designed according to their generic transformation capabilities and used to single out language constructs in order to transfer model contents into another language.
  • Separation of concerns: as development advances and models take into account different concerns, meta-models can be used to monitor and control the selective processing of corresponding contents. That could be achieved if transformations were governed by traffic signs singling out relevant features and waving aside the leftovers.

Each option points to a different perspective, the former targeting tools providers, the latter aiming at modellers. Whereas MDA layers (for business, functionalities, and technology) clearly point to models built with the Unified Modeling Language according organizational, functional and technical concerns, most of current implementations follow the language option; while those tools may be (theoretically) open at technical level, they usually rely on domain specific languages. By narrowing functional scopes and relying on automated translation to bridge the gaps, solutions based upon domain specific languages can only provide local solutions to fragmented problems. That road could be a bumpy one for model driven engineering.

Postscript

Thinking again, the “MDA” moniker can be misleading as it may blur the distinction between models and their contents: given that MDA model layers effectively correspond to architecture levels, the pivotal MDA contention is that the modeling of systems is meant to be driven by enterprise architecture divides.

Further Reading

The Economics of Reuse

Objectives

Reusing artifacts means using them in contexts that are different from their native ones. That may come by design, when specifications can anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on.

Economics of Reuse (Sergio Silva)

Planned or opportunistic, reuse brings benefits in terms of costs, quality, and continuity:

  • Cost benefits are most easily achieved for component engineering but may also be obtained upstream with model reuse and patterns.
  • Quality benefits are first and foremost rooted at model level, especially when components implementation is supported by automated tools.
  • Continuity benefits are to be found both along the business (semantics and business rules) and engineering (functional architecture and platform implementations) perspectives.

Reuse policies may also bring positive externalities by inducing a comprehensive approach to software design.

Yet, those policies will usually entail costs and may as well bear negative externalities:

  • Artifacts designed for reuse are usually more costly to develop, even if part of the additional costs should be ascribed to quality management.
  • Excessive enforcement policies may significantly hamper projects’ ability to meet business needs in time.
  • Managing reusable assets usually induces overheads.

In order to assess those policies, economics of reuse must be set across business, engineering or architecture perspectives:

  • Business perspective: how to factor out and reuse artifacts associated with the knowledge of business domains when system functionalities or platforms are modified.
  • Engineering perspective: how to reuse development artifacts when business contents or system platforms are modified.
  • Architecture perspective: how to use system components independently of changes affecting business contents or development artifacts.

That can be achieved by managing development assets along model driven architecture: CIMs for business and enterprise architecture, PIMs for systems functional architecture, and PSMs for systems technical architecture.

Contexts & Concerns

Whatever their inception, reused artifacts are meant to be used independently of their native context and concerns: opportunistic reuse will map a specific purpose to another one, planned reuse will map a shared concern to a specific purpose. As a corollary, reuse policies must be supported by some traceability mechanism linking concerns and purposes across contexts and architectures.

From the enterprise perspective, the problem is to reuse the knowledge of business domains and processes. For that purpose different mechanisms can be considered:

  • The simplest solution is to reuse generic components, with the business knowledge directly transferred through parameters.
  • Similarly, business rules can be separately edited and managed in business contexts before being executed by system components.
  • One step further, business semantics and rules can be fenced off with domains and applied to different objects and applications.
  • Finally, models of business objects and processes can be capitalized and managed as reusable assets.
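The first mechanism of that list is easily illustrated (a minimal sketch in Python, with invented names): the component stays fully generic while the business knowledge arrives through parameters and separately managed rules.

```python
def apply_rate(amount, rate):
    """Generic component: no business knowledge is hard-coded here."""
    return amount * rate

# Business rules edited and managed in business contexts, not in code.
BUSINESS_RULES = {"standard": 0.05, "premium": 0.08}

def price(amount, segment):
    # knowledge is injected at run time through parameters
    return apply_rate(amount, BUSINESS_RULES[segment])

print(price(1000, "premium"))  # -> 80.0
```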

Reuse Policies with Model Driven Architecture

Once business knowledge is duly capitalized as functional assets, those assets can be reused along the engineering perspective:

  • System functionalities: functional patterns are (re)used to map functional requirements to functional architecture, and services are (re)used to support business processes.
  • Software architecture: Object and aspect oriented designs, using inheritance and polymorphism.
  • Software implementations: Component-based development and information hiding.

Along the architecture perspective information hiding is generalized to systems, and reuse is masked by the definition of services.

It must be noted that, contrary to misleading similarities, refactoring is the opposite of reuse: instead of building from well understood and safe artifacts, it tries to extract some reusable chunks from opaque and unsafe components.

Knowledge Reuse

Enterprise and business knowledge may affect the full scope of system functionalities: boundaries (e.g users authorizations), controls (e.g accounting rules), and persistency (e.g consistency constraints). Whereas there isn’t much to argue about the benefits of reusing enterprise and business knowledge, costs may significantly diverge depending on the way corresponding assets are managed:

  • Domain specific knowledge is rooted at requirements level. That’s typically achieved when use cases are introduced to describe how systems are meant to support business processes. With different use cases targeting different aspects of the same business objects and processes, overlaps must be identified and factored out in order to be reused across processes.
  • Business knowledge may also be global, i.e shared at enterprise architecture level, defining objectives, assets and organization associated with the continuity of corporate identity and business capabilities within a regulatory and market environment.

Use cases access to respectively shared (a,b) and specific (c,d) knowledge for objects and application domains

In any case, the challenge is to map business knowledge to system models, more precisely to embed reused descriptions of business objects and process to corresponding development artifacts. At architecture level the mapping should target objects or processes identities, at domain level the focus will be on aspects and views.

As epitomized by service oriented architectures, business architectures can be mapped to system ones through delegation, either directly (business processes calling on services), or indirectly (collaboration between services). That will establish a clear distinction between shared (aka global) and domain specific knowledge, and consequently between respective economics of reuse.

  • Given that shared knowledge must be reused across domains and applications, there can be no argument about benefits. That will be achieved by a messaging model built on a need-to-know basis. And since such a model is an intrinsic feature of the functional architecture, it incurs no additional overheads.
  • Specific knowledge for its part is managed at domain level and therefore masked behind services interfaces. Whatever reuse occurs there remains local and an intrinsic part of domain design.

Knowledge Reuse through services (a), boundaries (b), controls (c), and entities (d).

Things are not so clear when business knowledge is not managed by services but distributed across domains, mixing specific and global knowledge. Managing reusable assets would be easy were the distinction between business and functional requirements to coincide with the one between shared and specific knowledge; unfortunately that’s seldom the case, and requirements, functional or business, will have to be sorted out at architecture or design level.

Whereas Service Oriented Architectures (SOA) put functionalities in the driving seat, Enterprise Application Integration (EAI) gives the lead to applications for which it provides adapters. As maintenance of integration adapters is a very poor substitute for knowledge management, reuse is mostly limited to legacy applications.

Maintaining adapters across layers of applications induces significant overheads

At design level knowledge is woven into canonical data models (entities), functional architectures (controls), and user interfaces frameworks (boundaries).

Mixed concerns: business requirements can be specific, functional ones can be global.

On one hand, tracing reuse to requirements may be problematic as they are by nature concrete and unstructured, hence not the best support for generalization or the factoring out of shared features. Assuming business analysts can nonetheless separate reusable wheat from specific chaff, knowledge management at this level will require a dedicated framework supporting comprehensive and differentiated traceability. Additional overheads will have to be taken into account and compared to potential benefits.

On the other hand, canonical data models and functional architectures are meant to provide unified views of shared objects and semantic domains. Yet, canonical models are by nature unwieldy as they carve structures, features, and connections of business objects, without clear mechanisms to combine shared and specific knowledge. As a corollary, their use may reduce flexibility, and their management usually induces significant overheads.

Artifacts Reuse

With enterprise and business knowledge capitalized as development assets, the engineering case for reuse may appear indisputable, but business cases are often much more controversial due to large overheads and fleeting returns. Taking cues from Barry Boehm (“Managing Software Productivity and Reuse”), here are some of the main pitfalls of artifacts reuse:

  • Repository delusion: knowledge being by nature contextual, its reuse is driven by circumstances and purposes; as a consequence the availability of large repositories of development assets will probably be ignored without clear pointers rooted in contexts and concerns.
  • Confusion between components (or structures) and functionalities (or interfaces): under the influence of the object oriented paradigm, the distinction between objects and aspects is all too often forgotten. That’s unfortunate as this difference is congruent with the one between business objects on one hand, business operations on the other hand.
  • Over generalization: reuse is usually achieved by factoring out useful aspects or factoring off useless ones. In both cases the temptation is to repeat the operation until nothing can be added to the scope. Such a “flight for abstraction” will inevitably overtake the proper level of reuse and beget models void of any anchor to business relevancy.
  • Scalability: while reuse is about separation of concerns and complexity management, those two criteria don’t have to pull in the same direction. When they don’t, variants will be dispersed across artifacts and their processing will suffer a combinatorial explosion if the system has to be scaled up.
  • Obsolescence: shelf lives of development assets can be set by business relevancy, technical relevancy, or both. Whether those spans coincide or are managed independently, they should be explicitly taken into account before any reuse.

Those obstacles can be managed provided models sort out reuse concerns, as illustrated below with differentiated inheritance.

Sorting out reuse concerns with differentiated inheritance.

Economics of Reuse and Sustainable Development

Sustainable system development is the ability to meet present business requirements while enhancing system capability to support future ones. Clearly reuse is not the only factor of sustainability, with architectures, returns, and risk management being pivotal. But the economics of reuse encompass most of other factors.

Architectures are clearly first to be considered, as epitomized by MDA:

  • Reuse of development assets rooted in enterprise architecture is not an option: system functionalities are meant to support business processes as they are (a).
  • At the other end of the development process, reuse of software designs and components across technical architectures should bring benefits in quality and costs (c).
  • In between reuse of system functionalities is necessary to guarantee the robustness and continuity of functional architectures; it should also leverage the benefits of reuse of enterprise and development assets (b).

Reuse of models is at the core of the MDA framework

Regarding returns, reuse through generic components, rules engines, or semantic domains can be directly supported by development tools, bypassing explicit models of functional architectures. That makes their costs/benefits analysis both simpler and well circumscribed. That is not the case for system functionalities which stand at the hub of perspectives. As a consequence, costs/benefits should be analyzed as a whole:

  • Regarding business assets, a clear distinction must be maintained between specific and shared knowledge, reuse being considered for the latter only.
  • Regarding the reuse of business assets as functional ones, services clearly offer the best returns. Otherwise costs/benefits are to be assessed, from reuse of vocabulary and semantics domains (straightforward, limited overheads), to canonical models and enterprise application integration (contingent, significant overheads).
  • Economics of reuse will ultimately be set by traceability mechanisms linking enterprise and business knowledge on one hand, components designs on the other hand. Even for services (c), if to a lesser degree, the business case for reuse will be decided by leveraged benefits and non-cumulative costs. Hence the importance of maintaining the distinction between identified structures and associated aspects from business (a) and functional (b) requirements, to components interfaces (d) and structures (e).

Maintaining the distinction between structures and aspects from business (a) and functional (b) requirements, to components interfaces (d) and structures (e).

Finally, reuse may also play a significant role in risk management, especially when risks are managed according to their source:

  • Changes in business contexts usually occur along two frequency waves: short, for market opportunities, and long, for an organization’s continuity. Associated risks could be better managed if corresponding knowledge were managed accordingly.
  • System architectures are meant to evolve in synch with organization continuity; were technological environments or corporate structures subject to unexpected changes, reusable functional assets would be of great help.
  • Given that enterprise IT can no longer be self-contained and operate in isolation, reusable designs may provide buffers to technological risks and help exploiting unexpected business opportunities.

Further Reading

Requirements Metrics Matter

Objectives

Contrary to some unfortunate misconceptions, measurements are as much about collaboration and trust as they are about rewards or sanctions. By shedding light on objectives and pitfalls they prevent prejudiced assumptions and defensive behaviors; by setting explicit quality and acceptance criteria they help to align respective expectations and commitments.

How to put requirements into equations (Lawrence Weiner)

Since projects begin with requirements, decisions about targeted functionalities and resources commitments are necessarily based upon estimations made at inception. Yet at such an early stage very little is known about the size and complexity of the components to be developed. Nonetheless, quality planning all along the engineering process is to be seriously undermined if not grounded in requirements as expressed by stakeholders and users.

Business vs Functional Complexity

Contracts without transparency are worthless, and the first objective is therefore to track down complexity across enterprise architecture and business processes. For that purpose one should distinguish between business and application domains, business processes, and use cases, which describe how system functionalities support business processes.

Processes are defined on domains, system functionalities are set by use cases

Based upon intrinsic metrics computed at domain level, functional metrics are introduced to measure system functionalities supporting business processes and compare alternatives solutions. Finally requirements metrics should also provide a yardstick to assess development models.

Business Requirements Metrics

The first step is to assess the intrinsic size and complexity of business domains and processes independently of system functionalities, and that can be done according to symbolic representations:

  • The footprint includes artifacts for symbolic objects and activities, as well as partitions (objects classifications or activities variants). Symbolic objects and activities are qualified as primary or secondary depending on their identification, and the reliability of the footprint is weighted by the explicit qualification of artifacts as primary or secondary.
  • Symbolic representations for objects and activities are associated with features (attributes or operations) defined within semantic domains.

Apart from absolute measurements, a number of basic ratios can be computed for anchors (primary objects and activities) and associated partitions.

A robust appraisal of the completeness and maturity of requirements can be derived from:

  • Percentage of artifacts with undefined identification mechanism.
  • Percentage of partitions with undefined characteristics (exclusivity, changeability).

The organization and structure of requirements can be estimated by:

  • Average number of artifacts and partitions by domain (a).
  • Total number of secondary objects and activities relative to primary ones.
  • Average and maximum depth of secondary identification.
  • Total number of primary activities relative to primary objects.
  • Total number of features (attributes and operations) relative to number of artifacts.
  • Ratio of local features (defined at artifact level) relative to shared (defined at domain level) ones.

Finally, intrinsic complexity can be objectively assessed using partitions:

  • Total number of activity variants relative to object classifications.
  • Total number of exclusive partitions relative to primary artifacts, respectively for objects and activities.
  • Percentage of activity variants combined with object classifications.
  • Average and maximum depth of cross partitions.

It must be noted that while those ratios do not depend on any modeling method, they can nonetheless be used to assess requirements or refactor them according to specific methods, patterns, or practices.
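The ratios above lend themselves to straightforward computation. The following is a minimal Python sketch, assuming counts have already been extracted from a requirements inventory; the DomainCounts fields and figures are illustrative, not part of any standard metamodel:

    from dataclasses import dataclass

    # Hypothetical counts for one business domain (illustrative names only).
    @dataclass
    class DomainCounts:
        artifacts: int               # symbolic objects and activities
        unqualified_artifacts: int   # identification left undefined
        partitions: int
        unqualified_partitions: int  # exclusivity/changeability undefined
        primary_objects: int
        primary_activities: int
        secondary_artifacts: int
        features_local: int          # features defined at artifact level
        features_shared: int         # features defined at domain level

    def completeness_ratios(c: DomainCounts) -> dict:
        """Percentages signalling incomplete or immature requirements."""
        return {
            "undefined_identification_pct": 100 * c.unqualified_artifacts / c.artifacts,
            "undefined_partitions_pct": 100 * c.unqualified_partitions / c.partitions,
        }

    def structure_ratios(c: DomainCounts) -> dict:
        """Organization and structure ratios computed on anchors."""
        primaries = c.primary_objects + c.primary_activities
        return {
            "secondary_per_primary": c.secondary_artifacts / primaries,
            "activities_per_object": c.primary_activities / c.primary_objects,
            "features_per_artifact": (c.features_local + c.features_shared) / c.artifacts,
            "local_vs_shared_features": c.features_local / c.features_shared,
        }

For instance, completeness_ratios(DomainCounts(120, 12, 30, 3, 25, 35, 60, 300, 100)) would flag 10% of artifacts with undefined identification, a figure that can then be tracked across successive requirements baselines.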

Functional Requirements Metrics

Functional metrics target the support that systems are meant to bring to business processes:

  • The footprint of supporting functionalities is marked out by roles to be supported, active physical objects to be interfaced, events to be managed, and processes to be executed. As for symbolic representations, corresponding artifacts are to be qualified as primary or secondary depending on their identification, with the accuracy and reliability of metrics weighted by the completeness of those qualifications.
  • Functional artifacts (objects, processes, events, and roles) are associated with anchors and features (attributes or operations) defined by business requirements.

From business to functional requirements metrics

Given that use cases are meant to focus on interactions between systems and contexts, they should provide the best basis for functional metrics:

  • Interactions with users, identified by primary roles and weighted by activities and flows (a).
  • Access to business (aka persistent) objects, weighted by complexity and features (b).
  • Control of execution, weighted by variants and couplings (c).
  • Processing of objects, weighted by variants and features (d).
  • Processing of actual (analog) events, weighted by features (e).
  • Processing of physical objects, weighted by features (f).

Additional adjustments are to be considered for distributed locations and synchronous execution.
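By way of illustration, a use case’s functional size could be computed as a weighted sum over the six kinds of footprint elements listed above. The weights and adjustment factors below are purely hypothetical placeholders, to be calibrated per organization:

    # Illustrative weights for the footprint elements (a)-(f) above.
    WEIGHTS = {
        "user_interaction": 4,  # (a) roles, activities and flows
        "business_object": 3,   # (b) persistent objects, complexity and features
        "control": 5,           # (c) execution variants and couplings
        "processing": 4,        # (d) object processing, variants and features
        "event": 6,             # (e) actual (analog) events
        "physical_object": 7,   # (f) physical objects to be interfaced
    }

    def functional_size(counts: dict, distributed: bool = False,
                        synchronous: bool = False) -> float:
        """Weighted sum over a use case footprint, with the adjustments
        mentioned above for distribution and synchronous execution."""
        size = sum(WEIGHTS[kind] * n for kind, n in counts.items())
        if distributed:
            size *= 1.1   # illustrative adjustment factor
        if synchronous:
            size *= 1.15  # illustrative adjustment factor
        return size

    # Example: two roles, three persistent objects, one external event.
    print(functional_size({"user_interaction": 2, "business_object": 3, "event": 1}))

The point is not the particular figures but the mechanism: once footprints are qualified, sizes can be recomputed automatically whenever requirements change.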

Use case metrics can also provide a basis for quality checks, as sketched below:

  • Average and maximum number of root use cases relative to business and application domains.
  • Average and maximum number of root use cases relative to primary activities.
  • Average and maximum number of secondary use cases relative to root ones.
  • Average number of primary (identified) and secondary roles relative to root use cases.
  • Average number of primary (aka external) and secondary (issued by use cases) events relative to root use cases.
  • Average and maximum number of primary objects relative to use cases.
  • The number of variants supported by a use case relative to the total associated with its footprint, respectively for persistent and transient objects.
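Most of these checks reduce to averages and maxima over an inventory; here is a minimal sketch, with a hypothetical inventory of root use cases per domain:

    from statistics import mean

    # Hypothetical inventory: root use cases attached to each business or
    # application domain (names and counts are illustrative only).
    root_use_cases = {"billing": 4, "ordering": 7, "logistics": 3}

    counts = list(root_use_cases.values())
    print("average per domain:", round(mean(counts), 2), "- maximum:", max(counts))

The same pattern applies to the other ratios, substituting roles, events, or primary objects for domains.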

Requirements Metrics, Contracts, and Acceptance Criteria

As noted above, requirements metrics are a means to a dual end, namely to set a price on developments and to define corresponding acceptance criteria. Moreover, as far as business is concerned, change is the rule of the game. Hence, while many projects can be set on detailed and stable requirements, projects with overlapping concerns and outlying horizons will usually entail changing contexts and priorities. The former can be contracted out for fixed prices, but the latter should clearly allow some room for manoeuvre, and requirements metrics can help to manage that room.

  1. Assuming that projects are initiated from clearly identified business domains or processes, requirements capture should be set against a preliminary wire-frame referencing new or existing pivotal elements of business (1) and system (2) contexts. Those wire-frames (aka anchors or backbones) will be used both for circumscribing envelopes and for managing changes.
  2. Envelopes should be defined along architecture layers: enterprise for business objectives (1), functional for supporting systems (2), and technical for deployment platforms (3). That sets the outer limits of budgets for given architectural contexts.
  3. Moves within rooms (4) are governed by functional requirements and anchored to pivotal business objects or processes (wire-frame nodes). Their estimation should take into account analogies with previous developments.
  4. Acceptance criteria could then be defined both for invariants (changes are not to overstep architectural constraints) and for increments (added functionalities are properly implemented), as sketched after the figure below.

Pricing the loops
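A minimal sketch of the envelope scheme, assuming budgets are expressed in whatever unit the requirements metrics provide; the Envelope class and its figures are illustrative only:

    from dataclasses import dataclass

    @dataclass
    class Envelope:
        layer: str            # "enterprise", "functional", or "technical"
        outer_limit: float    # budget ceiling for the architectural context
        committed: float = 0.0

        def book_move(self, estimate: float) -> bool:
            """Accept a move within the room if it stays within the envelope."""
            if self.committed + estimate > self.outer_limit:
                return False  # would overstep the architectural constraint
            self.committed += estimate
            return True

    functional = Envelope("functional", outer_limit=100.0)
    assert functional.book_move(40.0)
    assert functional.book_move(55.0)
    assert not functional.book_move(10.0)  # rejected: exceeds the outer limit

Invariants (the outer limits) are checked mechanically, while increments are priced and booked one move at a time.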

Requirements Metrics and Agile

While agile development models are meant to be driven by business value, project assessment essentially relies on informed guesses about user stories. That leaves two questions unanswered:

  • Given that the business value of a story is often defined at process level, how is it to be aligned with its local assessment by the project team?
  • If problem spaces and solution paths are to be explored iteratively, how are the stories in the backlog to be reassessed dynamically?

Both questions are easier to answer if requirements are qualified with regard to their nature and footprint.

Informed decisions about what to consider and when clearly depend on the nature of the stakes. Hence the importance of a reasoned requirements taxonomy setting apart business requirements, system functionalities, quality of service, and technical constraints on platform implementations.

Dynamic assessment and ranking of backlog elements require some hierarchy and modularity:

  • Architectural options, functional or technical, must be weighted by the business features and quality of service they support.
  • Dynamic ranking of user stories implies that their value can be mapped to feature metrics at the corresponding level of granularity.

That can be achieved with functional features assessed along architecture layers, as sketched after the figure below.

Requirements should be assessed with regard to their nature and their footprint in the functional architecture.
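A minimal sketch of such dynamic ranking, assuming each story has already been mapped to feature metrics; all names and figures are illustrative:

    # Stories mapped to feature value and functional size (hypothetical data).
    stories = [
        {"id": "S1", "feature_value": 80, "functional_size": 20},
        {"id": "S2", "feature_value": 45, "functional_size": 5},
        {"id": "S3", "feature_value": 60, "functional_size": 30},
    ]

    # Rank by value density; reassessment is a mere re-sort whenever the
    # feature metrics behind a story are revised.
    backlog = sorted(stories,
                     key=lambda s: s["feature_value"] / s["functional_size"],
                     reverse=True)
    print([s["id"] for s in backlog])  # ['S2', 'S1', 'S3']

Because value and size are carried by features rather than by the stories themselves, a change at architecture level propagates to every story mapped to the affected features.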

Requirements Metrics and Quality Planning

Finally, requirements metrics may also be used to design quality checks for downstream models. With metrics for the intrinsic characteristics of business domains on one hand, and system functionalities on the other, subsequent models can be assessed for consistency and alternative solutions compared.

Given unified OO modeling concepts, metrics can be computed on analysis and design models and set against their counterparts for the original requirements. Examples of standard metrics for UML models (illustrated by the sketch after the list) include:

  • The number of root packages in models is to be compared to the number of symbolic containers for business and application domains, and the number of sub-packages to that of primary (#) objects or activities.
  • The number of classes in models and packages is to be set against the number of symbolic objects and activities.
  • The number of actors is to be set against the number of roles, with primary (#) roles standing for identified agents.
  • The number of root use cases is supposed to coincide with primary (#) activities and roles.
  • The number of structural inheritance hierarchies should be compatible with the number of frozen and exclusive partitions respectively for objects and activities.
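Such checks can be automated by setting model counts against their requirements counterparts; here is a minimal sketch with hypothetical counts:

    # Counts taken from requirements and from a UML model (illustrative only).
    requirements = {"domains": 3, "primary_roles": 6, "primary_activities": 12}
    uml_model = {"root_packages": 3, "actors": 7, "root_use_cases": 12}

    checks = [
        ("root packages vs domains",
         uml_model["root_packages"], requirements["domains"]),
        ("actors vs primary roles",
         uml_model["actors"], requirements["primary_roles"]),
        ("root use cases vs primary activities",
         uml_model["root_use_cases"], requirements["primary_activities"]),
    ]
    for label, found, expected in checks:
        status = "ok" if found == expected else f"mismatch: {found} vs {expected}"
        print(f"{label}: {status}")

Mismatches are not necessarily errors, but each one should be traceable to an explicit design decision.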
