Focus: Requirements Reuse

Preamble

Requirements are what engineering processes feed on. As such they may come in a wide range of forms, and nothing should be assumed upfront about their form or semantics.

 

What is to be reused: Sketches or Models? (John Devlin)

Answering the question of reuse therefore depends on what is to be reused, and for what purpose.

Documentation vs Reuse

Until some analysis can be carried out, requirements are best seen as documents; whether such documents are to be ephemeral or managed would depend on method (agile or phased), contents (business, supporting systems, implementation, or quality of services), or purpose (e.g. governance, regulations, etc.).

What is to be reused

Setting apart external conditions, requirements documentation could be justified by:

  • Traceability of decision-making linking initial requests with actual implementation.
  • Acceptance.
  • Maintenance of deliverables during their life-cycle.

Depending on development approaches, documentation could be limited to archives (agile development models) or managed as intermediate products (phased development models). In the latter case reuse would entail some formatting of requirements.

The Cases for Requirements Reuse

Assuming that requirements have been properly formatted, e.g. as analysis models (with technical ones managed internally at system level), reuse could be justified by changes in business, functional, or quality of service requirements:

  • Business processes are meant to change with opportunities. With requirements available as analysis models, changes would be more easily managed (a) if they could be fine-grained. Business rules are a clear example, but that could also be the case for new features added to business objects.
  • Functional requirements may change even without change of business ones, e.g. if new channels and users are introduced addressing existing business functions. In that case reusable business requirements (b) would dispense with a repeat of business analysis.
  • Finally, quality of service could be affected by operational changes like localization, number of users, volumes, or frequency. Adjusting architecture capabilities would be much easier with functional (c) and business (d) requirements properly documented as analysis models.
Cases for Reuse

Along that perspective, requirements reuse appears to revolve around two pivots, documents and analysis models. Ontologies could be used to bind them.

Requirements & Ontologies

Reusing artifacts means using them in contexts or for purposes different from the native ones. That may come by design, when specifications anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on. In any case, reuse policies have to overcome a twofold difficulty:

  • Visibility: business and functional analysts must be made aware of potential reuse without having to spend too much time on research.
  • Overheads: ensuring transparency, traceability, and consistency checks on requirements (documents or analysis models) cannot be achieved without costs.

Ontologies could help to achieve greater visibility with acceptable overheads by framing requirements with regard to nature (documents or models) and context:

With regard to nature, the critical distinction is between document management and model based engineering systems. When framed as ontologies, the former is to be implemented as a thesaurus targeting terms and documents, the latter as ontologies targeting categories specific to organizations and business domains.

Documents, models, and capabilities should be managed separately

With regard to context, the objective should be to manage reusable requirements depending on the kind of jurisdiction and stability of categories (a minimal sketch follows the list), e.g.:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accord.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).
Combining contexts of reuse with architecture layers (enterprise, systems, platforms) and capabilities (Who, What, How, Where, When).
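
As an illustration, the framing of reusable requirements by nature and context could be sketched as below. This is a minimal outline, assuming a Python representation with made-up names and values; it is not part of any established ontology.

```python
from dataclasses import dataclass
from enum import Enum

class Nature(Enum):
    # document management vs model based engineering
    DOCUMENT = "thesaurus entry (terms, documents)"
    MODEL = "ontology entry (organization and business domain categories)"

class Context(Enum):
    # kind of jurisdiction and stability of categories
    INSTITUTIONAL = "regulatory authority, changes subject to established procedures"
    PROFESSIONAL = "agreed upon between parties, changes subject to accord"
    CORPORATE = "defined by enterprises, changes subject to internal decision-making"
    SOCIAL = "defined by usage, volatile, continuous and informal changes"
    PERSONAL = "customary, defined by named individuals"

@dataclass
class ReusableRequirement:
    name: str
    nature: Nature
    context: Context
    source: str  # originating document or analysis model

# hypothetical example: a regulatory rule kept as an analysis model
kyc_rule = ReusableRequirement(
    name="Customer identification (KYC)",
    nature=Nature.MODEL,
    context=Context.INSTITUTIONAL,
    source="compliance/customer-identification.model",
)
print(kyc_rule.context.value)
```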

Combined with artificial intelligence, ontology archetypes could crucially extend the benefits of requirements reuse, notably through the impact of deep learning for visibility.

On a broader perspective requirements should be seen as a source of knowledge, and their reuse managed accordingly.

Further Reading

How to Mind a Tree Story

Summary

Depending on whether one listens to devotees or dissenters, the Agile development model is all too often presented as a dead-end or the end of the story. Some of that unfortunate situation can be explained, and hopefully pacified, by comparing users’ stories to plants, with their roots, trunks, and branches. Assuming that agility calls for sound footings and good springboards, it may be argued that many problems arise from stories barking up the wrong tree (application level) or getting lost in the woods (architecture level).

How to Mind/Mend a True/Tree Story

Application level: Trees, Bushes and Hedges

As Aristotle first stated, good stories have to follow the three unities: one course of action, located in a single space, running along continuous time.

That rule is clearly satisfied  by stories that can be developed like plants growing from clearly identified roots.

Yet, stories like bushes may grow too many offshoots to be accounted for by a single action narrative; in that case it may be possible to single out a primary trunk and a set of forking branches along which different scenarii could be developed.

More serious difficulties may appear with thickets mixing offshoots from different bushes sharing the same space. That situation will first require some ground work in order to single out individual roots, and then use them to extricate each bush separately. When, like offshoots that actually mingle, story-lines cross and share actions, the description of such actions (aka features) is to be factored out and separated from the contexts of their enactment in the different story-lines.

Finally, like bushes in hedges, stories may chronicle repeated activities serving some collective purpose. That configuration is both easy to recognize and dealt with effectively by introducing a stereotyped story feature for collections and loops management.

Architecture level: Groves, Woods and Plantations

Contrary to hedges which are built on the similarity of their constituents, groves are based on their functional differences, and that can also be seen as a critical distinction between containers and architectures.

In agile parlance, that is best compared to the difference between stories and epics, the former telling what happens between users and applications, the latter taking a bird’s view of the relationships between business processes and systems.

In most of the cases the question will arise for sizable stories deemed too large for development purposes. When dealing with that situation the first step should be to look for thickets and bushes, respectively to be set apart as individual bushes or refined as scenarii. When still confronted with multiple roots, the question would be to decide between hedges and groves, that is between repeated activities and collaboration. And that decision would be critical because collaborations call for a different kind of story (aka epics or themes) set at a higher level, namely architecture.

Scaling Ups and Downs

Assuming the three-unities rule cannot be met, two alternative approaches are possible, depending on whether the story has to be broken down or upgraded to an epic, and the way the rule is undone can be used to make the decision:

  1. When the course of actions, once started, is to be contingent on subsequent business (aka external) events the story should be upgraded to an epic, as it will often refer to a part or whole of a business process.
  2. Otherwise: when activities are set along different periods of time (i.e contingent on time-events) the story can be broken down depending on size, functional architecture, or development constraints.
  3. Otherwise: when activities are distributed across locations it may be necessary to factor out architecture-dependent features dealing with shared address spaces and synchronization mechanisms.

Applying those guidelines to stories will put the whole development processes on rails and help to align requirements with their architectural footprint: business logic, system functionalities, or platform technologies.
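
A minimal sketch of those guidelines, assuming simplified story attributes (the names and the Story structure are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    external_events: bool   # contingent on subsequent business events once started
    time_spans: int         # distinct periods of time (time-events) involved
    locations: int          # distinct locations involved

def triage(story: Story) -> str:
    """Apply the guidelines: upgrade to epic, break down, or factor out features."""
    if story.external_events:
        return "upgrade to an epic (part or whole of a business process)"
    if story.time_spans > 1:
        return "break down (by size, functional architecture, or development constraints)"
    if story.locations > 1:
        return "factor out architecture-dependent features (address spaces, synchronization)"
    return "keep as a single story (the three unities are satisfied)"

print(triage(Story("order payment", external_events=True, time_spans=1, locations=1)))
```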

Further Readings

External Links

Modeling Languages: Differences Matter

“Since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on.”

Jonathan Swift, Gulliver’s Travels

Objective

Modeling languages are meant to support the description of contexts and concerns from specific standpoints. And because those different perspectives have to be mapped against shared references, languages must also provide constructs for ironing out variants and factoring out constants.

Ironing out differences in features and behaviors (Erwitt Elliott)

Yet, while most languages generally agree on basic definitions of objects and behaviors, many distinctions are ignored or subject to controversial understanding; such shortcomings might be critical when architecture capabilities are concerned:

  • Actual entities and their symbolic counterpart.
  • Actual entities and their roles.
  • Business logic and business operations.
  • External events and system time.
Architecture Capabilities

As those distinctions set the backbone of functional architectures, languages should be assessed according to their ability to express them unambiguously using the smallest possible set of constructs.

Business objects vs Symbolic Surrogates

As far as symbolic systems are concerned, the primary purpose of models is to distinguish between targets and representations. Hence the first assessment yardstick: modeling languages must support a clear and unambiguous distinction between objects and behaviors of concern on one hand, symbolic system surrogates on the other hand.

A primer on representation: objects of concerns and surrogates

Yet, if objects and behaviors are to be consistently and continuously managed, modeling languages must also provide common constructs mapping identities and structures of actual entities to their symbolic counterpart.
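
As a minimal sketch of that mapping, assuming a Python representation with hypothetical names, a symbolic surrogate could carry the identity and recorded state of the actual entity it stands for:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessIdentity:
    """Identity of an actual entity, set by the business context, not by the system."""
    domain: str
    key: str   # e.g. a registration or serial number

@dataclass
class Surrogate:
    """Symbolic counterpart managed by the system, bound to one business identity."""
    identity: BusinessIdentity
    state: dict   # recorded features of the actual entity

truck = BusinessIdentity(domain="fleet", key="WX-2047-BK")
truck_record = Surrogate(identity=truck, state={"mileage": 125_000, "location": "depot 3"})
```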

Agents vs Roles

Given that systems are built to support business processes, modeling languages should enable a clear distinction between physical entities able to interact with systems on one hand, roles as defined by enterprise organization on the other hand.

Agents and Roles

In that case the mapping of actual entities to system representations is not about identities but functionalities: a level of indirection has to be introduced between enterprise organization and system technology because the same roles can be played, simultaneously or successively, by people, devices, or other systems.
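
That level of indirection can be sketched as below (Python, hypothetical names): the role is an interface known to the process, while people, devices, or other systems are interchangeable players.

```python
from abc import ABC, abstractmethod

class Operator(ABC):
    """Role defined by the enterprise organization, independently of who plays it."""
    @abstractmethod
    def acknowledge(self, alarm_id: str) -> None: ...

class ControlRoomAgent(Operator):
    """A person interacting through a console."""
    def acknowledge(self, alarm_id: str) -> None:
        print(f"agent acknowledges {alarm_id}")

class SupervisorySystem(Operator):
    """Another system playing the same role."""
    def acknowledge(self, alarm_id: str) -> None:
        print(f"system auto-acknowledges {alarm_id}")

def process_alarm(operator: Operator, alarm_id: str) -> None:
    operator.acknowledge(alarm_id)   # the process only knows the role

process_alarm(ControlRoomAgent(), "ALM-17")
process_alarm(SupervisorySystem(), "ALM-18")
```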

Business logic vs Business operations

Just as actual objects are not to be confused with their symbolic descriptions, modeling languages must make a clear distinction between business logic and processes execution, the former defining how to process symbolic objects and flows, the latter dealing with the coupling between process execution and changes in actual contexts. That distinction is of particular importance if business and organizational decisions are to be made independently of supporting systems technology.

Pull vs Push Rule Management
Business logic and operational contingencies

Language constructs must also support the consolidation of functional and operational units, the former being defined by integrity constraints on symbolic flows and roles authorizations on objects and operations, the latter taking into account synchronization constraints between the state of actual contexts and the state of their system symbolic counterpart. And for that purpose languages must support the distinction between external and internal events.

External vs Internal Events

Paraphrasing Albert Einstein, time is what happens between events. As a corollary, external time is what happens between context events, and internal time is what happens between system ones. While both can coincide for single systems and locations, no such assumption should be made for distributed systems set in different locations. In that case modeling languages should support a clear distinction between external events signalling changes set in actual locations, and internal events signalling changes affecting system surrogates.

Time scales are set by events and concerns

Along that perspective synchronization is to be achieved through the consolidation of time scales. For single locations that can be done using system clocks, across distributed locations the consolidation will entail dedicated time frames and mechanisms set in reference to some initial external event.
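
A minimal sketch of that distinction, assuming Python structures with made-up names: external events carry the time scale of actual locations, internal events the system’s own clock, and consolidation keeps a reference from the latter to the former.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ExternalEvent:
    """Change set in an actual location, timestamped at the source."""
    location: str
    occurred_at: datetime   # external time scale

@dataclass(frozen=True)
class InternalEvent:
    """Change affecting a system surrogate, timestamped by the system."""
    surrogate_id: str
    source: ExternalEvent   # reference to the initial external event
    recorded_at: datetime   # internal time scale

def consolidate(event: ExternalEvent, surrogate_id: str) -> InternalEvent:
    # the two time scales are kept distinct but tied through the source reference
    return InternalEvent(surrogate_id=surrogate_id, source=event,
                         recorded_at=datetime.now(timezone.utc))

trip = ExternalEvent(location="site 12",
                     occurred_at=datetime(2012, 3, 5, 8, 30, tzinfo=timezone.utc))
print(consolidate(trip, surrogate_id="pump#7"))
```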

Conclusion: How to share differences across perspectives

Somewhat paradoxically, multiple modeling languages erase differences by paring down shared descriptions to some uniform lump that can be understood by all. Conversely, agreeing on a set of distinctions that should be supported by every language could provide an antidote to the Babel syndrome.

That approach can be especially effective for the alignment of enterprise and systems architectures as the four distinctions listed above are equally meaningful in both perspectives.

Further Reading

Requirements Capture

Objective

Requirements are not manna from heaven, they do not come to the world as models. So, what is the starting point, the primary input ?  According to John,  “In the beginning was the word …”, but Gabriel García Márquez counters that at the beginning “The world was so recent that many things lacked names, and in order to indicate them it was necessary to point. ”

 

Frog meditating on requirements capture (Sengai)

Requirements capture is the first step along project paths, when neither words nor things can be taken for granted: names may not be adequately fixed to denoted objects or phenomena, and those ones being pointed at may still be anonymous, waiting to be named.

Confronted with lumps of words, assertions and rules, requirements capture may proceed with one of two basic options: organize requirements around already known structuring objects or processes, or listen to user stories and organize requirements alongside. In both cases the objective is to spin words into identified threads (objects, processes, or stories) and weave them into a fabric with clear and non ambiguous motifs.

From Stories to Models

Requirements capture epitomizes the transition from spoken to written languages as its objective is to write down user expectations using modeling languages. As with languages in general, such transitions can be achieved through either alphabetic or logographic writing systems, the former mapping sounds (phonemes) to signs (glyphs), the latter setting out from words and mapping them to symbols associated with archetypal meanings; and that is precisely what models are supposed to do.

Documented communication makes room for mediation

As demonstrated by Kanji, logographic writing systems can support different spoken languages providing they share some cultural background. That is more or less what is at stake with requirements capture: tapping requirements from various specific domains and transforming them into functional requirements describing how systems are expected to support business processes. System functionalities being a well circumscribed and homogeneous background, a modeling framework supporting requirements capture shouldn’t be out of reach.

Getting the right stories

If requirements are meant to express actual business concerns grounded in the here and now of operations, trying to apprehend them directly as “conceptual” models would negate the rationale supporting requirements capture. User stories and use cases help to prevent such missteps by rooting requirements in concrete business backgrounds of shared references and meanings.

Requirements capture should never take flight into otherworldly expectations

Yet, since the aim of requirements is to define how system functionalities will support business processes, it would help to get the stories and cases right upfront, in other words to organize them according to patterns of functionalities. Taking a cue from the Gang of Four, three basic categories should be considered:

  • Creational cases or stories deal with the structure and semantics of business objects whose integrity and consistency has to be persistently maintained independently of activities using them. They will govern objects life-cycle (create and delete operations) and identification mechanisms (fetch operations).
  • Structural cases or stories deal with the structure and semantics of transient objects whose integrity and consistency has to be maintained while in use by activities. They will govern features (read and update operations) and target aspects and activities rooted (aka identified) through primary objects or processes.
  • Behavioral cases or stories deal with the ways objects are processed.
Products and Usage are two different things

Not by chance, those categories are consistent with the Object/Aspects perspective that distinguishes between identities and objects life-cycle on one hand, features and facets on the other hand. They are also congruent with the persistent (non-transactional) / transient (transactional) distinction, and may also be mapped to CRUD matrices.

Since cases and stories will often combine two or three basic categories, they should be structured accordingly and reorganized as to coincide with the responsibilities on domains and projects defined by stakeholders.
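
As a minimal sketch (Python, illustrative vocabulary only), the combination of basic categories in a story or use case could be derived from the operations it mentions, along the lines of a CRUD mapping:

```python
CATEGORY_BY_OPERATION = {
    "create": "creational", "delete": "creational", "fetch": "creational",
    "read": "structural", "update": "structural",
    "process": "behavioral", "compute": "behavioral",
}

def categorize(operations: list[str]) -> set[str]:
    """Return the basic categories combined by a story or use case."""
    return {CATEGORY_BY_OPERATION[op] for op in operations if op in CATEGORY_BY_OPERATION}

# e.g. "register a customer and update its address"
print(categorize(["create", "update"]))   # {'creational', 'structural'}
```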

User Stories vs Use Cases

Other than requirements templates, user stories and use cases are two of the preferred methods for capturing requirements. Both put the focus on user experience and informal descriptions, with use cases focusing at once on interactions between agents and systems, and user stories introducing them along the course of refinements. That makes them complementary:

  • Use cases should be the method of choice when new functionalities are to be added to existing systems.
  • User stories would be more suited to standalone applications but may also be helpful to single out use cases success scenarii.

Depending on circumstances it may be easier to begin requirements capture with a success story (green lines) and its variants or with use cases (red characters) with some activities already defined.

User Stories vs Use Cases

Combining user stories and use cases for requirement capture may also put the focus on system footprint, setting apart the activities to be supported by the system under consideration. On a broader perspective, that may help to position requirements along architecture layers: user stories arise from business processes  set within enterprise architecture, use cases are supported by functional architecture.

Spinning the Stories

Given that the aim of requirements is to define how systems will support processes execution and objects persistency, a sound policy should be to characterize those anchors meant to be targeted by requirements nouns and verbs. That may be achieved with basic parsing procedures:

  • Nouns and verbs are set apart and associated with candidate archetypes: physical or symbolic object, physical or symbolic activity, corresponding container, event, or role.
  • Among them business concerns should point to managed individuals, i.e those anchors whose instances must be consistently identified by business processes.
  • Finally business rules will be used to define features whose values are to be managed at instances level.
Spinning words into archetypes

Parsing nondescript requirements for anchors will set apart a distinctive backbone of clear and straight threads on one hand, a remainder of rough and tousled features and rules on the other hand.
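
A naive sketch of that parsing step (Python, with hand-supplied word lists standing in for whatever lexical analysis is actually used): nouns and verbs are set apart as candidate anchors, and the remainder is left as features and rules for later analysis.

```python
import re

def spin(requirement: str, nouns: set[str], verbs: set[str]) -> dict:
    """Set nouns and verbs apart as candidate anchors, keep the rest as remainder."""
    words = re.findall(r"[a-z]+", requirement.lower())
    return {
        "candidate objects, roles, events": [w for w in words if w in nouns],
        "candidate activities, processes": [w for w in words if w in verbs],
        "remainder (features, rules)": [w for w in words if w not in nouns | verbs],
    }

text = "A customer books a vehicle and pays a deposit before pickup"
print(spin(text,
           nouns={"customer", "vehicle", "deposit", "pickup"},
           verbs={"books", "pays"}))
```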

Fleshing the Stories out

Archetypes are like clichés, they may support a story but cannot make it. So it goes with requirements whose meaning is to be found into the intricacy of features and business rules.

However tangled and poorly formulated, rules provide the substance of requirements as they express the primary constraints, needs and purposes. That jumble can usually be reshaped in different ways depending on perspective (business or functional requirements),  timing constraints (synchronous or asynchronous) or architectural contexts; as a corollary, the way rules are expressed will have a significant impact on the functional architecture of the system under consideration.

If transparency and traceability of functional arbitrages are to be supported, the configuration of rules has to be rationalized from requirements inception. Just like figures of speech help oral storytelling, rules archetypes may help to sort out syntax from semantics, the former tied to rules themselves, the latter derived from their targets. For instance, constraints on occurrences (#), collections (*) or partitions (2) should be expressed uniformly whatever their target: objects, activities, roles, or events.

From rules syntax to requirements semantics

As a consequence, and to all intents and purposes, rules analysis should not only govern requirements capture, it should also shadow iterations of requirements analysis, each cycle circumscribed by the consolidation of anchors (a sketch of a rule descriptor follows the list):

  • Single responsibility for rule implementation: project, architecture or services, users.
  • Category: whether a rule is about life-cycle, structure, or behavior.
  • Scope: whether enforcement is transient or persistent.
  • Coupling: rules triggered by, or bringing change to, contexts must be set apart.
  • Control: whether enforcement has to be monitored in real-time.
  • Power-types and extension points: all variants should be explicitly associated with a classification or a branching rule.
  • Subsidiarity: rules ought to be handled at the lowest level possible: system, domain, collection, component, feature.
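
A sketch of such a rule descriptor, assuming a Python structure with hypothetical fields mirroring the criteria above:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    LIFE_CYCLE = "life-cycle"
    STRUCTURE = "structure"
    BEHAVIOR = "behavior"

@dataclass
class RuleDescriptor:
    """Positions a business rule from requirements inception."""
    responsibility: str      # project, architecture or services, users
    category: Category
    persistent_scope: bool   # transient vs persistent enforcement
    context_coupled: bool    # triggered by, or bringing change to, contexts
    real_time_control: bool  # enforcement monitored in real time
    subsidiarity: str        # system, domain, collection, component, feature

no_overdraft = RuleDescriptor(
    responsibility="architecture",
    category=Category.BEHAVIOR,
    persistent_scope=True,
    context_coupled=False,
    real_time_control=True,
    subsidiarity="domain",
)
```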

Pricing the Stories

One of the primary objectives of requirements is to put a price on the system under consideration and to assess its return on investment (ROI). If that is straightforward for hardware and off-the-shelf components, things are not so easy for software developments whose metrics are often either pragmatic but specific, or  inclusive but unreliable.

Putting aside approaches based on programs size, both irrelevant for requirements assessment and discredited as development metrics, requirements can be assessed using story or function points:

  • Story points conduct pragmatic assessments of self-contained stories. They are mostly used locally by project teams to estimate their tasks and effort.
  • Functional metrics are more inclusive as based on principled assessment of archetypal system functionalities. Yet they are mostly confined to large organizations and their effectiveness and reliability highly depends on expertise.

Whereas both approaches start with user expectations regarding system support, their rationale is different: function points (FPs) apply to use cases and take into account the functionalities supported by the system; story points (SPs) apply to user stories and their scope is by definition circumscribed. That difference may be critical when categories are considered: points for behavioral, structural and creational stories should be weighted differently.

Yet, when requirements capture is supported both by stories and use cases, story and functions points can be combined to obtain functional size measurements:

  • Story points are used to assess business contents (aka application domain) based on master data (aka persistent) entities, activities, and their respective power-types.
  • Use case points target the part played by the system, based on roles and coupling constraints defined by active objects, events, and controlling processes.
Function Points as Use Case Points weighted by Story Points

Non adjusted function points can then be computed by weighting use case function points with the application domain function points corresponding to use case footprint.
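
One possible reading of that weighting, sketched with made-up figures and a hypothetical calibration factor (the actual computation would depend on the function point method in use):

```python
def unadjusted_fp(use_case_points: float, footprint_domain_points: float,
                  weight: float = 0.1) -> float:
    """Use case points adjusted by the application domain points covered
    by the use case footprint; the weight is an illustrative calibration factor."""
    return use_case_points * (1 + weight * footprint_domain_points)

print(unadjusted_fp(use_case_points=12, footprint_domain_points=8))   # 12 * 1.8 = 21.6
```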

Further Reading

Objects with Attitudes

Identities and Aspects

Despite its object and unified vocations, the OMG’s UML (Unified Modeling Language) has been sitting uneasily between scopes (e.g. requirements, analysis, and design), as well as between concepts (e.g. objects, aspects, and domains).

Where to look  for AAA issues (Maurizio Cattelan)

Those misgivings probably go a long way to explain the limited, fragmented, and shallow footprint of UML despite its clear merits. Hence the benefits to be expected from a comprehensive and consistent approach of object-oriented modeling based upon two classic distinctions:

  • Business vs System: assuming that systems are designed to manage symbolic representations of business objects and processes, models should keep the distinction between business and system objects descriptions.
  • Identity vs behavior: while business objects and their system counterpart must be identified uniformly, that’s not the case for aspects of symbolic representations which can be specified independently.

That two-pronged approach bridges the gap between analysis and design models, bringing about a unified perspective for concepts (objects and aspects) as well as scope (business objects and system counterparts).

Object Oriented Modeling

Object Oriented and Relational approaches are arguably the two main advances of software engineering of the last 50 years. Yet, while the latter is supported by a fully defined theoretical model, the former still mostly stands on the programming languages supporting it. That is somewhat disappointing considering the aims of the Object Management Group (OMG).

UML was born out of the merge of three modeling methods: the Booch method, the Object-modeling technique (OMT) and Object-oriented software engineering (OOSE), all strongly marked by object orientation. Yet, from inception, the semantics of objects were not clearly defined, when not explicitly confused under the label “Object Oriented Analysis and Design” (OOA/D). In other words, the mapping of business contexts to system objects, a critical modeling step if there is any, has been swept under the carpet.

Literal Bird

That’s a lose/lose situation. Downstream, OO approaches, while widely accepted at design level, remain fragmented due to the absence of a consensus regarding object semantics outside programming languages. Upstream, requirements are left estranged from engineering processes, either forcing analysts into a leap of faith over an uncharted no man’s land, or letting business objects be chewed up by programming constructs.

Domains and Images

In mathematics, an image is the outcome of a function mapping its source domain to its target co-domain. Applied to object-oriented modeling, the problem is to translate business objects to their counterpart as system components. For that purpose one needs to:

  1. Define domains as sets of business objects and activities whose semantics and life-cycle are under the authority of a single organizational unit.
  2. Identify the objects and phenomena whose representation has to be managed, as well as the lifespan of those representations.
  3. Define the features (attributes or operations) to be associated with system objects.
  4. Define the software artifacts to be used to manage the representations and implement the features.
From Business Domain to System Image

While some of those objectives can be set on familiar grounds, all four must be reset into a new perspective.

Business Objects are rooted in Concerns

Physical or symbolic, objects and activities are set by concerns. Some may be local to enterprises, some defined by common business activities, and some set along a broader social perspective. The first step is therefore to identify the organizational units responsible for domains, objects identities and semantics:

  • Domains in charge of identities will govern objects life-cycle (create and delete operations) and identification mechanisms (fetch operations). That would target objects, agents, events and processes identified independently of systems.
  • Domains in charge of semantics will define objects features (read and update operations). That would target aspects and activities rooted (aka identified) through primary objects or processes.
Context anchors and associated roles and activities

It must be noted that whereas the former are defined as concrete sets of identified instances governed by unique domains, the latter may be defined independently of the objects supporting them, and therefore may be governed by overlapping concerns set by different domains.
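
A minimal sketch of that separation of concerns (Python, hypothetical names): one domain governs the anchor’s identity and life-cycle, while features may be contributed by overlapping domains.

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """Business object or activity identified independently of systems."""
    name: str
    identity_domain: str   # a single domain governs identity and life-cycle
    feature_domains: dict[str, list[str]] = field(default_factory=dict)

invoice = Anchor(name="Invoice", identity_domain="billing")
invoice.feature_domains["accounting"] = ["VAT code", "ledger entry"]
invoice.feature_domains["sales"] = ["discount terms"]
# identity is owned by one domain; features may come from overlapping concerns
```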

Objects and Architectures

Not by chance, the distinction between identities and features has an architectural equivalent. Just like buildings, systems are made of supporting structures and subordinate constructs, the former intrinsic and permanent, the latter contingent and temporary. Common sense should therefore dictate a clear distinction between modeling levels, and put the focus on architectures:

  • Enterprise architecture deals with objectives, assets and organization associated with the continuity of corporate identity and business capabilities within a given regulatory and market environment. That is where domains, objects and activities are identified and defined.
  • Functional architecture deals with the continuity of systems functionalities as they support the continuity of business memory and operations. At this level the focus is not on business objects or activities but on functions supported by the system: communication, control, persistency, and processing.
  • Technical architecture deals with the feasibility, efficiency and economics of systems operations. That is where the software artifacts supporting the functions are designed.
Objects and Architectures

Objects provide the hinges binding architectural layers, and models should therefore ensure direct and transparent mapping between business objects, functional entities, and system components. That’s not the case for features whose specification and implementation can and should be managed separately.

Fleshing out Objects with Semantic Aspects

Confusing business contexts with their system counterparts leads to mistaken equivalence between features respectively supported by business objects and system artifacts:

  • The state of physical objects may be captured or modified through specific interfaces and persistently recorded by symbolic representations, possibly with associated operations.
  • Non physical (notional) business objects are identified and persistently recorded as such. Their state may also appear as transient objects associated with execution states and processing rules.
  • Events have no life-cycle and therefore don’t have to be identified on their own. Their value is obtained through interfaces; associated messages can be used by control or processing functions; values can be recorded persistently. Since the value of past events is not meant to be modified operations are irrelevant except for interfaces.
  • Actual processes are identified by execution context and timing. Their state may be queried through interfaces and recorded, but persistent records cannot be directly modified.
  • Symbolic processes are identified by footprint independently of actual execution. Their execution may be called through interfaces and the results may be recorded, but persistent records cannot be directly modified.
  • External roles are identified by the interfaces supporting the interactions. Their activity may be recorded, but persistent records cannot be directly modified.
Fleshing out Aspects

By introducing complementary levels of indirection between business and system objects on one hand, identities and features on the other hand, the proposed approach significantly furthers object-oriented modeling from requirements analysis to system design. Moreover, it provides a robust and effective basis for the federation of business domains, by modeling identities and semantic features separately while bridging across conceptual, logical and physical information models.

Untangling Business Rules

However tangled and poorly formulated, rules provide the substance of requirements as they express the primary constraints, needs and purposes. That jumble can usually be reshaped differently depending on perspective (business or functional requirements),  timing constraints (synchronous or asynchronous) or architectural contexts; as a corollary, the way rules are expressed will have a significant impact on the functional architecture of the system under consideration. Hence, if transparency and traceability of functional and technical arbitrages are to be supported, the configuration of rules has to be rationalized from requirements inception. And that can be achieved if rules can be organized depending on their footprint: domains,  instances, or attitudes.

From Objects to Artifacts

Requirements analysis is about functional architecture and business semantics, design is about software artifacts used to build system components. The former starts with concrete descriptions and winds up with abstract ones, the latter takes over the abstractions and devises their concrete implementation.

Uphill to functionalities, downhill to implementations

Somewhat counter-intuitively, information processing is very concrete as it is governed by actual concerns set from biased standpoints. Hence, trying to abstract requirements of supporting systems up to some conceptual level is a one way ticket to misunderstandings because information flows are rooted in the “Here and Now” of business concerns. Abstract (aka conceptual) descriptions are the outcome of requirements analysis, introduced when system symbolic representations are consolidated across business domains and processes.

Starting with a concrete description of identified objects and processes, partitions are used to analyze the variants and select those bound to identities. Inheritance hierarchies can be organized accordingly, for objects or aspects.

Inheritance of identities vs inheritance of aspects.

While based on well understood concepts, the distinction between identity and aspect inheritance provides a principled object-oriented bridge between requirements and models free of any assumption regarding programming language semantics for abstract classes or inheritance.

Objects, attitudes, and Programming Languages

Because object-oriented approaches often stem from programming languages, their use for analysis and design is hampered by some lack of consensus and a few irrelevant concepts. That is best illustrated by two constructs, abstract classes and interfaces.

  • Most programming languages define abstract classes by partial descriptions, with the impossibility of being instantiated as a consequence. When applied to business objects the argument is turned around, with the consequence (no instances) taken as the definition.
  • Interfaces are also a common feature of object-oriented languages, but not exclusively so, as they may be used to describe the behavior of any software component.

Those distinctions can be settled when set within a broader understanding of objects and aspects, the former associated with identified instances with bound structures, eventually implemented as concrete classes, the latter with functionalities, eventually implemented as abstract classes or interfaces.
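
That understanding can be sketched as below (Python, hypothetical example): the identified object ends up as a concrete class, while its aspects are specified as abstract classes or interfaces that other objects may also support.

```python
from abc import ABC, abstractmethod

class Payer(ABC):
    """Aspect: a functionality specified independently of identities."""
    @abstractmethod
    def settle(self, amount: float) -> None: ...

class Customer(Payer):
    """Object: identified instance with a bound structure (concrete class)."""
    def __init__(self, customer_id: str):
        self.customer_id = customer_id
        self.balance = 0.0

    def settle(self, amount: float) -> None:
        self.balance -= amount

alice = Customer("C-001")   # instances exist only for identified objects
alice.settle(59.90)         # aspects are exercised through the interface
```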

From Analysis to Design

A pivotal benefit of distinguishing between objects identity and aspects is to open a bridge between analysis and design by unifying respective patterns along object-oriented perspective. Taking a cue from the Gang of Four, system functionalities could be organized along three basic pattern categories:

  • Creational functionalities deal with the life-cycle (create and delete operations) and identification mechanisms (fetch operations)  of business objects whose integrity and consistency has to be persistently maintained independently of activities using them.
  • Structural functionalities deal with the structure and semantics of transient objects whose integrity and consistency has to be maintained while in use by activities. They will govern features (read and update operations) and target aspects and activities rooted (aka identified) through primary objects or processes.
  • Behavioral functionalities deal with the ways objects are processed.

Mapping analysis patterns to design ones will greatly enhance models traceability; moreover, taking advantage of the relative maturity of design patterns, it may also boost quality across model layers as well as the whole effectiveness of model driven engineering solutions.

Object Oriented Modeling and Model Driven Engineering

The double distinction between contexts and systems on one hand, objects and aspects on the other hand, should help to clarify the contents of modeling layers as defined by OMG’s model driven architecture (MDA):

  • Computation independent models (CIMs) are structured around business objects and processes identified on their own, associated with organizational details for roles and activities.
  • Platform independent models (PIMs) are organized on two levels, one for functional architectures (boundaries, processes, persistency, services, communication), the other one for associated aspects.
  • Platform specific models (PSMs) are similarly designed on two levels, one mapping functional architectures, the other one implementing aspects.
MDA with UML#

Using  UML#, object-oriented concepts can therefore be applied uniformly from requirements to design without forcing programming semantics into models.

Further Readings

2012: Ahead with the New Year

New Grounds or New Holes ?

New years bring new perspectives, but looking ahead is useless without a sound footing. These plain figures may shed some light on the matter.

Looking ahead with hindsight (Maurizio Cattelan)

What: Requirements and Models

Projects should start with some agreement about expectations and commitments. Maturity in that regard can be estimated with:

  • Number of projects started on agreed (actual meeting between stakeholders and providers) requirements, relative to all started developments.
  • Number of agreed requirements as sanctioned by models, relative to all requirements.
  • Number of agreed requirements that included quality plans, relative to all agreed requirements.
  • Number of root artifacts linked to requirements items relative to all root artifacts.
  • Number of requirements items linked to root artifacts relative to all requirements items.

The critical point here is the traceability between rough requirements as initially expressed, and structured and non ambiguous ones agreed upon after analysis.

Who: Stakeholders, Users, Developers

If their maturity is to be assessed and improved, engineering projects should clearly distinguish between roles, even when they are played by the same persons or in tight collaboration. Here are some clues to find out what happens:

  • Planned meetings with differentiated positions relative to all planned meetings.
  • Decision making meetings relative to all planned meetings.
  • Non functional agreed requirements relative to all agreed requirements.
  • Changes in agreed requirements linked to decision makers relative to all changes in agreed requirements.

The focus here should be on the definition of domains and use cases on one hand, traceability on the other hand.

When: Planning

As with almost every human endeavour, projects’ success is governed by time and resources, in this case the delivery of system functionalities on time and on budget. In that regard, process maturity assessment should start with:

  • Number of projects not deployed relative to projects started on agreed requirements.
  • Time spent in decision-making meetings relative to total project time.
  • Actual resources relative to estimations after agreed requirements.
  • Elapsed time between applications ready to be deployed and actually operational relative to projects duration.

The critical factors here are the traceability of model contents and the mapping of development flows into work units.

How: Tools

Engineering processes are meant to be supported by tools but that’s not necessarily for the best. A rough diagnostic can be based upon:

  • Number of tools installed relative to the number of functions supported by those tools.
  • Number of tools installed during the last year relative to the number of  tools installed.
  • Number of exchanges operated between tools relative to the number of  tools installed.

Further assessment should be set within the MDA/MDE perspective according to model transformation policies.

Models, Architectures, Perspectives (MAPs)

What You See Is Not What You Get

Models are representations and as such they are necessarily set in perspective and marked out by concerns.

Model, Perspective, Concern (R. Doisneau).
  • Depending on perspective, models will encompass whole contexts (symbolic, mechanic, and human components), information systems (functional components), software (components implementation).
  • Depending on concerns models will take into account responsibilities (enterprise architecture), functionalities (functional architecture), and operations (technical architecture).

While congruence may be a sensible aim, perspectives and concerns are not necessarily aligned, as responsibilities or functionalities may cross perspectives (e.g. support units), and perspectives may mix concerns (e.g. legacies and migrations). That conundrum may be resolved by a clear distinction between descriptive and prescriptive models, the former dealing with the problem at hand, the latter with the corresponding solutions, respectively for business, system functionalities, and system implementation.

Models as Knowledge

Assuming that systems are built to manage symbolic representations of business domains and operations, models are best understood as knowledge, as defined by the pivotal article of Davis, Shrobe, and Szolovits:

  1. Surrogate: models provide the description of symbolic objects standing as counterparts of managed business objects and activities.
  2. Ontological commitments: models include statements about the categories of things that may exist in the domain under consideration.
  3. Fragmentary theory of intelligent reasoning: models include statements of what the things can do or can be done with.
  4. Medium for efficient computation: making models understandable by computers is a necessary step for any learning curve.
  5. Medium for human expression: models are meant to improve the communication between specific domain experts on one hand, generic knowledge managers on the other hand.
Surrogates without Ontological Commitment

What You Think Is What You Get

Whereas conventional engineering has to deal with physical artifacts, software engineering has only symbolic ones to consider. As a consequence, design models can be processed into products without any physical impediments: “What You Think Is What You Get.”

Products and Usage are two different things

Yet even well designed products are not necessarily used as expected, especially if organizational and business contexts have changed since requirements capture.

Models and Architectures

Models are partial or complete descriptions of existing or intended systems. Given that systems will eventually be implemented by software components, models and programs may overlap or even be congruent in the case of systems made exclusively of software components. Moreover, legacy systems are likely to coexist with models and software components. Such cohabitation calls for some common roof, supported by shared architectures:

  • Enterprise architecture deals with the continuity of business concerns.
  • System architecture deals with the continuity of systems functionalities.
  • Technical architecture  deals with the continuity of systems implementations.

That distinction can also be applied to engineering problems and solutions: business (enterprise), organization (supporting systems), and development (implementations).

Problems and solutions must be set along architecture layers

On that basis the aim of analysis is to define the relationship between business processes and supporting systems, and the aim of design is to do the same between system functionalities and components implementation.

Whatever the terminology (layers or levels), what is at stake is the alignment of two basic scales:

  • Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
  • Models: conceptual (business context and organization), analysis (symbolic representations), design (physical implementation).
Dial M for Models

If systems could be developed along a “fire and forget” procedure models would be used only once. Since that’s not usually the case bridges between business contexts and supporting systems cannot be burned; models must be built and maintained both for business and system architectures, and the semantics of modeling languages defined accordingly.

Languages, Concerns, Perspectives

Apart from trivial or standalone applications, engineering processes will involve several parties whose collaboration over time will call for sound languages. Programming languages are meant to be executed by symbolic devices, business languages (e.g. B.P.M.) are meant to describe business processes, and modeling languages (e.g. UML) stand somewhere in-between.

As far as system engineering is concerned, modeling languages have two main purposes: (1) describe what is expected from the system under consideration, and (2) specify how it should be built. Clearly, the former belongs to the business perspective and must be expressed with its specific words, while the latter can use some “unified” language common to system designers.

The Unified Modeling Language (UML) is the outcome of the collaboration between James Rumbaugh with his Object-modeling technique (OMT), Grady Booch, with his eponymous method, and Ivar Jacobson, creator of the object-oriented software engineering (OOSE) method.

Whereas UML has been accepted as the primary standard since 1995, its scope remains limited and its use shallow. Moreover, when UML is effectively used, it is often for the implementation of Domain Specific Languages based upon its stereotype and profile extensions. Given the broadly recognized merits of core UML constructs, and the lack of alternative solutions, such a scant diffusion cannot be fully or even readily explained by subordinate factors. A more likely pivotal factor may be the way UML is used, in particular the confusion between perspectives and concerns.

Perspectives and Concerns: business, functionalities, implementation

Languages are useless without pragmatics which, for modeling languages, means some methodology defining what is to be modeled, how, by whom, and when. Like pragmatics, methods are diverse, each bringing its own priorities and background, be it modeling concepts (e.g. OOA/D), procedures (e.g. RUP), or agile collaboration principles (e.g. Scrum). As it happens, none deals explicitly with the pivotal challenges of the modeling process, namely: perspective (what is modeled), and concern (whose purpose).

In order to meet those challenges the objective of the Caminao framework is to provide compass and signposts for road-maps using stereotyped UML constructs.

Models, Architectures, Perspectives (MAPs)

From a general perspective, and beyond lexical controversies, models and architectures should be defined along two parallel scales:

  • Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
  • Models: conceptual (business context and organization), analysis (symbolic representations), design (physical implementation).

Caminao maps add perspectives:

  • Models set the stages, where targeted artifacts are defined depending on concerns.
  • Topography puts objects into perspective as set by stakeholders’ situation: business objectives, system functionalities, system implementation.
  • Concerns and perspectives must be put into context as defined by enterprise, functional or technical architectures.

The aim of those maps is to support project planning and process assessment:

  • Perspective and concerns: what is at stake, who’s in charge.
  • Milestones: expectations and commitments are set across different organizational units.
  • Planning: development flows are defined between milestones and work units set accordingly.
  • Tasks traceability to outcomes and objective functional metrics provide for sound project assessment.
  • Processes can be designed, assessed and improved by matching  development patterns with development strategies.

Matching Concerns and Perspectives

As famously explained by Douglas Hofstadter’s Eternal Golden Braid, models cannot be proven true, only shown to be consistent or disproved.

Depending on language, internal consistency can be checked through reviews (natural language) or using automated tools (formal languages).

Refutation for its part entails checks on external consistency, in other words matching models and concerns across perspectives. For that purpose modeling stations must target well defined sets of identified objects or phenomena and use clear and unambiguous semantics. A simplified (yet versatile) modeling cycle could therefore be exemplified as follows (a minimal sketch of the loop follows the list):

  1. Identify a milestone  relative to perspective, concern, and architecture.
  2. Select anchors (objects or activities).
  3. Add connectors and features.
  4. Check model for internal consistency.
  5. Check model for external consistency, e.g refutation by counter examples.
  6. Iterate from 2.
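
A minimal sketch of that loop (Python, with toy consistency checks standing in for actual reviews or tools):

```python
def check_internal(model: dict) -> bool:
    """Internal consistency: every connector must link two known anchors."""
    return all(a in model["anchors"] and b in model["anchors"]
               for a, b in model["connectors"])

def check_external(model: dict, counter_examples: list) -> bool:
    """External consistency by refutation: no agreed counter-example may hold."""
    return not any(refute(model) for refute in counter_examples)

def modeling_cycle(anchors, connectors, counter_examples):
    """Select anchors, add connectors, check consistency, iterate from step 2."""
    model = {"anchors": list(anchors), "connectors": []}
    for connector in connectors:
        model["connectors"].append(connector)        # step 3: add connectors and features
        if not (check_internal(model) and
                check_external(model, counter_examples)):
            model["connectors"].pop()                # rejected: iterate from step 2
    return model

result = modeling_cycle(
    anchors=["Customer", "Order"],
    connectors=[("Customer", "Order"), ("Order", "Invoice")],
    counter_examples=[lambda m: len(m["connectors"]) > 10],
)
print(result)   # the second connector is rejected: "Invoice" is not an identified anchor
```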

Further Reading

External Links

Requirements Rounds up

Principles

Whereas it is based upon well known concepts and accepted standards, the Caminao approach entails a new modeling perspective which calls for a change of habits, mostly at requirements level. The objective here is to experiment with a Proof of Concept by contriving requirements on-line along the Caminao path.

Collecting Requirements (Jeff Wall)

For that purpose the proposed experiment makes use of four principles:

  • Crowd-sourcing: except for Caminao stereotypes, understanding does not come from special expertise or best practices but is built on collective wisdom.
  • Iterations: stakeholders and analysts are circling topics until they agree on clear and unambiguous pictures.
  • Illustrations: requirements begin as expectations; as such they are best captured through pictures before being analyzed through models.
  • Assertions: requirements are meant to translate into commitments, hence, associated models should be settled by explicit constraints and expressions.
Requirements loops: from expectations to commitments.

On that basis stakeholders will introduce their requirements as illustrations, analysts will try to translate them into models which, after being accepted by stakeholders,  will subsequently be decorated by assertions.

Modus Operandi

  • Requirements rings are managed through the  G+ Caminao Rings page.
  • In order to submit a project, candidate stakeholders must belong to the circle of fellows.
  • Fellow stakeholders may submit projects by creating their own G+ pages and circles and identifying them with the Caminao G+. A new page is created for each project, to be matched with the fellow G+ circle.
  • Fellow analysts propose models capturing all or parts of illustrated requirements.
  • Stakeholders may accept, reject, or hold back their decision. Refusals can be commented but reservations can only be qualified with additional illustrations.
  • Once approved, models may subsequently be fleshed out with expressions, constraints and rules.

Models

Fellow analysts can propose two types of models:

  • Horizontal models describe individual artifacts and their connections.
  • Vertical models are anchored to single artifacts and focus on their partitions and inheritance relationships.

While it’s recommended to walk along basic UML conventions, models may include any kind of artifacts providing they are qualified by Caminao stereotypes for actual or symbolic objects and activities, roles, or events.

Caminao stereotypes for nodes

Stereotypes for containers use the same principle for organizational units (110), physical locations (121), physical executions (141), business domains (120), and business activities (140).

The semantics of connectors (association, flow, transition, or channel) can remain implicit and defined by context. They may be stereotyped using standard set operators.

Set-based stereotypes for Connectors

By convention, objects, events and roles are labelled with singular nouns, activities use infinitive verbs, and processes use present progressive ones. Containers are named with plurals.

Who’s in the Loops

Four types of players may appear in requirements loops:

  • Stakeholders (one per project) set the context and objectives with pictures, photos or drawings. Textual descriptions are not allowed. Stakeholders accept or reject artifacts.
  • Users and business analysts add to the stakeholder requirements using the same media (no texts); they also may qualify model artifacts with formal expressions, constraints or rules.
  • System analysts suggest artifacts.
  • Architects (one per project) accept or reject qualifications on artifacts.
Who’s in the Loop

Mind Your Words

Language and meanings may be baffling bedfellows, as what is said is not necessarily what is meant. As a boost to requirements transparency, a simple gizmo may be used  by players to speak their mind, for their interlocutor (talking bubble) or only for the audience (thinking bubble).

Say What You Mean, Mean What You Say

Price Your Words

Assuming clear understanding and good faith, customers and providers must agree on a price, and for that purpose they must align their respective expectations and commitments.

Expecting to take advantage of business opportunities at a given time, customers define system requirements along a black box perspective; in return, providers analyze those requirements along a white box perspective and make an estimate of cost and duration. Their respective expectations are consolidated and commitments made, customers regarding payment, providers regarding delivery.

As far as customers are concerned, success is measured by the return on investment (ROI), which depends on cost, quality, and timely delivery. Providers for their part will design the solution, develop the components, and integrate them into targeted environments. Narrowly defined, their success will be measured by costs. Those concerns may be played along a non-zero sum game:

  • Customers assess the benefits (a) to be expected from the functionalities under consideration (b).
  • Providers consider the solutions (a) and estimate their costs (b).
  • Customers and providers agree on functionalities, costs and schedules (c).
Matching respective expectations and commitments of customers and providers.

Hence, while stakes are clearly conflicting on costs, there is room for collaboration on quality and timing, and that will bring benefits to both customers and providers.

Square the Rings

Even for standalone applications, it’s safe to assume that requirements will have to take into account external factors and constraints. Since those requirements will usually be managed by different organizational units, they must be sorted out upfront:

  • Non functional constraints deal with performances and resources.
  • Cross functional requirements deal with system functionalities shared by different business processes.
  • Application specific requirements deal with system functionalities supporting a single business process.
Squaring requirements rings

Those rings are used to organize projects according to the requirements' architectural footprint and associated responsibilities.
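As a sketch of how that sorting could be mechanized (the rule below is an assumption, not a prescribed algorithm), a requirement's ring may be derived from whether it is functional and from how many business processes it supports:

    # Illustrative classification into the three rings above.
    def requirement_ring(is_functional: bool, supported_processes: int) -> str:
        if not is_functional:
            return "non functional (performance and resources)"
        if supported_processes > 1:
            return "cross functional (functionalities shared by several processes)"
        return "application specific (single business process)"

    print(requirement_ring(False, 0))  # non functional
    print(requirement_ring(True, 3))   # cross functional
    print(requirement_ring(True, 1))   # application specific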

UML# Manifesto

Objective

Taking a cue from Ivar Jacobson (“The road ahead for UML”), some modularity should be introduced in order to facilitate the use of UML in different contexts: organizational, methodological, or operational.

“Charpente” is French for supporting structure (Avery Singer)

 

Three main overlapping objectives should be taken into consideration:

  • Complexity levels: the language features should be regrouped into clearly defined subsets corresponding to levels of description. That would open the way to leaner and fitter models.
  • Model layers: the language constructs should be re-organized along MDA layers if models are to map stakeholder concerns regarding business requirements, system functionalities, and system design.
  • Specificity: principled guidelines are needed for stereotypes and profiles in order to keep the distinction between specific contents and “unified” ones, the former set with limited scope and visibility, the latter meant to be used across model layers and/or organizational units.

As it happens, a subset of constructs centered on functional architecture may well meet those three objectives, as it will separate supporting structures (“charpentes” in French) from features whose specifications have no consequences on system architectures.

State of The Art

Information technologies progress crab-like, the hardware leg striding forward with Moore's Law, and the software leg crawling on practices, with rare real leaps like relational (more than 30 years ago) and object (10 years later) technologies. The adoption of UML as a modeling standard at the end of the last century could have spurred a new wave of innovation for software engineering; instead, its protracted diffusion and shallow or biased usage suggest a latent factor behind the limping progress of software technologies. The absence of clear advances for the last 20 years, despite the undisputed soundness and cogency of available concepts and tools, may hint at some basic focusing error regarding what is to be considered.

Whereas object oriented approaches to software analysis and design are now broadly accepted, there is no consensus regarding the nature of targeted objects; more precisely, there is no explicit distinction between symbolic objects and processes on one hand, and their operational counterparts on the other hand. Such confusion can be observed at different levels:

  • Requirements: the perspective is limited to a dual distinction between problem (what) and solution (how) spaces. That approach is much too simplistic as it overlooks the functional dimension, namely how a software solution (nothing more than a piece of code) is to be used within operational systems (a distributed set of agents, devices, and symbolic machines).
  • Time-scales: when models are used (that’s not always the case), operational contexts (business objects and processes) are captured at inception time and their description frozen as snapshots until further notice. This lack of synchronized context and system models rules out any explicit management of systems life-cycles and transformation.
  • Model contents: the aim of model driven (or based) approaches is to define development flows in terms of models and organize engineering processes accordingly. But that may not be possible if the semantics of model contents are not properly differentiated between modeling stages, the consequence being a lack of principled support for model transformation. That point is best illustrated by the perplexing debate about executable models and models as code.
  • Architectures: blurred focus on systems and contexts necessarily entails a confusion between enterprise, functional, and technical architectures. That confusion is proving to be critical when service oriented architectures (SOA) are considered.
  • Processes: the aim of development processes is to manage concerns set by different stakeholders, based upon different rationales, and subject to changes along different time-frames. If those concerns cannot be specified independently, the design and assessment of engineering processes will lack a sound basis.

Yet, since all those problems stem from the same blurred focus, they may also be dealt with by a shift in modeling paradigm. Moreover, that shift could be especially productive given the availability of the conceptual constructs associated with object oriented approaches and UML. For that purpose it is necessary to clarify model purposes and customize the language accordingly.

Two Legs and a Bridge

From a general perspective, and beyond lexical controversies, models and architectures should be defined along two parallel scales:

  • Architectures: enterprise (concepts), systems (functionalities), and platforms (technologies).
  • Models: conceptual (business context and organization), analysis (symbolic representations), design (physical implementation).

Caminao is focused on models as bridges between business goals and system implementation. While simple or standalone applications may often be developed without mediation, that's not the case for shared applications deployed across distributed systems with independent life-cycles.

  • Enterprise architecture: business requirements are not necessarily expressed with modelling languages.
  • Functional architecture: functional (aka system) requirements should be expressed with modelling languages.
  • Technical architecture: non functional requirements are not expressed with modelling languages.
Dial M for Models
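The two scales and the role of modelling languages can be restated compactly; the mapping below is only a rephrasing of the bullets above, with illustrative field names.

    # Architecture layers mapped to model kinds and to the use of modelling languages.
    LAYERS = {
        "enterprise architecture": {"model": "conceptual (business context and organization)",
                                    "requirements": "business",
                                    "modelling language": "not necessarily used"},
        "functional architecture": {"model": "analysis (symbolic representations)",
                                    "requirements": "functional (aka system)",
                                    "modelling language": "should be used"},
        "technical architecture":  {"model": "design (physical implementation)",
                                    "requirements": "non functional",
                                    "modelling language": "not used"},
    }

    for layer, info in LAYERS.items():
        print(f"{layer}: {info['model']} / {info['requirements']} requirements")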

Grammars and Semantics

Whatever the editor, graphical or otherwise, models are built from expressions according to grammatical rules. Some rules are meaningless as they only define what is possible, i.e how to form correct expressions; others are mixed as the associated constructs may also convey some meaning.

Expressions are not necessarily textual: this one means “Sign Language”

In any case, those rules have to deal with three types of considerations: lexical, syntactic, and semantics.

  1. Legitimate items, words or symbols, are defined by the lexical layer.
  2. The way lexical items can be used to build well-formed expressions is defined by syntactic rules.
  3. Finally, language semantics define the relationships between expressions and targeted contexts.
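A toy example may make the three layers concrete; the miniature connector language below (lexicon, grammar, and meanings) is entirely invented for the purpose of illustration.

    # 1. Lexical layer: the legitimate items.
    LEXICON = {"object", "event", "role", "flow", "transition"}

    # 2. Syntactic layer: well-formed expressions are node-connector-node triples.
    def well_formed(expr: list) -> bool:
        return len(expr) == 3 and all(token in LEXICON for token in expr)

    # 3. Semantic layer: relationships between expressions and targeted contexts.
    MEANING = {("object", "flow", "event"): "an object emits an event"}

    expr = ["object", "flow", "event"]
    if well_formed(expr):
        print(MEANING.get(tuple(expr), "well-formed but meaningless"))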

The Unified Modeling Language (UML) is the outcome of the collaboration between James Rumbaugh, with his Object-Modeling Technique (OMT), Grady Booch, with his eponymous method, and Ivar Jacobson, creator of the object-oriented software engineering (OOSE) method. Whereas the Three Amigos did a pretty good job in consolidating the best of different approaches, UML 1.x was not built from scratch along the ideal template: instead of drawing clear lines between lexical, syntactic, and semantic layers, constructs were first established along semantic perspectives, namely static (aka structural) and dynamic (aka behavioral) views. Formal syntactic definitions of qualifiers and expressions came almost as an afterthought.

As is often the case when ambitious objectives set by visionaries are taken over by committees, things certainly didn’t improve with UML 2.x. Having to face a growing complexity without a solid grammatical ground, the OMG turned simultaneously to abstraction and specificity: on one hand the Meta-Object Facility (MOF) is meant to bypass classical grammar descriptions by bringing every modeling language under a single meta-language roof; on the other hand stereotypes and profiles provide a very practical way to tailor UML to specific domains and organizations.

But those options have driven UML in opposite directions: on one hand meta-models do nothing to improve UML capabilities and usability, as their aim is mainly to enable some interoperability between tool providers; on the other hand stereotypes and profiles push UML users into specific corners, which is not what a “unified” modelling language is supposed to do. As a consequence, more than 15 years after it became a standard, UML is still not widely accepted as such; and when it is used, it is all too often on a very limited scope, for a very specific purpose, or without much of a methodology.

Lean, fit, and Sharp, as in “Charpente”

The proposed approach takes advantage of the UML stereotyping mechanism to reorganize the semantics around a kernel of simple syntactic constructs dealing with nodes, connectors, and features:

Syntactic Constructs for Charpente Descriptions

All structures (nodes, connectors and features) are to be described uniformly with a single set of standard operators.

The semantics layer is to use a finite set of stereotypes to characterize:

  • Artifacts life-cycle, transient or persistent.
  • Artifacts nature, actual or symbolic.
  • Identification mechanisms: standalone, dependent, joint, continuous.
Charpente semantics
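Since the semantic layer relies on a finite set of stereotypes, it lends itself to simple enumerations; the Python class names below are illustrative, not part of any UML# specification.

    from enum import Enum

    class LifeCycle(Enum):       # artifact life-cycle
        TRANSIENT = "transient"
        PERSISTENT = "persistent"

    class Nature(Enum):          # artifact nature
        ACTUAL = "actual"
        SYMBOLIC = "symbolic"

    class Identification(Enum):  # identification mechanisms
        STANDALONE = "standalone"
        DEPENDENT = "dependent"
        JOINT = "joint"
        CONTINUOUS = "continuous"

    # e.g. a persistent symbolic artifact with standalone identification:
    print(LifeCycle.PERSISTENT, Nature.SYMBOLIC, Identification.STANDALONE)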

Traceability with “Charpentes”

Finally, semantics are needed to support built-in traceability of dependencies across models or along development cycles.

Built-in Traceability

Those dependencies must be managed depending on their scope and nature.

Regarding scope, one must distinguish between dependencies involving operational contexts and those limited to symbolic artifacts.

Modeling dependencies stem from requirements and refer to both operational contexts and their symbolic representations:

  • Representation dependencies between business objects or behaviors and their symbolic counterparts.
  • Structural dependencies between roots and members.
  • Functional dependencies between connected nodes.

Development dependencies arise from modeling decisions and refer solely to symbolic artifacts:

  • Ownership dependencies between address spaces and artifacts.
  • Usage dependencies through reused features, either shared or inherited.
  • Abstraction dependencies

Regarding the nature of dependencies, a clear-cut distinction must be maintained between those rooted in decisions and those born from necessity.
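For illustration, the taxonomy above could be recorded as traceability data; the field names and the scope rule below are assumptions, not UML# definitions.

    from dataclasses import dataclass

    MODELING = {"representation", "structural", "functional"}   # refer to contexts and artifacts
    DEVELOPMENT = {"ownership", "usage", "abstraction"}          # refer to symbolic artifacts only

    @dataclass
    class Dependency:
        kind: str          # one of the modeling or development kinds above
        source: str
        target: str
        by_decision: bool  # rooted in decisions vs born from necessity

        @property
        def scope(self) -> str:
            return "modeling" if self.kind in MODELING else "development"

    d = Dependency("representation", "Customer (business)", "Customer (symbolic)", by_decision=False)
    print(d.scope, "-", "decision" if d.by_decision else "necessity")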

By-products

Taking inspiration from the Capability Maturity Model Integration (CMMI), the benefits of UML# and, more broadly, of architecture driven modelling can be identified for product, project, and process areas:

  • Traceability is obviously a starting point as it is a prerequisite for streamlined engineering (product), portfolio and risk management (project), and application life-cycle management (process).
  • Measurement comes close, with built-in unbiased estimators, project workloads, and process assessment.
  • Quality management would clearly benefit from layered traceability and objective measurements, with built-in controls, non-regressive testing, and model-based validation.
  • Reuse provides another path to quality, with patterns (product), profiles (project) and development strategies (processes).
  • Finally, collaboration is to be facilitated between engineering processes targeting heterogeneous platforms, using different methodologies, across independent organizations.

That should open new perspectives for project productivity and effectiveness.

Implementation

A shallow implementation will simply add UML# stereotypes to core UML diagrams.

Shallow implementation: UML# stereotypes can be applied to nodes, references, connectors, and operators for composition and inheritance

Depending on tools, it may be possible (e.g. with No Magic) to customize diagrams or even define new ones.

A deep implementation will entail the development of a two-level interpreter, one for UML# syntax, the other for the semantics.

Deep implementation: model semantics are added to a core of syntactic constructs
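A very rough sketch of that two-level idea is given below; the construct and stereotype names are placeholders, and a real interpreter would obviously be far more involved.

    # Level 1 checks the syntactic construct, level 2 resolves semantic stereotypes.
    SYNTACTIC = {"node", "connector", "feature"}
    SEMANTIC = {"transient", "persistent", "actual", "symbolic",
                "standalone", "dependent", "joint", "continuous"}

    def interpret(element: dict) -> str:
        if element["construct"] not in SYNTACTIC:               # level 1: syntax
            raise ValueError("not a recognized construct")
        tags = [s for s in element.get("stereotypes", []) if s in SEMANTIC]
        return f"{element['construct']} stereotyped as {tags}"  # level 2: semantics

    print(interpret({"construct": "node", "stereotypes": ["persistent", "symbolic"]}))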

 

 

The Book of Fallacies

Objectives

Whereas the design side of software engineering has made significant advances since the introduction of Object Oriented approaches, thanks mainly to the Gang of Four and other proponents of design patterns, it’s difficult to see much progress on the other (and opening) side of the engineering process, namely requirements and analysis. Since such an imbalance creates a bottleneck that significantly hampers the potential benefits for the whole of engineering processes, our understanding of requirements should be reassessed in order to align external and internal system descriptions; in other words, to put under a single modeling roof business objects and processes on one hand, and their symbolic counterparts within systems on the other hand.

Magritte’s Fallacy

Given that disproving convictions is typically easier than establishing alternative ones, it may be necessary to deal first with some fallacies that all too often clog the path to a sound assessment of system requirements. While some are no more than misunderstandings caused by ambiguous terms, others are deeply rooted in misconceptions sometimes entrenched behind walled dogmas.

Hence, to begin with a tabula rasa, some kind of negative theology is required.

#1: Facts are not what they used to be

Facts are not given but observed, which necessarily entails some observer, set on task if not with vested interests, and some apparatus, natural or made on purpose.  And if they are to be recorded, even “pure” facts observed through the naked eyes of innocent children will have to be translated into some symbolic representation. Nowadays that point is taking center stage due to the onslaught of big data (see Data Mining & Requirements Analysis).

#2: Truth and Objectivity are not to be found in Models

The mother of all fallacies is to think that models can describe some real-world truth. Even though Karl Popper put such a fallacy to rest in the sciences more than half a century ago, quite a few still persist, looking for science in requirements or putting their hope in abstraction. Those stances cannot hold water because models necessarily reflect business and organizational concerns, expressed at a given time and set within specific contexts. Negative theology provides an antidote: if a model cannot be proven true, at least it should be possible to check that it cannot be falsified, i.e. that it is consistent with contexts and concerns.

#3: Requirements are not meant to be “Engineered”

Taking for granted that all requirements can be directly “engineered” is to overlook the role of architectures between stakeholders' and users' expectations on one side and systems capabilities on the other side. Such an assumption blurs the respective responsibilities of business and system analysts, and induces the latter to second-guess the former. What may be a practical shortcut for standalone applications becomes a major risk factor when robust and stable architecture capabilities are to support constant adaptations to changing business needs.

#4: Objects can be found everywhere

Object Oriented approaches are meant to deal with the design of software components, not with business objects and organization. While it may be useful to look at business contexts from an OO perspective, there is no reason to assume that business objects and processes can be analyzed using the semantics of software design: hope is no substitute for methodology.

#5: “Natural” languages can be applied to every domain

Except for plain applications (calling for trivial solutions), significant business domains rely on specific and often very formal languages that will have to be used to express requirements. That may be illustrated with examples from avionics to finance, not to mention law. When necessary, modeling languages are to provide a bridge between specific (domain) and generic (software) descriptions.

#6: Business concerns are “Conceptual”

Whatever the meaning of the adjective “conceptual”, it doesn’t fit business concerns. Hence, trying to bring requirements to some conceptual level is a one-way ticket to misunderstandings. Business concerns are very concrete indeed, rooted in the “here” and “now” of competitive environments, and so must be the requirements of supporting systems. Abstract (aka conceptual) descriptions will be introduced at a later stage in order to define the symbolic representations and consolidate them as software components.

#7: Model is Code

If models were substitutes for code, or vice versa, that would make software engineering (and engineers) redundant. Surprisingly, the illusion that the information contained in models is the same as the one contained in programs (and vice versa) has sometimes been wrongly drawn from the Model Driven Engineering paradigm, despite a rationale going the opposite way, namely toward a layered perspective with models standing for abstractions of systems and programs.

The same fallacy is also behind the confusion between modeling and programming languages. That distinction is not arbitrary or based on language peculiarities; it fulfills a well-defined purpose: programming languages are meant to deal with software abstractions, while modeling languages take the broader perspective of systems.

#8: “Pie-in-the-sky” Meta-models

Like any model, a meta-model is meant to map a concern to a context. But while models are concerned with the representation of business contexts, the purpose of meta-models is the processing of other models. Missing this distinction usually triggers a “flight for abstraction” and begets models void of any anchor to business relevancy. That may happen, for example, when looking for a meta-model unifying prescriptive and descriptive models; having very different aims, they belong to different realms and can never be joined by abstraction, but only by design.

#9: Problem/Solution Spaces

System engineering cannot be reduced to a simplistic dichotomy of problem and solution as it must solve three very different kinds of problems, with their respective contexts, stakeholders, and life-cycles:

  • Business ones, e.g. how to assess insurance premiums or compute a missile trajectory.
  • Functional ones, i.e. how the system under consideration should help solve business problems.
  • Operational ones, i.e. how the system may achieve targeted functionalities for different users, within distributed locations, under economic constraints on performance and resources.

As it happens, and not by chance, those layers are congruent with the modeling layers on one hand, and with the architectural ones on the other.

#10: Enterprise Architecture is equivalent to Systems

Enterprise architecture is often confused with IT systems, which induces misguided understandings of business architecture. The key confusion here is between architectures, supposedly stable and shared, and processes, which are meant to change and adapt to competitive environments. But managing the dynamic alignment of assets (architecture capabilities) and supported business processes is at the core of enterprise architecture.

Further Reading