Agile vs Waterfall: Right vs Left Brain?

Preamble

All too often when the agile project model is discussed, the debate turns into a religious war with waterfall as the villain. But asking which project model will bring salvation will only bring attrition, because the question is not which one is best but when each is best. It’s like asking whether a hammer is a better tool than a sickle!

Instead, one should first try to understand how they stand apart, and deduce from that what they are best for; the comparison between the left and right sides of the brain may provide a good starting point.

B2C: Balancing Brain Capacities

While it is (still) impossible to know what people think, it is possible to know where their thinking is rooted in the brain, and the answer is unequivocal:

  • The left side of the brain is analytical; faced with a problem, it looks for parts and processes them in sequence.
  • The right side of the brain is better at synthesis as it looks at the whole and processes all relevant information simultaneously.

Obviously that balance will differ between individuals depending on inborn qualities and developed preferences; moreover, each individual will weigh the two sides differently according to the task at hand. The same should apply when projects must decide between iterative and procedural approaches.

What a Hand can Hold

When project management is first considered, the Whole vs Parts alternative should be the discriminating factor: since human brains cannot process an unlimited number of elements simultaneously, work units to be handled by teams must be clearly circumscribed, with a number of independent functional units not exceeding a dozen.

That could be a pitfall for agile developments if iterations and increments were to be associated with an exponential growth of complexity. Yet, partitioning a large project into sub-tasks doesn’t necessarily call for a waterfall schema if the sub-tasks can be performed independently.

What the Hand is Told

Sequential processing can be dumb because the intelligence can already be etched in the sequences. That’s not the case if relevant information is to be picked out and processed as a whole; that can only be done with a clear purpose guiding the hand.

Replacing an administrative process by a collaborative one entails some kind of shared ownership, with teams granted full responsibility for decisions, schedules, and quality. Otherwise the different concerns, purposes or authorities, possibly but not necessarily at odds, should be set apart as sub-tasks, and milestones introduced for their consolidation.

What is Handed Over

Development projects may handle three kinds of artifacts: texts, models, and code, the first and last being mandatory, the second being optional. Since texts and code are best processed sequentially they are handed over to the left side of the brain; conversely, models are meant to combine different perspectives, e.g. structures and behaviors, which puts them on the right side of the brain.

Curiously, that seems to put agile in some kind of conundrum: despite models being the symbolic representation best suited to holistic processing, agile approaches are partial to code, even if models are not explicitly ruled out. As a matter of fact, agile tenets are more partial to products than to code, and what is handed over and tested against requirements is not meant to be a program but a running application.

Hand in Hand

Just like the two parts of the brain bring their best to shared concerns and purpose, agile and phased schemes should be enlisted according to their respective merits and shortcomings:

  • Agile is clearly a better option when shared ownership can be secured and milestones and models are not needed.
  • Phased solutions (“waterfall” is a red herring) are necessary when organizational, functional or technical dependencies between projects mean that some consolidation cannot be avoided between development processes.

Assuming agile methods are used whenever possible, models should provide the glue when external dependencies are to be taken into account:

  • Organizational dependencies are managed across model layers: business requirements govern system functionalities, which govern platform implementations.
  • Functional dependencies are managed across architecture tiers: transient non-shared components (aka boundaries) are governed by transient shared components (aka controls), which are governed by persistent shared components (aka entities); see the sketch after this list.
  • Development dependencies should not cross project limits, as they should be managed at domain level using inheritance or delegation.
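
A minimal sketch of the functional tiers above (hypothetical names, Python used for illustration only): boundaries depend on controls, and controls on entities, never the other way around.

```python
# Hypothetical illustration of the boundary / control / entity tiers:
# dependencies only point from transient non-shared components (boundaries)
# to transient shared ones (controls), and from controls to persistent
# shared ones (entities).

class CustomerEntity:
    """Persistent, shared: identity and integrity managed independently of use."""
    def __init__(self, customer_id: str, name: str):
        self.customer_id = customer_id
        self.name = name

class OrderControl:
    """Transient, shared: coordinates an execution unit; governed by entities."""
    def __init__(self, customer: CustomerEntity):
        self.customer = customer
        self.lines = []

    def add_line(self, item: str) -> None:
        self.lines.append(item)

class OrderBoundary:
    """Transient, non-shared: one per interaction; governed by its control."""
    def __init__(self, control: OrderControl):
        self.control = control

    def submit(self, item: str) -> None:
        self.control.add_line(item)

if __name__ == "__main__":
    boundary = OrderBoundary(OrderControl(CustomerEntity("C1", "Acme")))
    boundary.submit("widget")
```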

Models in Driving Seats

Objective

Model driven software engineering (aka MDA, aka MBSE, aka MDE) has had a great future for quite some time, yet there isn’t much consensus about what that could be and, in particular, about what kind of models should be in the driving seat.

Pending some agreement about model contents (e.g. specific?) and capabilities (e.g. executable?), the driving of software engineering processes will probably remain more a matter of practices than of principles.

Shadowing Reality

Models are shadows of reality, with their form and contents set by contexts and concerns. They can be characterized by their purpose and capabilities.

Regarding purpose, models fall into two groups: descriptive models deal with problems at hand, prescriptive models with solutions.

  • Descriptive models are partial and biased representations of actual contexts. Partial because they only deal with relevant objects, activities and features; biased because the selection is made on purpose.
  • Prescriptive models are complete descriptions of artifacts.

Regarding capabilities the distinction is between intensional and extensional languages:

  • Extensional (aka denotative) languages deal with sets of identified instances of objects and activities. As they condone partial or ambiguous statements, they are best used for descriptive models.
  • Intensional (aka connotative) languages deal with the semantics and constraints of symbolic representations. Due to their formal capabilities they are best used for prescriptive models.
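
A minimal, hypothetical sketch of that distinction: an extensional description enumerates identified instances, while an intensional one states the constraints that symbolic representations must satisfy.

```python
# Hypothetical illustration of extensional vs intensional descriptions.

# Extensional (descriptive): an actual, possibly partial, set of observed customers.
observed_customers = {"C1", "C2", "C7"}

# Intensional (prescriptive): a rule any symbolic representation must satisfy.
def is_valid_customer_id(candidate: str) -> bool:
    return candidate.startswith("C") and candidate[1:].isdigit()

assert all(is_valid_customer_id(c) for c in observed_customers)
```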

Along that reasoning, system models can be characterized according to architecture contexts: business processes (enterprise), functionalities (systems), and platform implementations (technologies):

  • Business models are descriptive and built with extensional languages (business is often said to bloom on discrepancies). They are necessarily partial as they target specific contexts and concerns.
  • Functional (aka analysis) models are prescriptive and built with intensional languages as they must specify the semantics and constraints of symbolic representations. Yet they are not necessarily complete since they don’t have to cover every detail of business processes or implementations (cf. traceability).
  • Implementation models (aka design) are prescriptive with no use for extensional capabilities since the relevant physical objects, i.e. extensions, are system components directly derived from system specifications. However, they must support complete and formal descriptions of component features.
Business object, analysis and design, implementation.

Models don’t Talk Alone

Models are built with logographic languages that should not be confused with phonetic ones: whereas the latter convey information sequentially, the former build semantics from different sources; that enables models to be read from different perspectives. Contrary to programs, whose semantics are bound to a fixed sequential execution, models don’t talk alone, but must be chatted to. Even when their readings are sequential, the sequences are governed by readers, not by models.

That point is pivotal if model transformation, arguably a pillar of model driven development, is to be supported along different perspectives and according to different concerns.

Besides, it must be noted that while models can be fully translated into (and reversed from) sequential representations (e.g. with XMI), transcripts are just projections and should not be confused with models as such.
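
A minimal sketch of that round trip (an XMI-like XML projection of a toy model, not actual XMI): the transcript fixes one traversal order, the model itself does not.

```python
# Hypothetical illustration: a toy model flattened into a sequential XML
# transcript and read back; the ordering belongs to the transcript, not the model.
import xml.etree.ElementTree as ET

model = {"Customer": ["id", "name"], "Order": ["id", "date"]}

def to_transcript(m: dict) -> str:
    root = ET.Element("model")
    for cls, attrs in m.items():
        node = ET.SubElement(root, "class", name=cls)
        for attr in attrs:
            ET.SubElement(node, "attribute", name=attr)
    return ET.tostring(root, encoding="unicode")

def from_transcript(text: str) -> dict:
    root = ET.fromstring(text)
    return {c.get("name"): [a.get("name") for a in c] for c in root}

assert from_transcript(to_transcript(model)) == model   # round trip preserves content
```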

Models don’t Walk Alone

Like talks, walks are sequential as they advance step by step. Hence, while some models can be walked (aka executed), such walks are only instances of behaviors supported by models.

That should be especially clear for system models, which describe architectures combining agents and devices as well as software components, contrary to programs, which are limited to the structure and sequential (or parallel) execution of software components.

Just like XMI transcripts should not be confused with original models, “executable” models should not be confused with fully fledged ones because they simulate the behaviors of agents and devices as if they were software components. While that may be useful for models targeting software components within a given architecture, ignoring the specificities of software, agents, and devices would be pointless when the objective is to design a system architecture.

Abstraction Levels

Defining models as abstractions may be correct but is not very helpful when deciding what kind of abstraction should drive development processes. First, the question is how concerns and purposes should be used to define abstractions, in other words to set apart significant from non-significant information. Then, in order to avoid flights toward ever higher abstractions without burying models in ground details, some principles are needed regarding specialization and generalization.

When systems are considered, abstraction levels are set by enterprise organization, system functionalities, and platform technologies, with concerns expressed by business, functional, or technical requirements. Given a hierarchy of concerns, models must be set at the proper level of abstraction: below that level part of the information will be redundant or irrelevant; above it, useful features and classifications may be overlooked as some information will either be wanting or will not discriminate between variants.

Such levels can be identified by selectively applying specialization and generalization to constrained hierarchies:

  1. Inheritance should not cross model layers: hierarchies of business, functional and technical artifacts must be kept separate.
  2. Structures and behaviors pertain to different concerns: abstractions of objects and aspects should be managed independently.
  3. Specialization should be applied when subsets of identified objects or behaviors are associated with specific features. Generalization should be introduced when models must be consolidated.
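
A minimal sketch of the first constraint (hypothetical names): specialization stays within the business layer, while the functional artifact refers to its business counterpart by delegation rather than by inheritance across layers.

```python
# Hypothetical illustration: inheritance kept within one model layer,
# delegation used across layers.

class BusinessAccount:                       # business layer root
    def __init__(self, owner: str):
        self.owner = owner

class SavingsAccount(BusinessAccount):       # specialization within the same layer
    def __init__(self, owner: str, rate: float):
        super().__init__(owner)
        self.rate = rate

class AccountFunction:                       # functional layer artifact
    def __init__(self, account: BusinessAccount):
        self.account = account               # delegation across layers, no inheritance

    def display_owner(self) -> str:
        return self.account.owner

print(AccountFunction(SavingsAccount("Alice", 0.02)).display_owner())   # Alice
```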

Such an approach brings significant benefits for reuse, arguably one of the main objectives of abstraction. That appears clearly when developments are governed by models and architecture components and mechanisms are to be shared across model layers.

Reuse of business and functional models along functional tiers, with abstractions for structures (green) and aspects (orange).

Driving Models and Roadmaps

Systems engineering has to meet three kinds of requirements: business needs, system functionalities, and components implementation. In a perfect world there would be one stakeholder, one architecture, and one time-scale. Unfortunately, projects may involve different stakeholders, target different architectures, and be set along different time-frames. In that case milestones and roadmaps are to be introduced in order to bring all expectations and commitments under a common roof, with models providing the glue between them. That may be achieved with models defined along MDA layers:

  • Computation Independent Models (CIMs) describe business objects and processes independently of supporting systems.
  • Platform Independent Models (PIMs) describe how business processes are supported by systems seen as functional black boxes, i.e. disregarding the constraints associated with candidate technologies.
  • Platform Specific Models (PSMs) describe system components as implemented by specific technologies.
From Models to Roadmaps
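
As a minimal illustration of the three MDA layers listed above (hypothetical content), the same ordering concern can be described at the business, functional, and platform levels, each layer refining the one before.

```python
# Hypothetical illustration of CIM / PIM / PSM applied to one ordering concern.
from dataclasses import dataclass

@dataclass
class CIM:                      # computation independent: business objects and processes
    process: str = "approve purchase orders"
    business_object: str = "purchase order"

@dataclass
class PIM:                      # platform independent: system functions as black boxes
    cim: CIM
    system_function: str = "order approval service"

@dataclass
class PSM:                      # platform specific: components bound to a technology
    pim: PIM
    component: str = "OrderApprovalController"
    platform: str = "REST over a relational database"

print(PSM(PIM(CIM())))
```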

Interestingly, both extremities of development processes are textual descriptions, informal at the beginning, formal at the end, with models providing bridges in between. As noted above, those bridges are not always necessary: texts can be directly translated into instructions (as illustrated by voice command devices), or project teams with shared ownership can develop programs according users’ requirements (as promoted by Agile methods). Yet, the question should always be asked, and when textual requirements cannot be directly developed into programs the first task should be to draw the shortest modeling path.

Roadmaps and Meta-models

Model driven tools may appear under different guises yet most rely on meta-models and the Meta Object Facility (MOF). Given that meta-models are models of models, they are supposed to focus on a subset of relevant features selected on purpose, which, for driving models, would be some kind of road signs governing model transformations. What could that be? Two approaches are to be considered:

  • Language translation: as presented by the report of the Dagstuhl Seminar, meta-models can be designed according to their generic transformation capabilities and used to single out language constructs in order to transfer model contents into another language.
  • Separation of concerns: as development advances and models take into account different concerns, meta-models can be used to monitor and control the selective processing of corresponding contents. That could be achieved if transformations were governed by traffic signs singling out relevant features and waving aside the leftovers.
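
A minimal sketch of the second option (hypothetical tags): model elements carry the concerns they bear upon, and a transformation step only picks up the elements flagged for its concern.

```python
# Hypothetical illustration of "traffic signs": elements are tagged with concerns,
# and a transformation selects the relevant ones, waving the rest aside.

model_elements = [
    {"name": "Customer",     "concerns": {"business", "functional"}},
    {"name": "SessionToken", "concerns": {"technical"}},
    {"name": "OrderProcess", "concerns": {"business"}},
]

def select_for(concern: str, elements: list) -> list:
    """Keep only the elements flagged for the given concern."""
    return [e["name"] for e in elements if concern in e["concerns"]]

print(select_for("business", model_elements))   # ['Customer', 'OrderProcess']
```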

Each option points to a different perspective, the former targeting tools providers, the latter aiming at modellers. Whereas MDA layers (for business, functionalities, and technology) clearly point to models built with the Unified Modeling Language according to organizational, functional and technical concerns, most current implementations follow the language option; while those tools may be (theoretically) open at technical level, they usually rely on domain specific languages. By narrowing functional scopes and relying on automated translation to bridge the gaps, solutions based upon domain specific languages can only provide local solutions to fragmented problems. That road could be a bumpy one for model driven engineering.

Postscript

Thinking again, the “MDA” moniker can be misleading as it may blur the distinction between models and their contents: given that MDA model layers effectively correspond to architecture levels, the pivotal MDA contention is that the modeling of systems is meant to be driven by enterprise architecture divides.

Projects As Non-Zero Sum Games

Objective

Contracts are as much about collaboration and trust as they are about protection. While projects are not necessarily governed by contracts, all entail some compact between participants. From in-house developments with shared ownership on one hand, to fixed price outsourcing on the other hand, success will depend on collaboration and a community of purpose.

The whole is worth more than its parts (Ariel Schlesinger)

Hence, a first objective should be to prevent prejudiced assumptions and defensive behaviors. Then, taking a cue from Game Theory, compacts should be designed to enhance collaboration between parties by establishing a non-zero sum playground where one’s benefits don’t have to come from others’ losses.

Players and Stakes

Basic participants of engineering projects can be put in two groups, possibly with subdivisions: business units expressing requirements, and engineering units meant to provide solutions.

Who’s Who

The former (users, customers, or stakeholders) specify the needs, pay for the outcome, and expect to benefit from its use. Their success is measured by return on investment (ROI) and depends on cost, quality, and timely delivery.

The latter design the solution, develop the components, and integrate them into target environments. Narrowly defined, their success is measured by costs.

Project participants should collaborate on features and timing.

Hence, while stakes may clearly conflict on prices, there is room for collaboration on features, quality and timing, and that may bring benefits to both customers and providers.

Communication and Collaboration: Modeling Languages

Understanding is a prerequisite to collaboration, and except for in-house developments, there is no reason to expect immediate (aka mediation free) understanding between business and engineering units. Some common language is therefore required if heterogeneous concerns are to be matched.

Mapping functional and engineering concerns.

While in-house developments under shared ownership can proceed with domain specific languages, collaboration between different organizational units over time can only be supported by modeling languages with non-specific semantics.

Assessments and Strategies: Requirements Metrics

If participants are to act rationally based on symmetric information, they must agree on some measurements for size and complexity.

Given that collaboration is not meant to offset interests that are by nature different or even conflicting, participants’ strategies must be based on shared and reliable information regarding their respective stakes. Moreover, if timing is to matter, projects’ metrics will have to be reassessed periodically, and strategies redefined accordingly.

Matching the respective deadlines and strategies of customers and providers

Despite clear shortcomings about targets and accuracy, requirements metrics are nonetheless a necessary component of any collaborative framework, with or without explicit contracts.

First of all, some measurements are required to set schedules and plan resources. Even faulty, those estimators will serve their purpose if agreed upon by all participants, with possible biases redressed by consent, and lack of accuracy reduced or circumscribed progressively.

Then, as engineering projects often come with intrinsic uncertainties, they may go off-trail. Provided with timely and reliable indicators, participants should be able to anticipate hurdles, reassess strategies, and amend agreements.

Finally, metrics should make room for arbitrage and differentiated strategies; given trustworthy information and clear acceptance criteria, participants would be in a position to weigh their respective stakes against risks, rearrange their priorities, and plan their endgames appropriately.

Prices and Acceptance: Contracts

While trustworthy information is a necessary condition for cooperation, it’s not a sufficient one, as players may be tempted to cheat or defect if they think they can get away with it. Leaving out ethical considerations and invasive regulations, the best policy should be to set rewards and sanctions so as to make a convincing case for the benefits of cooperation.

That can be achieved if participants can opt for differentiated and non-conflicting strategies set according to model layers.

Starting with business processes (aka computation independent models) and expected return on investment, customers consider system functionalities (a) and ask providers for feasibility (b), and schedules (c).

Given a functional architecture (aka platform independent models), participants may define strategies: customers associate features with date and expected benefits, providers plan resources and deliveries for products (aka platform specific models and code).

Commitments, by customers on prices and providers on dates, are made at two levels: functional architectures (d) and features (e).

Business requirements (a), functional architecture (b), developments (c), architectural commitments (d), incremental developments (e).

As for any iterative process, changes must be explicitly circumscribed by invariants, e.g. the functional architecture of supporting systems. Yet, within those constraints, there may be room for adjustments insofar as providers’ costs and customers’ returns are contingent on schedules.

Processes: Time, Risks, Responsibilities and Milestones

As Einstein put it, “The only reason for time is so that everything doesn’t happen at once”. That seems a perfect fit for project planning, as illustrated by two archetypal project configurations:

  • At one extreme, in-house projects with shared ownership may operate within their own time wraps (aka time-boxes) because decisions by customers and providers can be taken jointly and simultaneously, independently of external factors.
  • At the other extreme, outsourced projects are set along sequenced milestones and governed by different organizational units, each with its own time scale. With tasks performed independently and decisions made separately, time becomes a factor and processes are introduced to consolidate time-scales and manage risks and decisions alongside.

More generally, engineering processes are introduced when decisions are to be made along time by different organizational entities. Assuming that time is governed by changes and the decisions to be made thereof, the sequencing of tasks should be defined with regard to the nature of events and decision-making:

Technical: engineering processes come with their own intrinsic constraints regarding the sequencing of operations. When those operations can be performed independently of contexts the whole sequence can be seen as a single step wrapped into a time fold. Otherwise they must be associated with distinct process steps.

Contractual: while uncertainties about contexts may affect both expectations and commitment, decisions have to be made notwithstanding. But decisions carry risks and should therefore be factored out, with rationale and responsibilities stated explicitly. While some responsibilities may be shared, decisions about contexts should remain the sole responsibility of the participants directly in charge and appear as milestones set in contracts.

Managed: whatever the number of model layers and automated transformations, engineering processes set about and wind up with human activities. Given that resources and skills are by nature limited, participants have to make the most of their use. But once milestones are set and technical constraints identified, each participant should be able to optimize its own planning.

That makes for three basic project configurations:

  • Requirements analysis is about business requirements and feasibility. Its main objective is to identify technical and contractual milestones.
  • Use case developments are self-contained projects under shared ownership, bounded by contractual milestones. They come with defined prices and schedules whose mix can be managed by consent.
  • System functionalities are set by cross-cutting objectives and their development is therefore governed by contractual milestones and schedules.
Architectures and Project Configurations

Managing engineering projects along these principles would greatly improve their integration with business needs on one hand, enterprise architecture on the other hand.

Requirements Capture

Objective

Requirements are not manna from heaven: they do not come to the world as models. So, what is the starting point, the primary input? According to John, “In the beginning was the word …”, but Gabriel García Márquez counters that at the beginning “The world was so recent that many things lacked names, and in order to indicate them it was necessary to point.”

 

Frog meditating on requirements capture (Sengai)

Requirements capture is the first step along project paths, when neither words nor things can be taken for granted: names may not be adequately fixed to denoted objects or phenomena, and the things being pointed at may still be anonymous, waiting to be named.

Confronted with lumps of words, assertions and rules, requirements capture may proceed with one of two basic options: organize requirements around already known structuring objects or processes, or listen to user stories and organize requirements alongside. In both cases the objective is to spin words into identified threads (objects, processes, or stories) and weave them into a fabric with clear and unambiguous motifs.

From Stories to Models

Requirements capture epitomizes the transition from spoken to written languages as its objective is to write down user expectations using modeling languages. Just like languages in general, such transitions can be achieved through either alphabetical or logographic writing systems, the former mapping sounds (phonemes) to signs (glyphs), the latter setting out from words and mapping them to symbols associated with archetypal meanings; and that is precisely what models are supposed to do.

Documented communication makes room for mediation

As demonstrated by Kanji, logographic writing systems can support different spoken languages provided they share some cultural background. That is more or less what is at stake with requirements capture: tapping requirements from various specific domains and transforming them into functional requirements describing how systems are expected to support business processes. System functionalities being a well circumscribed and homogeneous background, a modeling framework supporting requirements capture shouldn’t be out of reach.

Getting the right stories

If requirements are meant to express actual business concerns grounded in the here and now of operations, trying to apprehend them directly as “conceptual” models would negate the rationale supporting requirements capture. User stories and use cases help to prevent such misgivings by rooting requirements in concrete business backgrounds of shared references and meanings.

Requirements capture should never take flight toward otherworldly expectations

Yet, since the aim of requirements is to define how system functionalities will support business processes, it would help to get the stories and cases right upfront, in other words to organize them according to patterns of functionalities. Taking a cue from the Gang of Four, three basic categories should be considered:

  • Creational cases or stories deal with the structure and semantics of business objects whose integrity and consistency have to be persistently maintained independently of the activities using them. They will govern objects’ life-cycles (create and delete operations) and identification mechanisms (fetch operations).
  • Structural cases or stories deal with the structure and semantics of transient objects whose integrity and consistency have to be maintained while in use by activities. They will govern features (read and update operations) and target aspects and activities rooted (aka identified) through primary objects or processes.
  • Behavioral cases or stories deal with the ways objects are processed.
Products and Usage are two different things

Not by chance, those categories are consistent with the Object/Aspects perspective, which distinguishes between identities and objects’ life-cycles on the one hand, and features and facets on the other. They are also congruent with the persistent (non-transactional)/transient (transactional) distinction, and may also be mapped to CRUD matrices.

Since cases and stories will often combine two or three basic categories, they should be structured accordingly and reorganized so as to coincide with the responsibilities on domains and projects defined by stakeholders.
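
A minimal sketch of the mapping between the three categories above and CRUD-like operations; the category names follow the text, the operation lists are only one plausible reading of it.

```python
# Hypothetical illustration: story/case categories mapped to CRUD-like operations.

STORY_CATEGORIES = {
    "creational": {"scope": "persistent objects", "operations": {"create", "delete", "fetch"}},
    "structural": {"scope": "transient objects",  "operations": {"read", "update"}},
    "behavioral": {"scope": "processing",         "operations": {"execute"}},
}

def classify(story_operations: set) -> list:
    """Return the categories a story touches, given the operations it mentions."""
    return [cat for cat, spec in STORY_CATEGORIES.items()
            if story_operations & spec["operations"]]

print(classify({"create", "read"}))   # ['creational', 'structural']
```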

User Stories vs Use Cases

Other than requirements templates, user stories and use cases are two of the preferred methods for capturing requirements. Both put the focus on user experience and non formal descriptions, with use cases focusing at once on interactions between agents and systems, and user stories introducing them along the course of refinements. That makes them complementary:

  • Use cases should be the method of choice when new functionalities are to be added to existing systems.
  • User stories would be more suited to standalone applications but may also be helpful to single out use cases’ success scenarios.

Depending on circumstances it may be easier to begin requirements capture with a success story (green lines) and its variants or with use cases (red characters) with some activities already defined.

Combining user stories and use cases for requirements capture may also put the focus on system footprint, setting apart the activities to be supported by the system under consideration. On a broader perspective, that may help to position requirements along architecture layers: user stories arise from business processes set within the enterprise architecture, use cases are supported by the functional architecture.

Spinning the Stories

Given that the aim of requirements is to define how systems will support process execution and object persistency, a sound policy should be to characterize the anchors meant to be targeted by requirements’ nouns and verbs. That may be achieved with basic parsing procedures:

  • Nouns and verbs are set apart and associated with candidate archetypes for physical or symbolic objects, physical or symbolic activities, corresponding containers, events, or roles.
  • Among them, business concerns should point to managed individuals, i.e. those anchors whose instances must be consistently identified by business processes.
  • Finally, business rules will be used to define features whose values are to be managed at instance level.
Spinning words into archetypes

Parsing nondescript requirements for anchors will set apart a distinctive backbone of clear and straight threads on the one hand, and a remainder of rough and tousled features and rules on the other.
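
A minimal sketch of that parsing step (a hand-made lexicon stands in for proper linguistic analysis, all names hypothetical): nouns and verbs are set apart as candidate anchors, the rest is left over as features and rules to be examined later.

```python
# Hypothetical illustration: spinning requirement words into candidate archetypes.

ARCHETYPES = {
    "customer": "symbolic object", "invoice": "symbolic object",
    "warehouse": "container", "payment": "event",
    "ship": "physical activity", "approve": "symbolic activity",
}

def spin(requirement: str):
    """Split a requirement sentence into anchors (by archetype) and leftovers."""
    anchors, leftovers = {}, []
    for word in requirement.lower().replace(",", " ").split():
        if word in ARCHETYPES:
            anchors[word] = ARCHETYPES[word]
        else:
            leftovers.append(word)
    return anchors, leftovers

anchors, rest = spin("The customer must approve the invoice before payment")
print(anchors)   # customer, approve, invoice, payment mapped to their archetypes
```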

Fleshing the Stories out

Archetypes are like clichés: they may support a story but cannot make it. So it goes with requirements, whose meaning is to be found in the intricacy of features and business rules.

However tangled and poorly formulated, rules provide the substance of requirements as they express the primary constraints, needs and purposes. That jumble can usually be reshaped in different ways depending on perspective (business or functional requirements), timing constraints (synchronous or asynchronous) or architectural contexts; as a corollary, the way rules are expressed will have a significant impact on the functional architecture of the system under consideration.

If transparency and traceability of functional arbitrages are to be supported, the configuration of rules has to be rationalized from requirements inception. Just like figures of speech help oral storytelling, rules archetypes may help to sort out syntax from semantics, the former tied to rules themselves, the latter derived from their targets. For instance, constraints on occurrences (#), collections (*) or partitions (2) should be expressed uniformly whatever their target: objects, activities, roles, or events.

From rules syntax to requirements semantics

As a consequence, and to all intents and purposes, rules analysis should not only govern requirements capture, it should also shadow iterations of requirements analysis, each cycle circumscribed by the consolidation of anchors:

  • Single responsibility for rule implementation: project, architecture or services, users.
  • Category: whether a rule is about life-cycle, structure, or behavior.
  • Scope: whether enforcement is transient or persistent.
  • Coupling: rules triggered by, or bringing change to, contexts must be set apart.
  • Control: whether enforcement has to be monitored in real-time.
  • Power-types and extension points: all variants should be explicitly associated with a classification or a branching rule.
  • Subsidiarity: rules ought to be handled at the lowest level possible: system, domain, collection, component, feature.
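
A minimal sketch (hypothetical field values) turning the checklist above into a record that can shadow each rule across iterations of requirements analysis.

```python
# Hypothetical illustration: a rule record mirroring the checklist above.
from dataclasses import dataclass

@dataclass
class RuleRecord:
    statement: str
    responsibility: str   # project, architecture or services, users
    category: str         # life-cycle, structure, or behavior
    scope: str            # transient or persistent enforcement
    coupling: bool        # triggered by, or bringing change to, contexts
    control: bool         # enforcement monitored in real time
    subsidiarity: str     # system, domain, collection, component, or feature

rule = RuleRecord(
    statement="Orders above 10,000 must be approved by two managers",
    responsibility="users", category="behavior", scope="transient",
    coupling=False, control=True, subsidiarity="domain",
)
print(rule.category)
```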

Pricing the Stories

One of the primary objectives of requirements is to put a price on the system under consideration and to assess its return on investment (ROI). If that is straightforward for hardware and off-the-shelf components, things are not so easy for software developments whose metrics are often either pragmatic but specific, or inclusive but unreliable.

Putting aside approaches based on program size, both irrelevant for requirements assessment and discredited as development metrics, requirements can be assessed using story or function points:

  • Story points conduct pragmatic assessments of self-contained stories. They are mostly used locally by project teams to estimate their tasks and effort.
  • Functional metrics are more inclusive as they are based on a principled assessment of archetypal system functionalities. Yet they are mostly confined to large organizations, and their effectiveness and reliability highly depend on expertise.

Whereas both approaches start with user expectations regarding system support, their rationale is different: function points (FPs) apply to use cases and take into account the functionalities supported by the system; story points (SPs) apply to user stories and their scope is by definition circumscribed. That difference may be critical when categories are considered: points for behavioral, structural and creational stories should be weighted differently.

Yet, when requirements capture is supported both by stories and use cases, story and function points can be combined to obtain functional size measurements:

  • Story points are used to assess business contents (aka application domain) based on master data (aka persistent) entities, activities, and their respective power-types.
  • Use case points target the part played by the system, based on roles and coupling constraints defined by active objects, events, and controlling processes.

Function Points as Use Case Points weighted by Story Points

Non-adjusted function points can then be computed by weighting use case function points with the application domain function points corresponding to the use case footprint.
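
A minimal sketch of that combination; the weighting formula is only one plausible reading of the text, and the figures are hypothetical throughout.

```python
# Hypothetical illustration: use case points weighted by the application domain
# (story) points corresponding to each use case footprint; all numbers made up.

domain_points = {"customers": 7, "orders": 10, "catalog": 5}   # story-point side

use_cases = [
    {"name": "place order",  "uc_points": 10, "footprint": ["customers", "orders", "catalog"]},
    {"name": "update offer", "uc_points": 5,  "footprint": ["catalog"]},
]

def unadjusted_function_points(cases: list, domain: dict) -> float:
    total = 0.0
    for case in cases:
        # weight each use case by the share of domain points its footprint covers
        weight = sum(domain[d] for d in case["footprint"]) / sum(domain.values())
        total += case["uc_points"] * (1 + weight)
    return total

print(round(unadjusted_function_points(use_cases, domain_points), 1))
```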
