When push comes to shove, deciding on a development process means choosing between instant and delayed returns: focusing on users' needs with agile development, or taking extended features into consideration and weighing the benefits of reuse against additional costs, e.g.:
Designs to be reused as patterns.
Structuring business process models so that they could be designed as business functions.
Formatting business logic for automated code generation.
The intricacies of stakes and decision-making can be laid out by applying the Observation-Orientation-Decision-Action (OODA) loop to the four views of changes, namely enterprise, business domains, business applications, and systems (a sketch follows the list):
At enterprise level the loops are triggered by changes in business environments pertaining to business model and objectives. They are supposed to affect different business domains.
Observations: Business opportunities
Orientation: Assessment of business opportunities with regard to business objectives.
Decision: Committing resources to changes in organization and processes
Action: Achieving changes in organization and processes
Changes initiated from business domains can be derived from the enterprise level or result from more specific objectives. They are supposed to affect different applications.
Observations: Business analysis
Orientation: Functional feasibility and assessment of transformation benefits.
Decision: Committing changes in functional architecture.
Action: Development, integration, tests
Changes at application level are initiated by organizational units, business or otherwise. They are supposed to be self-contained.
Observations: Users requirements
Orientation: Engineering feasibility and assessment of development options.
Decision: Choice of a development model.
Action: Development, integration, tests
Changes at system level are initiated by organizational units, including business ones (quality of service). They are supposed to affect different applications.
Observations: Process mining and operational requirements
Orientation: Operational feasibility and assessment of configurations.
Decision: Development model.
Action: Deployment and acceptance.
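As a minimal illustration, not part of any framework, the four loops could be represented as plain data records; the Python sketch below uses hypothetical names and keeps only the gist of each station.

```python
# A minimal sketch, assuming hypothetical names: each architecture view
# runs its own OODA loop over changes; consistency across loops is to be
# ensured declaratively (shared invariants), not by imperative scheduling.
from dataclasses import dataclass

@dataclass(frozen=True)
class OodaLoop:
    level: str          # enterprise, business domain, application, or system
    observation: str
    orientation: str
    decision: str
    action: str

LOOPS = [
    OodaLoop("enterprise", "business opportunities",
             "assessment against business objectives",
             "commit resources to changes in organization and processes",
             "achieve changes in organization and processes"),
    OodaLoop("business domain", "business analysis",
             "functional feasibility, transformation benefits",
             "commit changes in functional architecture",
             "development, integration, tests"),
    OodaLoop("application", "users requirements",
             "engineering feasibility, development options",
             "choice of a development model",
             "development, integration, tests"),
    OodaLoop("system", "process mining, operational requirements",
             "operational feasibility, assessment of configurations",
             "development model",
             "deployment and acceptance"),
]

for loop in LOOPS:
    print(f"{loop.level}: {loop.observation} -> {loop.action}")
```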
Given that sizeable companies with differentiated organization and business have to manage these different threads continuously and consistently, old-fashioned imperative processes can only lead to paralysis. Hence the need for a declarative approach to EA workflows.
As far as enterprise architecture is concerned, the issue of scale is fogged by two confusions: one between processes and structures, the other between space and time. That square of concerns is at the core of the discipline.
The Matter of Time
Even before the digital unfolding of environments, everybody would agree that business is all about timing; and yet that critical dimension remains a side issue in most enterprise architecture frameworks, which consequently fail to deal with enterprises' ability to change and adapt in competitive environments.
With regard to time, the business perspective is said to be synchronic because it must continuously tally with environments' constraints, opportunities, and risks.
By contrast, the engineering perspective is said to be diachronic because, once fastened to requirements, developments are supposed to proceed according to their own time-span.
For enterprise architects, pairing up business and engineering momentum may look like a Fourier transform that would decompose enterprise architecture into piecemeal capabilities to be adjusted to the flow of business circumstances. But assets being by nature discrete, changes are not easily ironed out and some mechanism is necessary to align business and engineering time-frames, the former set at enterprise level and used to align enterprise architecture capabilities with business objectives, the latter set at system level and used to manage developments.
Agile methodologies solve the problem by assuming continuous deliveries disconnected from external schedules and by folding projects into detached time warps. Along with debatable scaling attempts, decidedly non-agile procedures are then used to carry on with agile projects at system level.
As it happens, the iterative model can be upgraded to architecture level, enabling the linking of business-driven changes to system-based ones without breaking agile principles:
Projects’ scope, objectives, and invariants are set with regard to enterprise architecture capabilities.
Iterations combine requirements analysis, development, and acceptance.
Increments and deliverables are defined dynamically contingent on scope and invariants.
Exit conditions (aka deliveries) are defined with regard to quality of services and technical requirements.
So-called architecture backlogs could thus be added to coordinate self-contained developments, standalone applications as well as system business functions, with invariants set apart at architecture level (see the sketch below).
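A possible shape for such a backlog is sketched below; the items, capabilities, and invariants are hypothetical and only meant to show how scope and invariants frame otherwise self-contained developments.

```python
# A sketch only: backlog items tie developments to architecture capabilities;
# invariants are set upfront at architecture level, increments and exit
# conditions are defined dynamically within that frame.
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    name: str
    capability: str                          # targeted architecture capability
    invariants: list[str]                    # fixed for the whole development
    increments: list[str] = field(default_factory=list)       # defined per iteration
    exit_conditions: list[str] = field(default_factory=list)  # QoS, technical requirements

architecture_backlog = [
    BacklogItem(
        name="customer registration",
        capability="customer management",
        invariants=["customer identification scheme", "privacy rules"],
        exit_conditions=["response time within agreed threshold", "audit trail enabled"],
    ),
    BacklogItem(
        name="payment service",
        capability="billing",
        invariants=["transaction integrity", "regulatory compliance"],
        exit_conditions=["throughput target met"],
    ),
]

for item in architecture_backlog:
    print(item.name, "->", item.capability, "| invariants:", ", ".join(item.invariants))
```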
But the coordination issue remains between architecture backlogs, and adding procedures or committees shouldn't be an option as it would seriously curb enterprise agility. By contrast, model-based solutions are to ensure a constant and consistent adaptation of enterprise architectures to their environment.
The way tests are designed and executed is being doubly affected by development methods and AI technologies. On one hand, well-founded approaches (e.g. test-driven development) are often confined to faith-based niches; on the other hand, automated schemes and agile methods push many testers out of their comfort zone.
Resetting the issue within a knowledge-based enterprise architecture would pave the way for sound methods and could open new doors for their users.
Tests can be understood in terms of preventive and predictive purposes, the former with regard to actual products, the latter with regard to their forthcoming use. On that account policies are to distinguish between:
Test plans, to be derived from requirements.
Test cases, to be collected from environments.
Test execution, to be run and monitored in simulated environments.
The objective is to cross these pursuits with knowledge architecture layers.
Test plans are derived from business requirements, either on their own at process level (e.g. as user stories or activities), or combined with functional requirements at application level (e.g. as use cases). Both plans describe sequences of actions meant to be performed by organizational entities identified at enterprise level. Circumstances are then specified with regard to quality of service and technical requirements.
Test cases' backbones are built from business scenarii fleshed out with instances mimicking identified entities from the business environment, and hypothetical decisions taken by entitled users. The generation of actual instances and decisions could be automated depending on the thoroughness and consistency of business requirements.
To be actually tested, business scenarii have to be embedded into functional ones, yet the distinction must be maintained between what pertains to business logic and what pertains to the part played by supporting systems.
By contrast, despite being built from functional scenarii, integration and acceptance ones are meant to be blind to business or functional contents, and cases can therefore be generated independently.
Unit and component tests are the building blocks of all test cases, the former rooted in business requirements, the latter in functional ones. As such they can be used for a built-in integration of tests and development, as sketched below.
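As a rough illustration, with purely hypothetical names and rules, the sketch below shows how unit checks (business logic) and component checks (system behavior) could be composed into an integration case while keeping the two concerns apart.

```python
# A sketch under assumed rules: unit tests carry business logic only,
# component tests carry system behavior only, and integration cases are
# composed from both without blurring the distinction.
def unit_check_discount(order_total: float) -> bool:
    """Unit-level check rooted in (hypothetical) business requirements."""
    discount = 0.1 if order_total > 100 else 0.0
    return 0.0 <= discount <= 0.1

def component_check_order_entry(payload: dict) -> bool:
    """Component-level check rooted in (hypothetical) functional requirements."""
    return {"customer", "items"} <= payload.keys()

def integration_case(payload: dict) -> bool:
    """Integration case built from the unit and component blocks above."""
    return component_check_order_entry(payload) and unit_check_discount(
        sum(item["price"] for item in payload["items"])
    )

print(integration_case({"customer": "c42", "items": [{"price": 120.0}]}))  # True
```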
TDD in the loop
Whatever its merits for phased projects, the development V-model suffers from a structural bias because flaws rooted in requirements, arguably the most damaging, tend to be diagnosed after designs are encoded. Test-driven development (TDD) takes a somewhat opposite approach, as code specifications are governed by testability. But reversing priorities may also introduce reverse issues: where the V-model's verification and validation come too late and too wide, TDD's may come too hasty and blinkered, with local issues masking global ones. Applying the OODA (Observation, Orientation, Decision, Action) loop to test cases offers a way out of the dilemma:
Observation (West): test and assessment for component, integration, and acceptance test cases.
Orientation (North): assessment in the broader context of requirements space (business, functional, Quality of Service), or in the local context of application (East).
Decision (East): confirm or adjust the development paths with regard to functional scenarii, development backlog, or integration constraints.
Action (South): develop code at unit, component, or process levels.
As each station is meant to deal with business, functional, and operational test cases, the challenge is to ensure a seamless integration and reuse across iterations and layers.
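A very rough Python sketch of the loop, with hypothetical function names standing for the four stations, could look as follows.

```python
# A sketch only: each pass observes test outcomes, orients them locally
# (application) or globally (requirements space), decides on the path,
# and triggers the next development action.
def observe(cases):
    """Run component, integration, and acceptance test cases."""
    return [case() for case in cases]

def orient(results, context="local"):
    """Assess results in the local application context or the broader requirements space."""
    return all(results) if context == "local" else sum(results) / len(results) > 0.9

def decide(on_track):
    """Confirm the development path or adjust scenarii, backlog, or integration constraints."""
    return "confirm path" if on_track else "adjust backlog or scenarii"

def act(decision):
    """Develop code at unit, component, or process level (stubbed here)."""
    print("next action:", decision)

cases = [lambda: True, lambda: True, lambda: False]   # dummy test cases
act(decide(orient(observe(cases), context="local")))
```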
Managing test cases
Whatever the method, test plans are meant to mirror requirements scope and structure. For architecture-oriented projects, tests should be directly aligned with the targeted capabilities of architecture layers:
For business-driven projects, test plans should be set along business scenarii, with development units and associated test cases defined with regard to activities. When use cases, which cover the subset of activities supported by systems, are introduced upfront for both business and functional requirements, test plans should keep the distinction between business and functional requirements.
All things considered, test cases are to be comprehensively and consistently run against requirements distinct in goals (business vs architecture), layers (business, functions, platforms), or formalism (text, stories, use cases, …).
In contrast, test cases are by nature homogeneous, as made of instances of objects, events, and behaviors; ontologies can therefore be used to define and manage these instances directly from models. The example below makes use of instances for types (propulsion, body), a car model (Renault Clio), and a car (58642).
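As a tentative illustration, assuming the rdflib library and invented property values ("petrol", "hatchback"), the instances mentioned above could be captured as ontology triples:

```python
# A sketch only: test-case instances represented as ontology triples,
# mirroring the example of types (propulsion, body), a car model
# (Renault Clio), and an individual car (58642). Values are illustrative.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/testcases#")   # hypothetical namespace
g = Graph()

# Model-level instance: the car model and its typed features
g.add((EX.RenaultClio, RDF.type, EX.CarModel))
g.add((EX.RenaultClio, EX.propulsion, Literal("petrol")))
g.add((EX.RenaultClio, EX.body, Literal("hatchback")))

# Environment-level instance: an individual car usable as a test case
g.add((EX.car58642, RDF.type, EX.Car))
g.add((EX.car58642, EX.model, EX.RenaultClio))
g.add((EX.car58642, EX.registration, Literal("58642")))

print(g.serialize(format="turtle"))
```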
The primary and direct benefit of representing test cases as instances in ontologies is to ensure a seamless integration and reuse of development, integration, and acceptance test cases independently of requirements context.
But the ontological approach has broader and deeper consequences: by defining test cases as instances in line with environment data, it opens the door to their enrichment through deep learning.
Knowledgeable test cases
Names may vary but tests are meant to serve a two-facet objective: on one hand to verify the intrinsic qualities of artefacts as they are, independently of context and usage; on the other hand to validate their features with regard to extrinsic circumstances, present or in a foreseeable future.
That duality has logical implications for test cases:
The verification of intrinsic properties can be circumscribed and therefore be carried out based on available information, e.g. design, programming language syntax and semantics, systems configurations, etc.
The validation of functional features and behaviors is by nature open-ended with regard to scope and time-frame; it ensues that test cases have to rely on incomplete or uncertain information.
Without practical applications, that distinction had been of little consequence until now: while the digital transformation removes the barriers between test cases and environment data, the spread of machine learning technologies multiplies the possibilities of exchanges.
In the traditional approach, test cases rely on three basic sources of information:
Syntax and semantics of programming languages are used to check software components (a).
Logical and functional models (including patterns) are used to check application designs (b).
Requirements are used to check application compliance (c).
With barriers removed, test cases as instances can be directly aligned with environment data, opening doors to their enrichment, e.g.:
Random data samples can be mined from environments and used to deal with instinctive or erratic human behaviors. By nature, knee-jerk or latent behaviors cannot be checked with reasoned test cases, yet they do not occur in a void but within known operational or functional circumstances; data analytics can be used to identify these quirks (d).
Systems being designed artifacts, components are meant to tally with models for structures as well as behaviors. Crossing operational data with design models will help to refine and hone integration and acceptance test cases (e).
Whereas integration tests put the focus on models and code, acceptance tests also involve the mapping of models to business and organizational concepts. As a corollary, test cases are to rely on a broader range of knowledge: external regulations, knowledge mined from environments, or knowledge embedded in the organization through individual and collective skills (f).
Given the immersion of enterprises in digital environments, and assuming test cases are represented as ontological instances, these are already practical opportunities. But the real benefits of knowledge-based test cases are to come from leveraging machine learning technologies across enterprise and knowledge architectures.
Like every artifact, models can be defined by nature and function. With regard to nature, models are symbolic representations, descriptive (categories of actual instances) or prescriptive (blueprints of artifacts). With regard to function, models can be likened to currency, as they serve as means of exchange, instruments of measure, or repositories.
On that understanding, models can be neatly characterized by their intent:
With no use of models, direct exchange (barter) can be achieved between business analysts and software engineers.
Models are needed as medium supporting exchange between organizational units with different business or technical concerns.
Models are used to assess contents with regard to size, complexity, quality, …
Models are kept and maintained for subsequent use or reuse.
Depending on organizations, providers and customers could then be identified, as well as modeling languages.
As is the case with every measurement, software engineering metrics must be defined by clear targets and purposes, and using them shouldn't affect either of them.
On that account, a clear distinction should be maintained between business value (set independently of supporting systems), the size and complexity of functionalities, and the work effort needed for their development. As far as systems are concerned, the Function Points approach can be defined with regard to the nature of requirements (business or system), and their scope (primary for artifact, adjustment for architecture):
Measures of business requirements are based on intrinsic domain complexity (domain function points, or DFP), adjusted for activities (adjustment function points, or AFP); they are set at artifact level independently of operational constraints or supporting systems.
Business requirements metrics are added up and adjusted for operational constraints.
Functional requirements measures target the subset of business requirements meant to be supported by systems. As such they are best defined at use case level (use case function points, or UCFP).
Metrics for quality of service may be specific to functionalities or contingent on architectures and operational constraints.
Whatever the difficulties of implementation, function points remain the only principled approach to software and systems assessment, and consequently to reliable engineering cost/benefit analysis and planning; a rough numerical sketch follows.
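For illustration only, the sketch below combines the measures with placeholder weights and adjustment factors; none of the figures are standard values.

```python
# A hedged numerical sketch: DFP adjusted for activities (AFP) and
# operational constraints, UCFP for the system-supported subset.
domain_fp = {"customers": 7, "products": 5, "orders": 10}   # DFP: intrinsic domain complexity
activity_adjustment = 1.2                                    # AFP: adjustment for activities
business_measure = sum(domain_fp.values()) * activity_adjustment

use_case_fp = {"register customer": 6, "take order": 9}      # UCFP: use-case level measures
operational_adjustment = 1.1                                 # QoS and operational constraints
functional_measure = sum(use_case_fp.values()) * operational_adjustment

print(f"business measure: {business_measure:.1f}")           # 26.4
print(f"functional measure: {functional_measure:.1f}")       # 16.5

# Work effort would then be estimated separately, from the functional
# measure and productivity assumptions, keeping business value apart.
```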
The Agile development model should not be seen as a panacea or identified with specific methodologies. Instead it should be understood as a default option to be applied whenever phased solutions can be factored out.
Alternative: when the conditions cannot be met, i.e. when phased solutions are required, model-based systems engineering frameworks should be used to integrate business-driven projects (agile) with architecture-oriented ones (phased).