Engineering is about the making of artifacts, software engineering about the making of symbolic ones. Hence the three dimensions of the discipline:
- Concepts: what is to be represented, targets (agents, objects, locations, events, activities) and status of representations.
- Artifacts: how are relevant concepts to be represented through the engineering process (architectures, requirements, models, and code).
- Processes: how to plan and manage engineering activities (milestones, projects, metrics, quality, …).
The aim of the Caminao project is to provide a comprehensive framework dedicated to systems engineering, from enterprise architecture to software design and development. On a broader perspective, the objective is to build a community of interest around a set of unambiguous and coherent concepts and principles that could be accepted independently of specific methods.
See also: Topics Guide
Taking a leaf from Stanford’s Symbolic Systems Program, the Caminao approach is based upon a very simple postulate: systems are designed to manage symbolic representations (aka surrogates) of actual (physical or notional) objects, events and behaviors. That single position provides the leverage for a whole paradigmatic shift from software engineering to enterprise architecture.
Since disproving convictions is typically easier than establishing alternative ones, a kind of negative theology may help to deal with some fallacies hampering progress in systems engineering. While some are just misunderstandings that can be corrected with unambiguous definitions, others are deeply rooted in misconceptions often entrenched behind walled dogmas.
For instance: facts are given (no, they are built from observations); models are about truth (no, they are built on purpose); models and code are equivalent (only under specific conditions, which would make engineering processes pointless).
If facts are not given, where to start when representations (aka models) are to be built?
As Plato would have said, wherever we look, all we can see are mental representations, latent or overt. One step further, these representations can be developed into symbolic placeholders ready for whatever our concerns or purposes may bring to mind.
When systems are designed to manage such placeholders, anchors to business context must be identified for the symbolic representations of objects, events, and behaviors, relevant aspects regrouped accordingly, and irrelevant features or behaviors dropped altogether. That’s called modeling.
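The anchoring, regrouping, and pruning just described can be sketched in code. The example below is a minimal illustration, not part of the Caminao framework: the `CustomerSurrogate` class and its fields are hypothetical. A symbolic surrogate keeps an identity anchor to its business counterpart and regroups only the aspects deemed relevant; everything else is dropped.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerSurrogate:
    """Symbolic placeholder (surrogate) for an actual customer.

    The anchor (customer_id) ties the surrogate to its business
    counterpart; relevant aspects are regrouped with it, and
    irrelevant features are dropped altogether.
    """
    customer_id: str   # anchor to the business context
    name: str          # relevant aspect: needed for correspondence
    credit_limit: int  # relevant aspect: needed for order approval

# The surrogate stands for the actual customer within the system.
alice = CustomerSurrogate("C-001", "Alice", 5000)
```

The frozen dataclass makes the anchor immutable, reflecting the fact that identity, once set, should not drift away from its business counterpart.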
Systems’ purpose is to manage symbolic representations. As a corollary, one will expect Knowledge Management to shadow systems architectures and concerns: business contexts and objectives, enterprise organization and operations, systems functionalities and technologies. On the other hand, knowledge being by nature a shared resource of reusable assets, its organization should support the needs of its different users independently of the origin and nature of information. Knowledge Management should therefore bind knowledge of architectures with architecture of knowledge.
Given the ubiquity of the term, from philosophy to systems engineering and enterprise architecture, ontology could arguably be understood as the mother of modeling.
As introduced long ago by philosophers, ontologies are systematic accounts of existence for whatever is considered, in other words some explicit specification of the concepts meant to make sense of a universe of discourse. From that starting point three basic observations can be made about ontologies:
- They are made of categories of things, beings, or phenomena.
- They are driven by cognitive (i.e. non-empirical) purposes, namely the validity and consistency of symbolic representations.
- They are meant to be directed at specific domains of concerns, whatever they may be.
With regard to models, only the second observation sets ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes.
As long as information was just data files and systems just computers, the respective roles of enterprise and IT architectures could be overlooked; but that has become a hazardous neglect with distributed systems pervading every corner of enterprises, monitoring every motion, and sending axons and dendrites all around to colonize environments.
Yet, the overlapping of enterprise and systems footprints shouldn’t generate confusion. When the divide between business and technology concerns was clearly set, casual governance was of no consequence; now that architecture capabilities are defined at different levels, turfs are more and more entwined, and dedicated policies are required lest decisions be blurred and entropy escalate as a consequence.
That objective can be best achieved through a clear distinction between maps (architects) and territories (managers).
Systems are designed to manage symbolic representations. Those representations (aka surrogates) are objects on their own and the objective of system modelling is to design useful and effective symbolic objects that will stand for their business counterparts.
If they are to be processed, those symbolic representations must be physically implemented. The specific aim of architecture driven modelling is to identify the constraints to be supported whatever the technology used to store the representations, clay tablets, codices, optical discs, or holograms.
Models are first and foremost communication media; as far as system engineering is concerned, they are meant to support understanding between participants, from requirements to deployment. And because participants are collective entities from various professional backgrounds, models must be written down using agreed upon languages.
Those languages are made of well-formed constructs (syntax) associated with agreed meanings (semantics). While correct syntax can be easily checked, that’s not the case for semantics as interpretations can easily flourish depending on methods, businesses, or engineering concerns.
As the world turns digital, the divides between social lives, corporate businesses, and physical realities are progressively dissolved. That calls for some unified modeling framework that would supplement OMG’s unified modeling language (UML).
That could be achieved through the consolidation of the concepts used to describe agents with symbolic processing capabilities independently of the modeling perspective.
Models are shadows of reality characterized by contexts and concerns, capabilities and purposes.
Regarding capabilities the distinction is between extensional (aka denotative) and intensional (aka connotative) languages, the former used to describe sets of actual instances, the latter used for artifacts design.
Regarding purposes, models fall in two groups: descriptive models deal with problems at hand (e.g requirements capture), prescriptive models with solutions (e.g architectures). Differentiated purposes are best illustrated by OMG’s Model Driven Architecture (MDA).
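The two distinctions can be illustrated with a small sketch (all names are hypothetical): a class plays the intensional (connotative) role, prescribing how new instances are to be built, while a recorded collection of actual instances plays the extensional (denotative), descriptive role.

```python
# Intensional / prescriptive: a specification of how new
# artifacts (here, invoice instances) are to be built.
class Invoice:
    def __init__(self, number: str, amount: float):
        self.number = number
        self.amount = amount

# Extensional / descriptive: a set of actual instances as
# observed in the business context.
observed_invoices = [
    Invoice("2024-001", 120.0),
    Invoice("2024-002", 75.5),
]

# Descriptive use: stating a fact about the instances at hand.
total = sum(inv.amount for inv in observed_invoices)
```

The same duality appears at every MDA layer: requirements describe instances in context, architectures prescribe the artifacts that will stand for them.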
Descriptive as well as prescriptive models occupy the driving seats when communication between organizational units with independent decision-making has to be anchored to milestones. When cross dependencies (business, organizational, or technical) can be eliminated agile development processes usually put models on back seats.
Assuming that intelligence is best defined as the ability to solve problems, the first criterion to consider is the type of input (aka knowledge) to be used:
- Explicit: rational processing of symbolic representations of contexts, concerns, objectives, and policies.
- Implicit: intuitive processing of factual (non symbolic) observations of objects and phenomena.
That distinction is broadly consistent with the one between humans, seen as the sole symbolic species with the ability to reason about explicit knowledge, and other animal species which, despite being limited to the processing of implicit knowledge, may be far better at it than humans. As spectacularly epitomized by AlphaGo, the distinction also coincides with the learning capabilities of humans and machines.
Software Engineering is about the making of symbolic artifacts used as surrogates for actual business objects, events, and activities. Surprisingly, that critical distinction is generally overlooked by modeling languages, as if it were a matter of preference. That primary flaw can undermine models as a whole and consequently subvert model-based engineering processes.
Architectures are about continuity in space and time as their capabilities are meant to support activities which, by nature, must be adaptable to changing concerns and objectives:
- Enterprise architecture deals with assets and organization supporting corporate identity and business capabilities within regulatory and market environments.
- Functional architecture deals with the systems functionalities supporting enterprise architecture.
- Technical architecture deals with the feasibility, efficiency and economics of systems operations.
With regard to enterprises, architectures must be seen as assets whose value depends on their ability to support processes.
As illustrated by Borges’ famous example, taxonomies are symbolic tools that may serve a wide range of purposes, some practical, some speculative, some mythical. Faced with requirements capture, the purpose should be to clarify what is at stake and to identify the nature of subsequent engineering problems to be solved. Hence the search for a comprehensive, reasoned, and unequivocal taxonomy of requirements:
- Requirements dealing with symbolic contents (aka Functional Requirements). Given that symbolic expressions can be reformulated, the granularity of these requirements can always be adjusted so as to fall within single domains and therefore under the authority of a unique owner or stakeholder.
- Requirements not dealing with symbolic contents (aka Non Functional Requirements). Since they cannot be uniquely tied to symbolic flows between systems and domains, nothing can be assumed regarding the number of stakeholders. Yet, as they target systems features and behaviors, they can be associated with architecture levels: enterprise, functional, technical.
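That taxonomy can be given a hedged illustration in code (the `Requirement` type and its fields are assumptions for the sake of example): functional requirements carry a single owner, while non-functional ones are tagged with the architecture level they target.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ArchitectureLevel(Enum):
    """Levels non-functional requirements can be associated with."""
    ENTERPRISE = "enterprise"
    FUNCTIONAL = "functional"
    TECHNICAL = "technical"

@dataclass
class Requirement:
    text: str
    symbolic_content: bool                     # True => functional requirement
    owner: Optional[str] = None                # unique owner (functional reqs)
    level: Optional[ArchitectureLevel] = None  # target level (non-functional)

def is_functional(req: Requirement) -> bool:
    """Functional iff the requirement deals with symbolic contents."""
    return req.symbolic_content

record_orders = Requirement("Record customer orders", True, owner="Sales")
response_time = Requirement("Respond within 2 seconds", False,
                            level=ArchitectureLevel.TECHNICAL)
```

The point of the sketch is the branching criterion: symbolic content decides ownership, and only the non-functional branch needs an architecture level.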
Whatever the methodology, the objective is to map new requirements to existing architectures capabilities. The first task is therefore to pick the artifacts to be reused and to define the ones that will have to be developed. For that purpose requirements are to be ascribed to archetypal symbolic constructs, e.g: domain or use case, actual or symbolic, partition or sub-type, object or feature, aggregate or composite, inheritance or delegation, static or dynamic rule.
Use cases can be seen as the ecumenical facade of UML, so much so that they can also help with agile projects.
As initially defined by Ivar Jacobson, the aim of use cases is to describe what happens between users and systems. Strictly speaking, it means a primary actor (possibly seconded) and a triggering event (possibly qualified); that may (e.g transaction) or may not (e.g batch) be followed by some exchange of messages. That header may (will) not be the full story, but it provides a clear, simple and robust basis:
- Clear and simple: while conditions can be added and interactions detailed, they are not necessary parts of the core definition and can be left aside without impairing it.
- Robust: the validity of the core definition is not contingent on further conditions and refinements. It ensues that simple and solid use cases can be defined and endorsed very soon, independently of their further extension.
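The core definition above can be captured as a small data structure (a sketch; field names are hypothetical): the primary actor and triggering event are mandatory, while secondary actors, event qualifiers, and message exchanges are optional refinements that can be added later without impairing the endorsed core.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UseCaseHeader:
    primary_actor: str                # mandatory: who triggers the use case
    triggering_event: str             # mandatory: what starts it
    secondary_actors: List[str] = field(default_factory=list)
    event_qualifier: Optional[str] = None
    messages: List[str] = field(default_factory=list)  # empty, e.g., for batch

# Core defined and endorsed early...
withdraw_cash = UseCaseHeader("Customer", "Card inserted")
# ...refinements added later, the core stays untouched.
withdraw_cash.messages.append("Enter PIN")
```

Keeping refinements in optional fields is what makes the header robust: its validity never depends on whether interactions have been detailed yet.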
Use cases can also be layered so as to set apart stable long-term business objectives from short-term opportunistic users’ stories or use cases.
Modeling methods, with or without UML, have some difficulties to agree about scope (e.g requirements, analysis, and design), or concepts (e.g objects, aspects, and domains). Hence the benefits to be expected from a comprehensive and consistent approach based upon two basic distinctions:
- Business vs System: assuming that systems are designed to manage symbolic representations of business objects and processes, models should keep the distinction between business and system objects descriptions.
- Identity vs behavior: while business objects and their system counterparts must be identified uniformly, that’s not the case for the symbolic representation of aspects, which can be specified independently.
That two-pronged approach will bridge the gap between analysis and design models, bringing about a unified perspective for concepts (objects and aspects) as well as scope (business objects and system counterparts).
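A minimal sketch of that two-pronged approach (the types are hypothetical): identities are shared uniformly between business objects and their system counterparts, while aspects are specified independently and attached to identified objects rather than merged into them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    """Shared identity: a business object and its system counterpart
    must be identified uniformly (hence frozen and hashable)."""
    domain: str
    key: str

@dataclass
class PricingAspect:
    """An aspect specified independently of the identified object
    it will be attached to."""
    currency: str
    unit_price: float

# The same identity anchors the business description and its system
# counterpart; aspects are attached to it, not merged into it.
book = Identity("catalog", "ISBN-123")
aspects = {book: PricingAspect("EUR", 25.0)}
```

Because `Identity` is frozen, two references built independently from the same business key resolve to the same entry, which is precisely the uniform-identification requirement.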
The adoption 20 years ago of UML (Unified Modeling Language) as a modeling standard could have spurred a new start of innovation for software engineering. Whatever the reasons, initial successes have not been transformed and UML utilization remains very limited, both in breadth (projects developed) and depth (features effectively used).
Taking a cue from Ivar Jacobson (“The road ahead for UML“), some modularity should be introduced in order to facilitate the use of UML in different contexts, organizational, methodological, or operational. Three main overlapping objectives should be taken into consideration:
- Complexity levels: language features should be regrouped into clearly defined subsets corresponding to levels of description. That would open the way to leaner and fitter models.
- Model layers: language constructs should be re-organized along MDA layers if models are to map stakeholder concerns regarding business requirements, system functionalities, and system design.
- Specificity: principled guidelines are needed for stereotypes and profiles in order to keep the distinction between specific contents and “unified” ones, the former set with limited scope and visibility, the latter meant to be used across model layers and/or organizational units.
As it happens, a subset of constructs centered on functional architecture may well meet those three objectives, as it would separate supporting structures (“charpentes” in French) from features whose specifications have no consequences on system architectures.
Whatever the assorted labels (based/driven, system/software, development/engineering), the basic tenet of model based system engineering (MBSE) is to replace procedural approaches by declarative ones, and to define engineering processes in terms of artifacts transformation.
OMG’s Model Driven Architecture (MDA) is a special case of MBSE organized along three representative layers: computation independent models (CIM), platform independent models (PIM), and platform specific models (PSM).
These layers are to provide the backbone of enterprise architecture.
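A toy illustration of MBSE as artifact transformation along the MDA layers (the function names and dictionary keys are assumptions, not MDA notation): each step maps a model at one layer onto a model at the next.

```python
from enum import Enum

class MdaLayer(Enum):
    CIM = "computation independent model"
    PIM = "platform independent model"
    PSM = "platform specific model"

def cim_to_pim(cim: dict) -> dict:
    """Map a business process (CIM) onto a system functionality (PIM)."""
    return {"functionality": cim["business_process"], "layer": MdaLayer.PIM}

def pim_to_psm(pim: dict, platform: str) -> dict:
    """Bind a functionality (PIM) to a target platform (PSM)."""
    return {"component": pim["functionality"],
            "platform": platform,
            "layer": MdaLayer.PSM}

# Declarative pipeline: the process is defined by transformations
# of artifacts, not by procedural tasks.
psm = pim_to_psm(cim_to_pim({"business_process": "order capture"}), "JEE")
```

The same business process can be bound to different platforms by re-running only the last transformation, which is the practical payoff of keeping the PIM platform independent.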
Assuming that models are meant to depict sets of actual instances (descriptive models) or to specify how to create new ones (prescriptive models), one would expect meta-models to do the same for sets of modeling artifacts seen as some kind of “meta” instances. Based on that understanding, it would be possible to define meta-models with regard to models semantics and abstraction scales:
- Models of business environments describe the relevant features of selected objects and behaviors, including supporting systems. Such models are said to be extensional as they target subsets in actual contexts.
- Models of systems artifacts specify the hardware and software components of supporting systems. Such models are said to be intensional as they prescribe how new artifacts are to be built.
Combined with MBSE, meta-models provide a conceptual roof to be shared between model semantics (descriptive, prescriptive, or mixed) and enterprise architecture purposes (governance or engineering).
Broadly speaking, software engineering can be defined as the processing of symbolic artifacts by enrichment, extension, interpretation, translation, transformation, generation, etc. That may be performed at two levels, one targeting language constructs, the other dealing with model contents. While artifacts semantics can be managed uniformly at language level, that’s not the case when model contents are processed across contexts, which entails the reuse of differentiated knowledge: business, organizational, or technical.
Retrieving legacy code has something to do with archaeology, as both try to recover undocumented artifacts and understand their initial context and purpose. The fact that legacy code is still alive and kicking may help to chart its structures and behaviors, but it may also obscure the rationale of the initial designs.
Hence the importance of traceability and the benefits of a knowledge based approach to modernization organized along architecture layers (enterprise, systems, platforms), and processes (business, engineering, supporting services).
To begin with, enterprise architecture is best understood in terms of territories and maps, the former materialized by the realities of enterprise organization and business operations, the latter by the assortment of charts, blueprints, or models used to describe enterprise organization, business processes, IT systems, and software applications.
Instead of coercing external constructs and concerns on customary structures and practices, enterprise organization and systems (maps) could be progressively adjusted to actual territories and business opportunities. And taking a leaf from geographers, the maps supporting EA could be layered according to assets, concerns, and responsibilities, or whatever criteria befitting existing and planned organization.
The main purpose of engineering processes is to balance stakeholders objectives set along different time-frames: business opportunities, systems architectures, and applications development. In a perfect world there would be one stakeholder, one architecture, and one time-scale. Unfortunately, goals are often set by different organizational units, based upon different concerns and rationales, and subject to changes along different time-frames.
The rationale governing the design of development processes may be compared to the one governing cooking. While there is an infinity of ingredients, most belong to four categories: water, proteins, carbohydrates, and fats. The ways they react to physical processing and can be combined are set by chemical laws. It is therefore possible to characterize cooking recipes and identify basic states and processing sequences independently of their gourmet flavors. With servings triggered by customers orders, that epitomizes model driven engineering: reasoned processing of models depending on the fundamental properties of their contents and the expectations of end users.
Depending on organizational contexts and cross dependencies, two policies are to be considered: iterative and incremental if agile conditions (shared ownership and continuous delivery) can be met, phased otherwise.
Models are representations and as such they are necessarily set in perspective and marked out by concerns. As such they can be used as milestones by processes marking contexts (symbolic, mechanic, and human components), information systems (functional components), or software components (platform implementation).
With regard to concerns, models will take into account responsibilities (enterprise architecture), functionalities (functional architecture), and operations (technical architecture).
While it may be a sensible aim, perspectives and concerns are not necessarily congruent as responsibilities or functionalities may cross perspectives (e.g support units), and perspectives may mix concerns (e.g legacies and migrations). That conundrum may be resolved by a clear distinction between descriptive and prescriptive models, the former dealing with the problem at hand, the latter with the corresponding solutions, respectively for business, system functionalities, and system implementation.
Assuming that everything and everybody outside a project will hold its breath between inception and completion is arguably a risky bet; more probably the path from requirements to deployment is bound to be affected by changes in business or technical contexts, not to speak of what may happen within the project itself. Such hazards are compounded by scale, as large undertakings entail the collaboration of several teams over significant periods of time. Hence the need for milestones, where expectations are matched to commitments.
The Agile development model as pioneered by the eponymous Manifesto is based both on universal principles meant to be applied in any circumstances, and on more specific ones subject to some prerequisites.
As it happens, only two principles (shared ownership and continuous delivery) out of twelve cannot be applied universally as they may conflict with cross dependencies with regard to business objectives, organizational responsibilities, or technical constraints. In other words ten out of twelve of the agile principles could bring benefits to all and every type of project.
All too often when the agile project model is discussed, the debate turns into a religious war with waterfall as the villain. But asking which project model will bring salvation will only bring attrition, because the question is not which is the best but when it is the best.
Basically, Agile’s progressive exploration of problem spaces and solution paths solves two critical flaws of traditional Waterfall approaches, namely:
- Fixed requirements set upfront: since there is an inverse relationship between the level of details and the reliability and stability of requirements, staking the whole project on requirements fully defined at such an early time is arguably a very hazardous policy.
- Quality as an afterthought: given that finding defects is not very gratifying when undertaken in isolation, delegating the task will offer few guarantees if not associated with rewards commensurate to findings; moreover, quality as a detached concern may easily turn into a collateral damage when set along mounting costs and scheduling constraints. Alternatively, quality checks may change into a more positive endeavor when conducted as an intrinsic part of development.
On that basis, Agile should be the solution of choice when shared ownership can be achieved and external dependencies limited. When organizational or technical divides cannot be avoided Agile with models can provide an effective compromise.
There is no such thing as “statistical facts”. Statistics are made on purpose, usually to support conjectural arguments or counter questionable ones. Hence, considering statistics per se is like counting fingers when the hand points at the moon.
As far as software engineering is concerned, three main rationales are to be considered: predictive (project planning), preventive (risks management), or corrective (process assessment). While respective estimators may overlap, they may also interfere, one often cited example being code size estimators used to assess productivity.
Functional size measurement is the corner-stone of software economics, from portfolio management to project planning, bench-marking, or ROI assessment. Given that software has no physical features, relevant metrics can only be derived from the functional value of the software under consideration, in other words its functional requirements.
Revisiting Function Points, the objective is to estimate the functionalities supported by a system independently of the technology and tools used to implement it. Those functionalities can only be set in their business and operational contexts, and must be assessed accordingly. Yet, functional metrics should not be confused with business value as the same application may be valued differently when deployed in different business contexts.
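As a rough illustration of the principle (a sketch, not IFPUG-compliant counting, which additionally grades each element as low, average, or high complexity), an unadjusted function point estimate can be derived from counts of functional elements weighted by the standard average complexity weights:

```python
# Standard IFPUG average complexity weights for the five
# functional element types.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum element counts weighted by their average complexity."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical application profile:
ufp = unadjusted_function_points({
    "external_inputs": 3,         # 3 * 4  = 12
    "external_outputs": 2,        # 2 * 5  = 10
    "internal_logical_files": 1,  # 1 * 10 = 10
})
```

Note that the count depends only on functional elements, consistent with the text: the same measure applies whatever the implementation technology, yet it says nothing about business value.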
Quality is a quantity: it’s the probability that something will go amiss. As for any prediction about future events, quality, even set within a strictly demarcated context, could never be fully accounted for. That’s the reason why a sound quality management must clearly distinguish between, on one hand, outcomes whose contingency falls, to some degree, under project capability and responsibility and, on the other hand, risks whose origin can’t be rooted within project responsibilities.
Quality management must rest on two pillars: traceability and testability.
Traceability is a prerequisite for QM, whatever the nature of contingency. Testability means that every requirement, specification or product should be open to confirmation, verification or validation.
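A minimal traceability check makes the point concrete (the matrix below is hypothetical data): every requirement should be traceable to at least one verification artifact, and the ones that are not should be flagged.

```python
# Hypothetical traceability matrix: requirements mapped to the
# verification artifacts (tests, reviews) that confirm them.
trace_matrix = {
    "REQ-1": ["TEST-1", "TEST-2"],
    "REQ-2": [],           # no verification artifact: flagged below
    "REQ-3": ["TEST-3"],
}

def untraced(matrix: dict) -> list:
    """Return the requirements with no verification artifact attached."""
    return [req for req, checks in matrix.items() if not checks]

missing = untraced(trace_matrix)
```

Such a check only enforces traceability; testability is the stronger condition that each listed artifact can actually confirm, verify, or validate its requirement.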
Reusing artifacts means using them in contexts that are different from their native ones. That may come by design, when specifications can anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on.
Reuse deals with shared assets and mechanisms and is best achieved when managed according to business, engineering, or architecture perspectives:
- Business perspective: how to factor out and reuse artifacts associated with the knowledge of business domains when system functionalities or platforms are modified.
- Engineering perspective: how to reuse development artifacts when business contents or system platforms are modified.
- Architecture perspective: how to use system components independently of changes affecting business contents or development artifacts.
The objective of the Capability Maturity Model Integration (CMMI) is to assess and improve organization performances.
With regard to software development processes, the relevance of assessment fully depends on (1) objective and unbiased indicators of product size and project performance and, (2) transparent mapping between organizational alternatives and process outcomes. Unless those conditions are satisfied, capability and maturity assessments will remain self-referencing.
Beyond turf arguments, the debate about Enterprise and Systems governance is rooted in the impact of systems architectures on decision-making.