Hitting the ground running with Caminao is meant to be easy, as it relies on a well-founded modeling paradigm (the Stanford Symbolic Systems Program) and a solid, well-established reference framework (Zachman's).
Secured by these foundations, teams could carry on with agile and MBSE development models, helped, if and when necessary, by comprehensive and consistent documentation freely available online.
Symbolic Systems Paradigm
The Stanford Symbolic Systems Program (SSP) is built on clear and incontrovertible evidence: the purpose of computer systems is to manage the symbolic counterparts (aka surrogates) of business objects and activities. Based on that understanding, enterprise architectures can be wholly and consistently defined by maps (the models) and territories (relevant business objects and activities and their symbolic counterparts in systems).
That paradigm is at the same time straightforward and aligned with the formal distinction between extensional and intensional representations, the former for requirements analysis (descriptive models), the latter for systems design (prescriptive models).
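The extensional/intensional distinction can be illustrated with a minimal sketch, assuming purely hypothetical names: an extensional representation enumerates its members (descriptive, as in requirements analysis of observed business objects), while an intensional one defines membership by a predicate (prescriptive, as in systems design of what the system will accept).

```python
# Extensional: a set defined by enumerating its members (descriptive).
extensional_customers = {"C-001", "C-002", "C-017"}

# Intensional: a set defined by a membership predicate (prescriptive).
def is_customer(record: dict) -> bool:
    """Illustrative rule: a customer is any party with a confirmed order."""
    return record.get("confirmed_orders", 0) >= 1

observed = {"id": "C-017", "confirmed_orders": 3}
assert "C-017" in extensional_customers   # membership by enumeration
assert is_customer(observed)              # membership by definition
```

The same business notion appears twice, once as a list of facts and once as a rule; keeping the two apart is what separates descriptive from prescriptive models.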
Given its clarity of purpose and concepts, the Zachman Framework is arguably a reference of choice; its core is defined by six columns and five lines, each of them associated with well-known concepts:
Yet, the table arrangement comes with some discrepancies:
Lines mix architecture artifacts (2-4) with contexts (1) and instances (5).
While keeping the semantics intact, Caminao rearranges the artifacts lines into pentagons:
That simple transformation significantly improves the transparency of enterprise architectures while shedding new light on the dependencies set across layers and capabilities. As it happens, and not by chance, it neatly fits with the Pagoda architecture blueprint.
Digital Transformation & The Pagoda Blueprint
The generalization of digital environments bears out the Symbolic Systems modeling paradigm: to stay competitive, enterprises have to manage a relevant, accurate, and up-to-date symbolic representation of their business context and concerns. On that account, consequences go well beyond a shallow transformation of business processes and call for an in-depth transformation of enterprise architectures, one that puts the focus on their capacity to refine business data flows into information assets supporting knowledge management and decision-making:
Ubiquitous, massive, and unrelenting digitized business flows cannot be dealt with unless a clear distinction is maintained between raw data acquired across platforms and the information (previously data) models ensuring the continuity and consistency of systems.
Once structured and refined, business data flows must be blended with information models sustaining systems functionalities.
A comprehensive and business-driven integration of organization and knowledge could then support strategic and operational decision-making at enterprise level.
Such an information backbone set across architecture layers tallies with the Pagoda architecture blueprint, well known for its resilience and adaptability in unsettled environments.
That blueprint can be much more than metaphoric: applied to enterprise architectures it would greatly enhance the traceability of transformations induced by digital environments.
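The three refinement steps above, from raw data flows to information models to knowledge supporting decisions, can be sketched as a minimal pipeline; the record structures and field names below are illustrative assumptions, not part of the blueprint:

```python
# Raw data acquired across platforms: untyped, noisy business flows.
raw_events = [
    {"type": "order", "customer": "C-017", "amount": "120.50"},
    {"type": "order", "customer": "C-001", "amount": "80.00"},
    {"type": "ping"},  # noise carrying no business meaning
]

def to_information(events):
    """Refine raw flows against an information model (orders only, typed)."""
    return [
        {"customer": e["customer"], "amount": float(e["amount"])}
        for e in events
        if e.get("type") == "order"
    ]

def to_knowledge(orders):
    """Aggregate information into a view supporting decision-making."""
    totals = {}
    for o in orders:
        totals[o["customer"]] = totals.get(o["customer"], 0.0) + o["amount"]
    return totals

print(to_knowledge(to_information(raw_events)))
# {'C-017': 120.5, 'C-001': 80.0}
```

Each function marks a boundary between layers: raw flows never reach the decision layer directly, which is what keeps the backbone traceable.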
Proceed With Agile & MBSE
Once set on track with a reliable paradigm and a clear reference framework, teams can carry on with their choice of development models:
Agile schemes for business-driven applications for which conditions of shared ownership and continuous delivery can be met.
Phased schemes for architecture developments set across business processes.
Whatever the development methods, the modeling paradigm will put enterprise architecture and project management on a principled basis, and the framework will significantly enhance their integration.
Artificial intelligence is generally defined in relation to the human ability to figure out situations and solve problems. The term may also put the focus on a hypothetical disembodied artificial brain.
Taking the brain perspective makes for easier outlining:
Artificial brains can build and process symbolic and non-symbolic representations, respectively with semantic and neural networks.
Artificial brains can solve problems applying logic and abstraction or data analytics, respectively to symbolic and non-symbolic representations.
Like human ones, artificial brains combine sensory-motor capabilities with purely cognitive ones.
Like human ones, and sensory-motor capabilities apart, artificial brains support a degree of cognitive plasticity and versatility between symbolic and non-symbolic representation and processing.
These generic capabilities are at the root of the wide-ranging and dramatic advances of machine learning.
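As a hypothetical illustration of the two processing styles named above, the same kind of question can be answered by logic over a semantic network or by proximity in a feature space; the toy data is an assumption made for the example:

```python
# Symbolic: a small semantic network queried with logic (transitive "is_a").
is_a = {"dachshund": "dog", "dog": "mammal", "mammal": "animal"}

def infer_is_a(term: str, target: str) -> bool:
    """Follow is_a links transitively: logic over symbols."""
    while term in is_a:
        term = is_a[term]
        if term == target:
            return True
    return False

# Non-symbolic: the same kind of question answered by similarity between
# feature vectors, with no explicit rules at all.
features = {"dachshund": (0.9, 0.1), "dog": (0.8, 0.2), "fish": (0.1, 0.9)}

def nearest(vector, candidates):
    """Return the candidate whose features are closest to the query."""
    return min(
        candidates,
        key=lambda k: sum((a - b) ** 2 for a, b in zip(vector, candidates[k])),
    )

assert infer_is_a("dachshund", "animal")         # symbolic inference
assert nearest((0.82, 0.18), features) == "dog"  # analytic similarity
```

The plasticity mentioned above amounts to moving between these two modes: the first manipulates names and rules, the second distances and thresholds.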
The upsurge in the scope and performance of artificial brains sometimes sheds new light on human cognition. Semantic layers and knowledge graphs offer a good example of a return to classics, in that case to Greek philosophers' ontologies.
According to their philosophical origins, ontologies are systematic accounts of existence for whatever can make sense in a universe of discourse. From that starting point four basic observations can be made:
Ontologies are structured sets of names denoting symbolic (aka cognitive) representations.
These representations can stand at different epistemic levels: terms or labels (nothing is represented), ideas or concepts (sets of terms), instances of identified objects or phenomena, categories (sets of instances), and documents.
Ontologies are solely dedicated to the validity and internal consistency of representations; not being concerned with external validity, they are not meant to serve empirical purposes.
Yet, assuming a distinction between epistemic levels, ontologies can be used to support both the internal and external consistency of models.
That makes models a refinement of ontologies, as models are meant to be externally consistent and to serve a purpose.
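A minimal sketch, with purely illustrative names, of how the epistemic levels above could be encoded, internal consistency being checked without any appeal to external validity:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class EpistemicLevel(Enum):
    TERM = auto()      # label only: nothing is represented
    CONCEPT = auto()   # idea: a set of terms
    INSTANCE = auto()  # identified object or phenomenon
    CATEGORY = auto()  # set of instances
    DOCUMENT = auto()

@dataclass
class Entry:
    name: str
    level: EpistemicLevel
    denotes: tuple = ()  # names of entries this one is defined from

ontology = [
    Entry("colour", EpistemicLevel.TERM),
    Entry("car-VIN-123", EpistemicLevel.INSTANCE),
    Entry("vehicle", EpistemicLevel.CATEGORY, ("car-VIN-123",)),
]

# Internal consistency: every name an entry denotes must itself be defined.
names = {e.name for e in ontology}
assert all(d in names for e in ontology for d in e.denotes)
```

Checking that car-VIN-123 matches an actual car would be a matter of external validity, which falls outside the ontology itself and belongs to models.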
Deciding upon a framework is a choice that bears upon a wide range of enterprise activities and may induce profound transformations in organization and systems. It follows that the outcome is practically settled at the outset: once on a wrong path, the only option is often to cut losses as soon as possible.
As it happens, there is no reason to hurry and every reason to take the time before opting for a long-term and comprehensive commitment.
Taking five, simple litmus tests can be used to avert overhasty moves into quagmires. Compared to a comprehensive assessment, the objective would only be to establish a shortlist of frameworks worthy of further assessment. For that purpose forms (but not substance) are to be considered: the consistency of definitions, the specificity of value propositions, and the reliability of roadmaps.
Consistency of Definitions
Frameworks come with glossaries covering generic, specific, and standard terms in various proportions; the soundness of these glossaries can be assessed independently of their meanings.
For that purpose simple metrics can be applied to a sample of definitions of core concepts (e.g., agent, role, actor, event, object, activity, process, device, system, architecture, enterprise):
Self-contained: terms directly and fully defined, i.e., without referring to other glossary terms (c, a, m).
Number of references used.
Circular references: terms with at least one reference path leading back to themselves (d, e, f).
Average length of non-circular reference paths.
Overlaps (i and j).
Conflicts between definitions.
Even computed on small samples (10 to 50), these metrics should be enough to provide a sound yardstick.
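On a toy glossary (an assumption made for the example), the first of these metrics can be computed mechanically; overlaps and conflicts would require comparing the contents of definitions and are left out of this sketch:

```python
# Toy glossary mapping each term to the glossary terms its definition uses.
glossary = {
    "system": ["architecture"],
    "architecture": ["system"],  # circular: system -> architecture -> system
    "process": ["activity"],
    "activity": [],              # self-contained
    "event": [],                 # self-contained
}

def reaches(start, target, seen=None):
    """True if some reference path leads from start to target."""
    seen = seen or set()
    for ref in glossary.get(start, ()):
        if ref == target:
            return True
        if ref not in seen and reaches(ref, target, seen | {ref}):
            return True
    return False

self_contained = [t for t, refs in glossary.items() if not refs]
circular = [t for t in glossary if reaches(t, t)]
avg_refs = sum(len(r) for r in glossary.values()) / len(glossary)

print(self_contained)  # ['activity', 'event']
print(circular)        # ['system', 'architecture']
print(avg_refs)        # 0.6
```

Even on a sample of a few dozen terms, such counts expose glossaries that define everything in terms of everything else.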
Value Proposition Specifics
Pledges about meeting stakeholder needs, holistic undertaking, or enhanced agility are part and parcel of every pitch; but as declarations of intent they give little information about frameworks.
To be given consideration a framework should also be specific about its value proposition, in particular with regard to its description of enterprise architecture and systems engineering processes.
Assuming that the specificity of a value proposition equals the likelihood of a contrary stance (no framework would purport to ignore business needs or to be non-agile), general commitments must be supported by schemes meant to make a difference.
Given what is at stake, adopting a framework should only be considered if it can significantly reduce uncertainty. On that account, roadmaps built from pre-defined activities are deceptive as they rely on the implicit assumption that things will always be as they are meant to be. So, taking for granted that positive outcomes can never be guaranteed independently of circumstances, a framework's reliability is to be assessed through its governance apparatus.
Depending on context and purpose requirements can be understood as customary documents, contents, or work in progress.
Given that requirements are the entry point of engineering processes, nothing should be assumed regarding their form (natural, formal, or specific language), or content (business, functional, non-functional, or technical).
Depending on the language used, requirements can be directly analyzed and engineered, or may have to be first formatted (aka captured).
Since requirements reflect biased and partial agendas, taxonomy, in particular the distinction between functional and non-functional concerns, is in the eyes of the stakeholders.
Depending on content and context, requirements can be engineered as a whole (agile development models), or set apart so as to tally with external dependencies (phased development models).