Lest it remain a philosophical concept, complexity must be quantifiable, which entails symbolic representation.
Facts & Representations
Taking its cue from cybernetics, complexity can be summarily defined as the discrepancy between environments (micro-states) and their representations (aka macro-states).
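To make that definition operational, here is a minimal sketch (names and data are illustrative assumptions, not part of the framework): the discrepancy is measured as the entropy of micro-states left unresolved once each one is mapped to its macro-state.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a frequency distribution."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def residual_complexity(observations, macro_of):
    """Discrepancy between micro-states and their macro-state
    representation: the conditional entropy H(micro | macro),
    i.e. the information left unexplained by the macro-level view."""
    micro = Counter(observations)
    macro = Counter(macro_of(o) for o in observations)
    return entropy(micro.values()) - entropy(macro.values())

# Hypothetical data: six micro-states coarse-grained into two macro-states.
obs = ["a1", "a2", "a3", "b1", "b2", "b3"]
print(residual_complexity(obs, lambda o: o[0]))  # bits unexplained by {a, b}
```

The coarser the macro-states, the more residual entropy; a perfect representation would leave none.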
Environments: Facts, Data, Documents
Environments are perceived through data reflecting physical or symbolic facts. Facts associated with artefacts, physical or symbolic, come with defined identities (#) and features (≈); not so those associated with natural phenomena. Hence two kinds of data sets: one for facts with prior characterisation (≈), the other for uncharacterised observations.
Facts do not happen in a vacuum: they must be observed, and are therefore framed by documents representing the canned symbolic legacy of prior observations, eventually to be consolidated with observations freshly obtained from artefacts. Hence the two-tiered mapping of environment complexity:
- Nominal micro-states combine labelled observations from artefacts with previously documented observations; they reflect facts currently accounted for.
- Factual micro-states combine the whole of factual observations, whether accounted for or not.
Tackling environment (aka external) complexity should thus be carried out in two steps: (1) semantic consolidation of the terms used in documents and artefacts, and (2) consolidation of the terms used for all factual observations, as sketched below.
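By way of illustration, a minimal sketch of the two tiers and the two consolidation steps, assuming a simple glossary-based consolidation (the types, names, and data are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Observation:
    """A factual observation; identity (#) and prior features (≈)
    are only defined for facts associated with artefacts."""
    term: str                       # label used for the observation
    identity: Optional[str] = None  # '#': defined for artefact-bound facts
    features: Tuple[str, ...] = ()  # '≈': prior characterisation, if any

def consolidate(observations, glossary):
    """Map each observation's term onto a shared glossary entry."""
    return {glossary.get(o.term, o.term) for o in observations}

def nominal_micro_states(artefact_obs, documented_obs, glossary):
    """Step 1: labelled observations from artefacts consolidated with
    previously documented ones -- facts currently accounted for."""
    return consolidate(artefact_obs, glossary) | consolidate(documented_obs, glossary)

def factual_micro_states(nominal, uncharacterised_obs, glossary):
    """Step 2: extend the consolidation to the whole of factual
    observations, accounted for or not."""
    return nominal | consolidate(uncharacterised_obs, glossary)

# Illustrative data only.
glossary = {"client": "customer"}
docs = [Observation("customer", features=("segment",))]
arts = [Observation("client", identity="C-123")]
raw  = [Observation("footfall")]
nominal = nominal_micro_states(arts, docs, glossary)
print(nominal, factual_micro_states(nominal, raw, glossary))
```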
Representations: Intents & Purposes
Representations are symbolic artefacts made on purpose; with regard to environments, purposes can be of two kinds, descriptive or generative: the former focused on identifying and naming relevant entities and variables, the latter on variable dependencies and causal chains.
Assuming networks are used to represent nominal and factual micro-states, descriptive approaches (d) use taxonomies to identify independent (aka explanatory) and dependent variables (÷), and build macro-states accordingly. Generative approaches (g) introduce meanings (≈) to transform nominal networks into thesauruses, and then build conceptual graphs by adding intents and purposes. Domains ensure the congruence of the generative and descriptive approaches by defining boundaries around the managed representations of entities identified in environments (#).
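A rough sketch of the two approaches, assuming simple set- and dictionary-based structures (the names are hypothetical, not part of the framework):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class Taxonomy:
    """Descriptive (d): identifies independent (aka explanatory) and
    dependent (÷) variables, from which macro-states are built."""
    independent: Set[str] = field(default_factory=set)
    dependent: Set[str] = field(default_factory=set)

    def macro_state(self, micro: Dict[str, str]) -> Tuple:
        """Project a micro-state onto the variables accounted for."""
        kept = self.independent | self.dependent
        return tuple(sorted((k, v) for k, v in micro.items() if k in kept))

@dataclass
class Thesaurus:
    """Generative (g): adds meanings (≈) to nominal terms."""
    meanings: Dict[str, str] = field(default_factory=dict)  # term -> concept

@dataclass
class ConceptualGraph:
    """Built from a thesaurus by adding intents and purposes as edges."""
    thesaurus: Thesaurus
    edges: List[Tuple[str, str, str]] = field(default_factory=list)

# Illustrative use: the macro-state ignores variables not in the taxonomy.
tax = Taxonomy(independent={"season"}, dependent={"sales"})
print(tax.macro_state({"season": "winter", "sales": "high", "noise": "x"}))
```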
More broadly, descriptive and generative approaches are congruent with complementary EA perspectives: information systems for the former, business intelligence for the latter.
Managing EA Complexity
Enterprise architectures are meant to balance two conflicting objectives: on the one hand to support effective and specific business processes set in competitive, and thus changing, environments; on the other hand to ensure the sustainability of shared assets and mechanisms. That balancing act can thus serve as a yardstick for EA assessment, which brings back cybernetics and entropy.
Complexity & Entropy
Recast in terms of enterprise architecture, entropy is best understood as the quantum of data that cannot be explained by information models or put to use as knowledge. While an absolute assessment of available data (a) raises both theoretical and practical issues, relative assessments may be easier for business (intelligence) data (b), data managed by information systems (c), and data common to both (d).
Without delving into technicalities, some metrics could be achieved (see the sketch after this list) for:
- Business intelligence data (b), through thesauruses defining the variables deemed relevant.
- Managed data (c), through the taxonomies (≈) used to define the categories and features supported by information systems.
- Business entities (aka surrogates) managed by information systems (d), through the domains anchoring (#) representations to their counterparts in business environments.
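A minimal sketch of such relative measurements, using vocabulary coverage as a crude proxy for residual entropy (all terms and vocabularies below are illustrative assumptions):

```python
def coverage(observed, reference):
    """Share of observed terms accounted for by a reference vocabulary
    (1.0 = fully explained; low values signal high residual entropy)."""
    observed = set(observed)
    return len(observed & set(reference)) / len(observed) if observed else 1.0

# Illustrative vocabularies only.
thesaurus_vars    = {"churn", "margin", "lead-time"}  # (b) BI variables
taxonomy_features = {"customer", "order", "invoice"}  # (c) managed categories/features
domain_anchors    = {"C-123", "C-456"}                # (d) '#' anchors to environments

print(coverage({"churn", "margin", "seasonality"}, thesaurus_vars))    # (b)
print(coverage({"customer", "order", "shipment"}, taxonomy_features))  # (c)
print(coverage({"C-123", "C-456", "C-789"}, domain_anchors))           # (d)
```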
These rough measurements could be used to assess and manage EA complexity.
Versatility & Plasticity
Enterprise architects can be seen as brokers tasked with balancing business opportunities against the sustainability of assets; borrowing from cybernetics, that means tackling environments' shifting and unbounded entropy without increasing EA complexity. Setting aside comprehensive makeovers of both business models and enterprise architectures, balanced EA transformations can be typified by two main profiles.
Versatility ensures that rises in environment complexity do not induce significant increases in EA complexity. To that effect, emerging changes in observations and business-intelligence variables, as documented in thesauruses, are to be handled through changes in taxonomies, and consequently in the logic of business processes, leaving shared domains and mechanisms unaffected.
Plasticity, by contrast, ensures that changes in enterprise architectures aimed at reducing complexity are carried out without impairing the effectiveness of existing business processes. That is best achieved by relocating versatility from architecture to services, taking advantage of polymorphism and encapsulation to isolate changes in systems architectures from business processes, as sketched below.
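A minimal sketch of that relocation, assuming a hypothetical pricing service (names and rules are illustrative): business processes code against an interface (encapsulation), taxonomy-driven rules absorb change behind it (versatility), and implementations can be swapped without touching process logic (plasticity).

```python
from abc import ABC, abstractmethod

class PricingService(ABC):
    """Encapsulation: processes depend only on this interface, so
    changes behind it leave shared mechanisms unaffected."""
    @abstractmethod
    def quote(self, item: str) -> float: ...

class TaxonomyDrivenPricing(PricingService):
    """Versatility: new thesaurus entries are absorbed by updating
    taxonomy-driven rules, not the architecture."""
    def __init__(self, rates: dict):
        self.rates = rates  # category -> rate; free to change

    def quote(self, item: str) -> float:
        return self.rates.get(item, 1.0)

def business_process(pricing: PricingService, items: list) -> float:
    """Polymorphism: any PricingService implementation can be swapped
    in without impairing the process."""
    return sum(pricing.quote(i) for i in items)

print(business_process(TaxonomyDrivenPricing({"book": 12.0}), ["book", "misc"]))
```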
The effectiveness of such a two-pronged complexity management scheme can be amplified by the immersion of enterprise architectures in digital environments, with osmosis rounding out the rough edges of external entropy (versatility), and homeostasis furthering the alignment of designed services and systems legacy (plasticity).
Further Reading
KALEIDOSCOPE SERIES
- Signs & Symbols
- Generative & General Artificial Intelligence
- Thesauruses, Taxonomies, Ontologies
- EA Engineering interfaces
- Ontologies Use cases
- Complexity
- Cognitive Capabilities
- LLMs & the Matter of Transparency
OTHER INTERNAL REFERENCES
- Caminao Framework Overview
- EA: Ontological Prisms & Digital Twins
- A Knowledge Engineering Framework
- Knowledge interoperability
- Edges of Knowledge
- The Pagoda Playbook
- ABC of EA: Agile, Brainy, Competitive
- Knowledge-driven Decision-making (1)
- Knowledge-driven Decision-making (2)
- Ontological Text Analysis: Example