System modeling is all too often a flight into abstraction, when business analysts should instead look for the proper level of representation, i.e. the one that best fits business concerns.
Caminao’s blog (see Topics Guide) will try to set a path to Architecture Driven System Modelling. The guiding principle is to look at systems as sets of symbolic representations and identify the core archetypes defining how they must be coupled to their actual counterparts. That would provide for lean (need-to-know specs) and fit (architecture driven) models, architecture traceability, and built-in consistency checks.
This blog is meant to be a work in progress, with the basic concepts set open to suggestions or even refutation:
Enterprise governance has to face combined changes in the way business times and spaces are to be taken into account. On one hand social networks put well-thought-out market segments and well planned campaigns at the mercy of consumers’ weekly whims. On the other hand traditional fences between environments and IT systems are crumbling under combined markets and technological waves.
To overcome these challenges, enterprise strategies should focus on the following pillars:
Data and Information: massive and continuous inflows of data call for a seamless integration of data analytics (perception), information models (reasoning), and knowledge (decision-making).
Security & Confidentiality: new regulatory environments and the costs of privacy breaches call for a clear distinction between data tied to identified individuals and information associated with designed categories.
Innovation: digital environments induce a new order of magnitude in the pace of technological change. Turning changes into opportunities can only be achieved through collaboration mechanisms harnessing enterprise knowledge management to intakes from environments.
Despite (or because of) their ubiquity across PowerPoint presentations, business capabilities appear under a wide range of guises. Putting them in perspective could clarify the issue.
To begin with, business capabilities should not be confused with system capabilities: the former are supposed to be driven by changing environments and opportunities, the latter by continuity, integrity, and returns on investment. Were it not for the need to juggle both, there would be no need for chief "whatever" officers.
Then, if they are to be assessed across changing contexts and concerns, business capabilities should not be tied to specific ones but should instead focus on enterprise wherewithal:
Material: capacity and maturity of platforms with regard to changes in business processes and environments.
Functional: capacity and maturity of supporting systems with regard to changes in business processes.
Organizational: versatility and plasticity of roles and processes in dealing with changes in business opportunities.
Intelligence: information assets and ability of people to use them.
These could then be adjusted with regard to finance and time:
Strategic: assets, to be deployed and paid for across a number of fiscal years.
Tactical: resources, to be deployed and paid for within a single fiscal year.
Particular: combined assets and resources needed to support specific value chains.
The role of enterprise architects would then be to plan, assess, and manage the dynamic alignment of business and architecture capabilities.
Distinctions must serve a purpose and be assessed accordingly. On that account, what would be the point of setting apart data and information, and on what basis could that be done?
Until recently the two terms seem to have been used interchangeably; until, that is, the digital revolution. But the generalization of digital surroundings and the tumbling down of the traditional barriers around enterprises have upturned the playground as well as the rules of the game.
Previously, with data analytics, information modeling, and knowledge management mostly carried out as separate threads, there was little concern about semantic overlaps; no longer. Lest they fall behind, enterprises have to combine observation (data), reasoning (information), and judgment (knowledge) into a continuous process. But such integration implies in return more transparency and traceability with regard to resources (e.g. external or internal) and objectives (e.g. operational or strategic); that is when a distinction between data and information becomes necessary.
Economics: Resources vs Assets
Understood as a whole or separately, there is little doubt that data and information have become a key success factor, calling for more selective and effective management schemes.
Being immersed in digital environments, enterprises first depend on accurate, reliable, and timely observations of their business surroundings. But in the new digital world the flows of data are so massive and so transient that mining meaningful and reliable pieces is in itself a decisive success factor. Next, assuming data flows are duly processed, part of the outcome has to be consolidated into models and managed on a persistent basis (e.g. customer records or banking transactions), the rest being shelved temporarily for customary uses, or immediately discarded (e.g. personal data subject to privacy regulations). Such a transition constitutes a pivotal inflection point for systems architectures and governance, as it sorts out data resources with limited lifespans from information assets with strategic relevance. Not to mention the sensitivity of regulatory compliance to data management.
Processes: Operations vs Intelligence
Making sense of data is pointless without putting the resulting information to use, which in digital environments implies a tight integration of data and information processing. Yet, as already noted, tighter integration of processes calls for greater traceability and transparency, in particular with regard to the origin and scope: external (enterprise business and organization) or internal (systems). The purposes of data and information processing can be squared accordingly:
The top left corner is where business models and strategies are meant to be defined.
The top right corner corresponds to traditional data or information models derived from business objectives, organization, and requirement analysis.
The bottom line corresponds to analytic models for business (left) and operations (right).
That view illustrates the paradigm shift induced by the digital transformation. Previously, most mappings would be set along straight lines:
Horizontally (same nature), e.g. requirement analysis (a) or configuration management (b). With source and destination at the same level, the term employed (data or information) has no practical consequence.
Vertically (same scope), e.g. systems logical to physical models (c) or business intelligence (d). With source and destination set in the same semantic context, the distinction (data or information) can be ignored.
The digital transformation makes room for diagonal transitions set across heterogeneous targets, e.g. mapping data analytics with conceptual or logical models (e).
That double mix of levels and scopes constitutes the nexus of decision-making processes; their transparency is contingent on a conceptual distinction between data and information.
Data is used to align operations (systems) with observations (territories).
Information is used to align categories (maps) with objectives.
Then, the conceptual distinction between data and information is instrumental for the integration of operational and strategic decision-making processes:
Data analytics feeding business intelligence.
Information modeling supporting operational assessment.
Not by chance, these distinctions can be aligned with architecture layers.
Architectures: Instances vs Categories
Blending data with information overlooks a difference of nature, the former being associated with actual instances (external observation or systems operations), the latter with symbolic descriptions (categories or types). That intrinsic difference can be aligned with architecture layers (resources are consumed, assets are managed), and decision-making processes (operations deal with instances, strategies with categories).
With regard to architectures, the relationship between instances (data) and categories (information) can be neatly aligned with capability layers, as represented by the Pagoda blueprint:
The platform layer deals with data reflecting observations (external facts) and actions (system operations).
The functional layer deals with information, i.e the symbolic representation of business and organization categories.
The business and organization layer defines the business and organization categories.
It must also be noted that setting apart what pertains to individual data, independently of the information managed by systems, clearly supports compliance with privacy regulations.
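As a minimal sketch of that point, the separation can be made mechanical: fields tied to identified individuals are kept apart from category-level information, so that only the latter is consolidated into persistent models. The field names and the record below are hypothetical, chosen purely for illustration.

```python
# Hypothetical illustration: sorting personal data (tied to identified
# individuals) from category-level information, for privacy compliance.
PERSONAL_FIELDS = {"name", "email", "address"}   # assumed field list

def split_record(record: dict) -> tuple[dict, dict]:
    """Separate individual data from category-level information."""
    personal = {k: v for k, v in record.items() if k in PERSONAL_FIELDS}
    categorical = {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}
    return personal, categorical

record = {"name": "J. Doe", "email": "jd@example.com",
          "age_band": "30-39", "segment": "premium"}
personal, info = split_record(record)

# Personal data stays at platform level with a limited lifespan;
# only the category-level information is consolidated into models.
print(info)  # {'age_band': '30-39', 'segment': 'premium'}
```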
With regard to decision-making processes, business intelligence uses the distinction to integrate levels, from operations to strategic planning, the former dealing with observations and operations (data), the latter with concepts and categories (information and knowledge).
Representations: Knowledge Architecture
As noted above, the distinction between data and information is a necessary counterpart of the integration of operational and intelligence processes; that implies in return bringing data, information, and knowledge under a common conceptual roof, respectively as resources, assets, and services:
Resources: data is captured through continuous and heterogeneous flows from a wide range of sources.
Assets: information is built by adding identity, structure, and semantics to data.
Services: knowledge is information put to use through decision-making.
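The three-step progression above can be sketched in code: transient data records are consolidated into information assets by adding identity, structure, and semantics, and the resulting information is then put to use in a decision. All the names, records, and the restocking rule are hypothetical, chosen only to make the progression concrete.

```python
from dataclasses import dataclass

# Resources: raw data captured from heterogeneous flows,
# with no identity or business semantics attached.
raw_feed = [
    {"ts": "2024-03-01T09:00", "src": "web", "payload": {"sku": "A42", "qty": 2}},
    {"ts": "2024-03-01T09:05", "src": "pos", "payload": {"sku": "A42", "qty": 1}},
]

@dataclass
class OrderLine:
    """Asset: identity, structure, and semantics added to data."""
    order_id: str    # identity
    sku: str         # structure: typed, named fields
    quantity: int
    channel: str     # semantics: mapped to a business category

def to_information(record: dict, seq: int) -> OrderLine:
    """Consolidate a transient data record into a managed information asset."""
    channels = {"web": "online", "pos": "in-store"}   # semantic mapping
    return OrderLine(
        order_id=f"ORD-{seq:04d}",
        sku=record["payload"]["sku"],
        quantity=record["payload"]["qty"],
        channel=channels[record["src"]],
    )

assets = [to_information(r, i) for i, r in enumerate(raw_feed, start=1)]

# Services: knowledge is information put to use in decision-making,
# here an (assumed) restocking rule.
def restock_needed(lines: list, threshold: int = 2) -> set:
    totals: dict = {}
    for line in lines:
        totals[line.sku] = totals.get(line.sku, 0) + line.quantity
    return {sku for sku, qty in totals.items() if qty >= threshold}

print(restock_needed(assets))  # {'A42'}
```

The point of the sketch is the boundary it draws: upstream of `to_information` everything is a resource with a limited lifespan; downstream, everything is a managed asset available for decision-making.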
Ontologies, which are meant to encompass any and every kind of knowledge, are ideally suited to the management of whatever pertains to enterprise architecture: thesauruses, models, heuristics, etc.
The significance of the distinction between data and information shows up at the two ends of the spectrum:
On one hand, it clarifies the meaning of metadata, to be understood as attributes of observations independent of semantics, a dimension that plays a critical role in machine learning.
On the other hand, enshrining the distinction between what can be known of individual facts or phenomena and what can be abstracted into categories enables open and dynamic knowledge management, also a critical requisite for machine learning.
As demonstrated by a simple Google search, the MBSE acronym seems to be widely and consistently understood. Yet the consensus about 'M' standing for models comes with different meanings for 'S', standing either for software or for different kinds of systems.
In practice, the scope of model-based engineering has been mostly limited to design-to-code ('S' for software) and manufacturing ('S' for physical systems), leaving the engineering of symbolic systems like organizations largely overlooked.
Models, Software, & Systems
Models are symbolic representations of actual (descriptive models) or contrived (prescriptive models) domains. Applied to systems engineering, models are meant to serve specific purposes: requirements analysis, simulation, software design, etc. With software as the end product of systems engineering, design models can be seen as a special case of models characterized by target (computer code) and language (executable instructions). Hence the letter 'S' in the MBSE acronym, which can stand for 'system' as well as 'software'.
As far as practicalities are concerned, the latter is the usual understanding, specifically for the use of design models to generate code, either for software applications or as part of devices combining software and hardware.
When enterprise systems are taken into consideration, such a limited perspective comes with consequences:
It puts the focus on domain specific implementations, ignoring the benefits for enterprise architecture.
It leaves unaddressed the conceptual debt between models of business and organization on one side, and models of systems on the other.
Both stand in the way of the necessary integration of enterprise architectures immersed in digital environments.
Organizations as Symbolic Systems
As social entities, enterprises are set in symbolic realms: organizational, legal, and monetary. Now, due to the digital transformation, even their operations are taking a symbolic turn. So, assuming models could be reinstated as abstractions at enterprise level, MBSE would become the option of choice, providing a holistic view across organizations and systems (conceptual and logical models) while encapsulating projects and applications (design models).
That distinction between symbolic and actual alignments, the former with conceptual and logical models set between organization and systems, the latter with design models set between projects and applications, is the cornerstone of enterprise architecture. Hence the benefits of implementing it through model based system engineering.
While MBSE frameworks supporting the final cycle of engineering (from design downstream) come with a proven track record, there is nothing equivalent upstream linking business and organization to systems, except for engineering silos using domain-specific languages. Redefined in terms of enterprise architecture abstractions, MBSE could bring leveraged benefits all along the development process, independently of activity, skills, organization, or methods, for enterprises as well as for services and solutions providers.
As a modeling framework, it would enhance the traceability and transparency of products (quality) as well as processes (delays and budgets), along and across supply chains.
‘S’ For Service
Implemented as a service, MBSE could compound the benefits of cloud-based environments (accessibility, convenience, security, etc.), and could also be customized without undermining interoperability.
To that end, MBSE as a service could be reframed in terms of:
Customers (projects): services should address cross-organizational and architecture concerns, from business intelligence to code optimization, and from portfolio management to components release.
Policy (processes): services should support full neutrality with regard to organizations and methods, which implies that tasks and work units should be defined only with regard to the status of artifacts.
Contracts (work units and outcomes): services are to support the definition of work units and the assessment of outcomes:
Work units are to be defined bottom-up from artefacts.
Outcomes are to be assessed with regard to work units.
Value in model transformations: transparency and traceability, supported by two distinct model sets, architecture models and implementation models.
Endpoints (collaboration): if services are to be neutral with regard to the way they are provided, the collaboration between the wide range of participants is to be managed accordingly; that can only be achieved through a collaboration framework built on layered and profiled ontologies.
As a concluding remark, cross-breeding MBSE with Software as a Service (SaaS) could help to integrate systems and knowledge architectures, paving the way to a comprehensive deployment of machine learning technologies.
Like every artifact, models can be defined by nature and function. With regard to nature, models are symbolic representations, descriptive (categories of actual instances) or prescriptive (blueprints of artifacts). With regard to function, models can be likened to currency, as they serve as means of exchange, instruments of measure, or repositories of value.
On that understanding, models can be neatly characterized by their intent:
With no use of models, direct exchange (barter) can be achieved between business analysts and software engineers.
Models are needed as a medium supporting exchange between organizational units with different business or technical concerns.
Models are used to assess contents with regard to size, complexity, quality, etc.
Models are kept and maintained for subsequent use or reuse.
Depending on the organization, providers and customers could then be identified, as well as modeling languages.
As with every measurement, software engineering metrics must be defined by clear targets and purposes, and using them shouldn't affect either of them.
On that account, a clear distinction should be maintained between business value (set independently of supporting systems), the size and complexity of functionalities, and the work effort needed for their development. As far as systems are concerned, the function points approach can be defined with regard to the nature of requirements (business or system) and their scope (primary for artifacts, adjustment for architecture):
Measures of business requirements are based on intrinsic domain complexity (domain function points, or DFP), adjusted for activities (adjustment function points, or AFP); they are set at artifact level, independently of operational constraints or supporting systems.
Business requirements metrics are added up and adjusted for operational constraints.
Functional requirements measures target the subset of business requirements meant to be supported by systems. As such, they are best defined at use case level (use case function points, or UCFP).
Metrics for quality of service may be specific to functionalities or contingent on architectures and operational constraints.
Whatever the difficulties of implementation, function points remain the only principled approach to software and systems assessment, and consequently to reliable engineering cost/benefit analysis and planning.
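The chain of measures above (DFP adjusted into AFP, then narrowed to UCFP) can be made concrete with a toy computation. The source defines these measures by scope, not by formula, so the weights, the multiplicative adjustment, and the supported-ratio narrowing below are all assumptions, chosen only to illustrate how the three levels would compose.

```python
# Illustrative sketch only: weights and adjustment scheme are assumed,
# not taken from any function-points standard.

def domain_fp(entities: int, relationships: int) -> float:
    """Domain function points (DFP): intrinsic domain complexity,
    independent of operational constraints or supporting systems."""
    return entities * 4 + relationships * 2   # assumed weights

def adjusted_fp(dfp: float, activity_factor: float) -> float:
    """Adjustment function points (AFP): DFP adjusted for activities."""
    return dfp * activity_factor

def use_case_fp(afp_total: float, supported_ratio: float) -> float:
    """Use case function points (UCFP): the subset of business
    requirements meant to be supported by systems."""
    return afp_total * supported_ratio

dfp = domain_fp(entities=10, relationships=15)   # 10*4 + 15*2 = 70
afp = adjusted_fp(dfp, activity_factor=1.2)      # 70 * 1.2 = 84.0
ucfp = use_case_fp(afp, supported_ratio=0.75)    # 84 * 0.75 = 63.0
print(dfp, afp, ucfp)
```

Whatever the actual weights, the structural point stands: business value, functional size, and work effort are measured at different levels and must not be conflated.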
The Agile development model should not be seen as a panacea or identified with specific methodologies. Instead it should be understood as the default option, to be applied whenever phased solutions can be factored out.
Alternative: when those conditions cannot be met, i.e. when phased solutions are required, model-based systems engineering frameworks should be used to integrate business-driven (agile) projects with architecture-oriented (phased) ones.