EA Capacity & Maturity

Preamble

Appraising enterprises' capability and maturity means navigating between the fuzzy depths of business models and the marked features of supporting systems.

Keeping in touch with environment (Urs Fischer)

Business capabilities are by nature a fleeting lot to assess, considering the innate diversity and volatility of circumstances, and the fact that successes belong to the exception more than the rule. By contrast, the assessment of systems capabilities is much easier, as they can be defined in architectural terms.

Digital transformation may help to solve the dilemma by dissolving it: given the merging of systems and organization, the key success factor for enterprises is their capacity to assess changes and opportunities and to adjust their processes and architectures accordingly. On that account, performances are to depend on the dynamic alignment of enterprises' representations (maps) with their business and technological environments (territories).

Enterprise Architecture & Environments

Based on a broadly accepted architecture paradigm epitomised by the Zachman framework, and figured by the Pagoda blueprint, enterprise environments and architectures are to be defined along three levels (aka tiers or layers):

Enterprises must align their maps and territories
  • Data level, for digital environments, operations, and platforms; described by physical and analytical models.
  • Information level, for systems and engineering; described by logical and functional models.
  • Knowledge level, for business objectives and organization; described by conceptual and process models.

That framework can be used to assess enterprises' capacity to change independently of business specificities.

Dealing with Changes

Whereas absolute measurements are tied to valuation contexts, relative ones can be ecumenical, hence the benefit of targeting the capacity to change instead of trying to measure capacity by itself.

As far as enterprise architectures are concerned, changes can originate from business or technological environments, the former at process level, the latter at application level.

To begin with, as much change as possible should be dealt with at application level through organizational, functional, or operational adjustments, without affecting architectural assets. That is to be achieved through architectures' versatility and plasticity (aka agility).

When architectural changes are needed, their footprint can be layered in terms of Model Driven Architecture (MDA), i.e computation independent (CIM), platform independent (PIM), and platform specific (PSM) models.

Change in Enterprise Architectures

Ideally, changes should spread top-down from computation independent models to platform specific ones, with or without affecting platform independent ones. On that account, the footprint of changes rooted in the technical environment should be circumscribed to platform adaptations and charted by platform specific models.

By contrast, changes rooted in the business environment could induce changes in any or all architecture layers:

  • Platform specific (PSM), e.g when business logic is implemented by rules engines.
  • Platform independent (PIM), e.g new business functions.
  • Computation independent (CIM), e.g new business processes.

That model-based approach is to be used to define enterprise changes in terms of entropy.

Entropy & Capacity to Change

Change is a matter of time, especially for business, and a delicate balance is to be achieved between assessments (which improve when given time until they become redundant), and commitments (which risk missing opportunities if kept waiting for too long).

The issues can be expressed by applying the OODA (Observation, Orientation, Decision, Action) loop to system engineering, as sketched after the list:

  • Changes in business and technology environments are observed at digital (e.g data mining) or conceptual level (e.g business intelligence) (a).
  • Assessment deals with the reliability of observations as well as their meaning with regard to enterprise objectives, organization, and systems (b).
  • Policies are updated and decisions made regarding the adjustment of objectives, resources, organization, or assets (c).
  • Decisions are implemented as technical and business commitments (d).
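
A minimal sketch of that loop in code form, with dictionaries standing in for environments and enterprises (function and field names are hypothetical; the (a)-(d) comments map to the list above):

    def observe(environment):
        # (a) collect signals at digital (data mining) or conceptual (business intelligence) level
        return environment.get("signals", [])

    def orient(observations, enterprise):
        # (b) keep observations deemed reliable and meaningful for objectives, organization, systems
        return [s for s in observations if s.get("reliable", False)]

    def decide(assessment, enterprise):
        # (c) update policies; adjust objectives, resources, organization, or assets
        return [{"commitment": s.get("topic")} for s in assessment]

    def act(decisions, enterprise):
        # (d) implement decisions as technical and business commitments
        enterprise.setdefault("commitments", []).extend(decisions)

    def ooda_cycle(environment, enterprise):
        act(decide(orient(observe(environment), enterprise), enterprise), enterprise)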

On that basis, the capacity to change is to depend on:

  • Osmosis: quality (accuracy, reliability,…), delay, and automaticity of observations with regard to changes in environments (data mining).
  • Operational traceability across decisions, actions and observations (process mining, verification).
  • Alignment of business and digital environments (validation).
  • Consistency of architecture models (CIMs, PIMs, PSMs).
  • Value chains and lean engineering: business logic (CIMs) directly embedded in software designs (PSMs).
Capacity to Change

With the benefits of digital transformation, these dimensions should be defined in terms of information processing.

Osmosis & Blind Spots

The generalization of digital exchanges between enterprises and their environment brings back the concept of entropy, defined by cybernetics as the quantum of energy within a system that cannot be put to use. Applied to enterprises, entropy can be understood as a blind spot on environment data, arguably a critical hindrance to their capacity to move and adjust. The primary objective should therefore be to minimize that blind spot, i.e to maximize the outcome of data processing.

As digital environments bring about a level data playground, enterprises have to make better sense of it to get a competitive edge; hence the importance of the distinction between data and information, the former obtained from the environment, the latter obtained through the processing of the former. On that basis, the capacity to change, defined as the opposite of entropy, is to be determined by the way data is processed into information.

Digital osmosis, i.e the exchange of digital data between enterprises' operations and environments, is clearly a primary factor: inbound streams can be mined so as to provide comprehensive and timely snapshots, and outbound streams can be woven into business processes, enhancing the capacity to translate decisions into action.

Digital osmosis could also bolster lean engineering with the direct integration of (digital) business logic into software (e.g using rules engines), reinforcing the shortcuts between observations and decisions whenever orientation can be avoided. That would also enhance traceability between platform specific (PSMs) and operational models, as well as the monitoring of actions and the analysis of feedbacks (process mining).

Osmosis could be easily (if not accurately) estimated with the ratio between digitized flows and the whole of data flows, with measurements weighted by delays between observations and data processing.
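
A minimal sketch of such an estimate (the exponential delay discount is an assumption; the text only requires that delays weigh down the measurement):

    def osmosis_ratio(flows, half_life_hours=24.0):
        """Rough osmosis estimate: digitized share of data flows, discounted by the
        delay between observation and processing (exponential decay is assumed)."""
        weighted_digital, total = 0.0, 0.0
        for volume, digitized, delay_hours in flows:
            total += volume
            if digitized:
                weighted_digital += volume * 0.5 ** (delay_hours / half_life_hours)
        return weighted_digital / total if total else 0.0

    # e.g. two digitized flows (one processed with a day's delay) and one manual flow
    print(osmosis_ratio([(100, True, 0.0), (50, True, 24.0), (30, False, 0.0)]))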

A Taxonomy of Changes

Digital and business environments are not to be confused, and there is no reason to assume that their respective changes tally. As it happens, digital transformation may reinstall organizations at the nexus of changes by providing a powerful leverage on systems; and if entropy is considered, that is to be achieved through symbolic representations, aka models.

While names may vary, a distinction is generally made between changes according to their horizon, shorter for operational or tactical ones, longer for strategic ones. As so often with quantitative classifications, that understanding has been of limited use given the diversity of time-frames across industries. The digital transformation opens the door to a qualitative approach, defining changes according to the nature of their footprint:

  • From the business perspective, it should restate the primacy of organization for the harnessing of IT benefits.
  • From the architecture perspective, it would rank assets according to “digital modality”: symbolic (information, knowledge), tangible (e.g platforms), or a combination of both (functional architecture).
Changes can be defined according to the digital modality of their footprint

Taking the strategic perspective, changes in technical or economic factors are to affect assets and processes for a large and often undetermined number of production cycles; they must consequently be set in time-frames extending beyond managed horizons (actual chronologies depend on industry specificities). Despite regular updates, supporting models and hypotheses may lose their relevance during the intervals, with the resulting entropy hampering the assessment of opportunities and policies. Such discrepancies can be circumscribed if planned organization and computation independent models (CIMs) are systematically checked against observations.

Compared to strategic ones, functional changes can be aligned with a limited number of production cycles, which irons out most of the discrepancies between maps and territories associated with strategic planning. Instead, entropy could arise from a lack of transparency and traceability between the models used to map organization and processes (CIMs) to functional (PIMs) and technical (PSMs) architectures.

Finally, operational changes can be carried out at process level, without affecting architectures. In that case entropy could result from a misalignment between the granularity of commitments and observations.

On that basis, planning and managing changes in digital and business environments should be driven by models.

Entropy & Maturity

As defined by cybernetics, entropy can be explained by the discrepancies between environment states (aka micro-states) and their representation (macro-states). Applied to enterprises' environments and architectures, that would mean territories and operations for the former, maps and policies for the latter.
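
Read in information-theoretic terms, the discrepancy can be illustrated as the residual uncertainty about micro-states once macro-states are known; a minimal sketch (the conditional-entropy reading is an interpretation, not a formula from the text):

    from collections import Counter
    from math import log2

    def residual_entropy(pairs):
        """H(micro | macro): what the maps (macro-states) fail to capture
        about the territories (micro-states); zero means no blind spot."""
        joint = Counter(pairs)                          # (macro, micro) frequencies
        marginal = Counter(macro for macro, _ in pairs) # macro frequencies
        n = len(pairs)
        return -sum(c / n * log2(c / marginal[macro])
                    for (macro, micro), c in joint.items())

    # e.g. one map category covering two distinct environment states
    print(residual_entropy([("stable", "s1"), ("stable", "s2"), ("volatile", "v1")]))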

On that account the maturity of an organization should be assessed with regard to its ability to manage changes within and across environments:

  • Consistency of changes within environments: between territories and operations (digital environments), between maps and policies (business environments).
  • Validity of changes across environments: between maps and territories, and between policies and operations.
Consistency is checked within environments, validity is checked across environments

Maturity levels could then be set according to the scope of managed entropy:

  1. Digital osmosis: all exchanges with environments come with a digital counterpart, sorted with regard to operational data, data attached to managed information, and environment data detached from managed information.
  2. Digital value chains: digital integration of business and engineering processes ensuring transparency and traceability along value chains.
  3. Organizational pivot: value chains can be redefined so as to make the best of information assets.
  4. Knowledge based architectures: enterprise layers (platforms, systems, organization) are aligned with data, information, and knowledge layers, leveraging the benefits of machine learning across enterprise organization and systems.

As it’s safe to assume that scaling maturity levels across enterprise architectures can only be carried out progressively, progresses have to be backed up by a comprehensive and ecumenical repository of physical and symbolic resources and assets, and be supported by workflows combining iterative (continuous and business driven) and model-based (phased, architecture driven) engineering processes .


Squared Outline: Business Capabilities

Despite (or because of) their ubiquity across PowerPoint presentations, business capabilities appear under a wide range of guises. Putting them in perspective could clarify the issue.

To begin with, business capabilities should not be confused with systems ones as they are supposed to be driven by changing environments and opportunities, in contrast to continuity, integrity, and returns on investments. Were it not for the need to juggle with both there would be no need of chief “whatever” officers.

Then, if they are to be assessed across changing contexts and concerns, business capabilities should not be tied to specific ones but focus on enterprise wherewithal:

  • Material: capacity and maturity of platforms with regard to changes in business processes and environments.
  • Functional: capacity and maturity of supporting systems with regard to changes in business processes.
  • Organizational: versatility and plasticity of roles and processes in dealing with changes in business opportunities.
  • Intelligence: information assets and ability of people to use them.

These could then be adjusted with regard to finance and time:

  • Strategic: assets, to be deployed and paid for across a number of fiscal years.
  • Tactical: resources, to be deployed and paid for within a single fiscal year.
  • Particular: combined assets and resources needed to support specific value chains.

The role of enterprise architects would then be to plan, assess, and manage the dynamic alignment of business and architecture capabilities.


Squared Outline: Models As Currency

Like every artifact, models can be defined by nature and function. With regard to nature, models are symbolic representations, descriptive (categories of actual instances) or prescriptive (blueprints of artifacts). With regard to function, models can be likened to currency, as they serve as means of exchange, instruments of measure, or repositories.

Along that understanding, models can be neatly characterized by their intent:

  1. No use of models: direct exchange (barter) can be achieved between business analysts and software engineers.
  2. Models are needed as medium supporting exchange between organizational units with different business or technical concerns.
  3. Models are used to assess contents with regard to size, complexity, quality, …
  4. Models are kept and maintained for subsequent use or reuse.

Depending on organizations, providers and customers could then be identified, as well as modeling languages.


Squared Outline: Enterprise Architecture

Whatever their nature, architectures can be defined as structured collections of assets and mechanisms shared by a set of active entities with common purposes: houses for dwelling, factories for manufacturing processes, office buildings for administrative ones, human beings for living, etc.

Layers of Problems & Solutions

Along that reasoning, enterprise architectures should be defined in terms of one distinction and three layers:

  1. A distinction between specific and changing business contexts and opportunities on one hand, shared and stable capabilities on the other hand (represented with the Zachman framework above).
  2. The enterprise layer deals with the representation of business environment and objectives (aka business model), organization and processes.
  3. The system layer deals with the functionalities of supporting systems independently of platforms.
  4. The platform layer deals with actual systems implementations.

It must be noted that while the layered perspective is widely agreed (names may differ), taxonomies often overlap.


Squared Outline: Layers

The immersion of enterprises into digital environments is blurring the traditional distinctions between architecture layers. Hence the need to clarify the basic notions.


The Pagoda Architecture Blueprint is derived from the Zachman framework

Beyond the differences in terminologies (layers, levels, tiers, etc), four basic taxonomies can be applied:

  1. Enterprise architecture: business processes and organization, systems, platforms (Pagoda blueprint).
  2. Functional architecture: interfaces, control, persistency, services (Model/View/Controller).
  3. Representation: physical, logical, conceptual (Pagoda blueprint).
  4. Economic intelligence: data, information, knowledge.

While some alignments are intrinsic, making explicit use of these taxonomies is useful because each serves a specific purpose.

n.b. The term “application layer” is usually defined in the context of communication architectures.


Squared Outline: Processes

While processes and activities are often defined together, the primary purpose of processes is to specify how activities are to be carried out if and when they have to be executed separately.

Given the focus on execution, definitions of processes should be aligned with the classification of events and time, namely the nature of flows and coupling:

  1. No flows (computation).
  2. Information flows between activities.
  3. Actual flows between activities, asynchronous coupling.
  4. Actual flows, synchronous coupling.

That taxonomy could be used to define execution units in line with activities and the states of objects and expectations, as sketched below.
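
A minimal data-structure sketch of that taxonomy (type and field names are hypothetical):

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class Flow(Enum):
        NONE = auto()         # 1. no flows (pure computation)
        INFORMATION = auto()  # 2. information flows between activities
        ACTUAL = auto()       # 3-4. actual flows between activities

    class Coupling(Enum):
        ASYNCHRONOUS = auto() # 3. actual flows, asynchronous coupling
        SYNCHRONOUS = auto()  # 4. actual flows, synchronous coupling

    @dataclass
    class ExecutionUnit:
        activity: str
        flow: Flow
        coupling: Optional[Coupling] = None  # only meaningful for actual flows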


 

Value Chains & EA

Preamble

The seamless integration of enterprise systems into digital business environments calls for a resetting of value chains with regard to enterprise architectures, and more specifically supporting assets.

Value Chains & Support (Daniel Jacoby)

Concerning value chains, the traditional distinction between primary and supporting activities is undermined by the generalization of digital flows, rapid changes in business environments, and the ubiquity of software agents. As for assets, the distinction could even disappear due to the intertwining of tangible resources with organization, information, and knowledge.

These difficulties could be overcome by bypassing activities and drawing value chains directly between business processes and systems capabilities.

From Activities to Processes

In theory value chains are meant to track down the path of added value across enterprise architectures; in practice their relevancy is contingent on specificity: fine when set along silos, less so if set across business functions. Moreover, value chains tied to static mappings of primary and support activities risk losing their grip when maps are redrawn, which is bound to happen more frequently with digitized business environments.

These shortcomings can be fixed by replacing primary activities by processes and support ones by system capabilities, and redefining value chains accordingly.

Substituting processes and capabilities for primary and support activities

From Processes to Functions & Capabilities

Replacing primary and support activities with processes and functions doesn't remove value chains' primary issue, namely their path along orthogonal dimensions.

That’s not to say that business processes cannot be aligned with self-contained value chains, but insofar as large and complex enterprises are concerned, value chains are to be set across business functions. Thus the benefit of resetting the issue at enterprise architecture level.

Borrowing the EA description from the Zachman framework, the mapping of processes to capabilities is meant to be carried out through functions, with business processes on one hand, architecture capabilities on the other hand.

If nothing can be assumed about the number of functions or the number of crossed processes, EA primary capabilities can be clearly identified, and functions classified accordingly, e.g: boundaries, control, entities, computation. That classification (non exclusive, as symbolized by the crossed pentagons) coincides with the nature of adjustments induced by changes in business environments:

  • Diversity and flexibility are to be expected for interfaces to systems' clients (users, devices, or other systems) and triggering events, so as to tally with channels and changes in business and technology environments.
  • Continuity is critical for the identification and semantics of business objects, whose consistency and integrity have to be maintained over time independently of users and processes.
  • In between, changes in process control and business logic should be governed by business opportunities independently of channels or platforms.

Mapping Processes to Architectures Functions & Capabilities

Processes, primary or otherwise, would be sliced according to the nature of supporting capabilities, e.g: standalone (a), real-time (b), client-server (c), orchestration service (d), business rules (e), DB access (f).

Value chains could then be attached to business processes along these functional guidelines.

Tying Value Chains to Processes

Bypassing activities is not without consequences for the meaning of value chains, as the original static understanding is replaced by a dynamic one: since value chains are now associated with specific operations, they are better understood as changes than as absolute levels. That semantic shift reflects the new business environment, with manufacturing and physical flows having been replaced by mixed (SW and HW) engineering and digital flows.

Set in a broader economic perspective, the new value chains could be likened to a marginal version of returns on capital (ROC), i.e the delta of some ratio between value and contributing assets.
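
One way to write it down (hypothetical notation, not from the source): with V_t the value produced and A_t the contributing assets over period t,

    \Delta \mathrm{ROC}_t = \frac{V_{t+1}}{A_{t+1}} - \frac{V_t}{A_t}

so that value chains mark changes in the value/assets ratio rather than absolute levels, in line with the dynamic understanding above.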

Digital business environments may also make value chains easier to assess, as changes can be directly traced to requirements at enterprise level, and more accurately marked across systems functionalities:

  • Logical interfaces (users or systems): business value tied to interactions with people or other systems.
  • Physical interfaces (devices): business value tied to real-time interactions.
  • Business logic: business value tied to rules and computations.
  • Information architecture: business value tied to systems information contents.
  • Processes architecture: business value tied to processes integration.
  • Platform configurations: business value tied to resources deployed.

Marking Value Chains in terms of changes

The next step is to frame value chains across enterprise architectures in order to map values to contributing assets.

Assets & Organization

Value chains are arguably of limited use without weighting assets' contribution. On that account, a major (if underrated) consequence of digital environments is the increasing weight of intangible assets, brought about by the merging of actual and information flows and the rising importance of economic intelligence.

For value chains, that shift presents a double challenge: first because of the intrinsic difficulty of measuring intangibles, then because even formerly tangible assets are losing their homogeneity.

Redefining value chains at enterprise architecture level may help with the assessment of intangibles by bringing all assets, tangible or otherwise, into a common frame, reinstating organization as its nexus:

  • From the business perspective, that framing restates the primacy of organization for the harnessing of IT benefits.
  • From the architecture perspective, the centrality of organization appears when assets are ranked according to modality: symbolic (e.g culture), physical (e.g platforms), or a combination of both.

Ranking assets according to modality

On that basis enterprise organization can be characterized by what it supports (above) and how it is supported (below). Given the generalization of digital environments and business flows, one could then take organization and information systems as proxies for the whole of enterprise architecture and draw value chains accordingly.

Value Chains & Assets

Trendy monikers may differ, but information architectures have become a key success factor for enterprises competing in digital environments. Their importance comes from their ability to combine three basic functions:

  1. Mining the continuous flows of relevant and up-to-date data.
  2. Analyzing and transforming data, feeding the outcome to information systems.
  3. Putting that information to use in operational and strategic decision-making processes.

A twofold momentum is behind that integration: with regard to feasibility, it can be seen as a collateral benefit of the integration of actual and digital flows; with regard to opportunity, it can give a decisive competitive edge when fittingly carried through. That makes information architecture a reference of choice for intangible assets.

Assets Contribution to Value Chains

Insofar as enterprise architecture is concerned, value chains can then be threaded through three categories of assets:

  • Tangibles: operational resources supporting processes execution.
  • Organization: roles, tasks, and responsibilities associated to processes.
  • Intangibles: data mining, information contents, business intelligence, knowledge management, and decision-making.

That approach would simultaneously meet the demands of digital environments and add practical meaning to enterprise architecture as a discipline.


 

Fitting Tools to Problems

Preamble

All too often choosing a development method is seen as a matter of faith, to be enforced uniformly whatever the problems at hand.

Tools and problems at hand (Jacob Lawrence)

As it happens, this dogmatic approach is not limited to procedural methodologies but also affects some agile factions supposedly immunized against such rigid stances.

A more pragmatic approach should use simple and robust principles to pick and apply methods depending on development problems.

Iterative vs Phased Development

Beyond the variety of methodological dogmas and tools, there are only two basic development patterns, each with its own merits.

Iterative developments are characterized by the same activity (or group of activities) carried out repetitively by the same organizational unit sharing responsibility, until some exit condition (simple or combined) is verified.

Phased developments are characterized by sequencing constraints between differentiated activities that may or may not be carried out by the same organizational units. It must be stressed that phased development models cannot be reduced to fixed-phase processes (e.g waterfall); as a corollary, they can deal with all kinds of dependencies (organizational, functional, technical, …) and be neutral with regard to implementations (procedural or declarative).

A straightforward decision-tree can thus be built, with options set by ownership and dependencies:

Decision Tree: Ownership & Dependencies
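
A minimal sketch of that decision tree (the rule set paraphrases the two sections below; the fallback branch is an assumption):

    from dataclasses import dataclass

    @dataclass
    class Project:
        single_ownership: bool    # requirements validated and products accepted
                                  # under a single organizational unit
        cross_dependencies: bool  # decisions may involve entities outside the team

    def development_scheme(project: Project) -> str:
        if project.cross_dependencies:
            # coordination across projects: phase on MBSE artifacts (CIM/PIM/PSM)
            return "phased, model-based"
        if project.single_ownership:
            # shared ownership and responsibility: iterate until exit condition
            return "iterative (agile)"
        # neither shared ownership nor charted dependencies: settle ownership first
        return "to be split or renegotiated"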

 

Shared Ownership: Agile Schemes

A project’s ownership is determined by the organizational entities that are to validate the requirements (stakeholders), and accept the products (users).

Iterative approaches, epitomized by the agile development model, are to be the default option for projects set under the authority of single organizational units, ensuring shared ownership and responsibility by business analysts and software engineers.

Projects set from a business perspective are rooted in business processes, usually through users’ stories or use cases. They are meant to be managed under the shared responsibility of business analysts and software engineers, and carried out independently of changes in architecture capabilities (a,b).

Agile Development & Architecture Capabilities

Projects set from a system perspective potentially affect architectures capabilities. They are meant to be managed under the responsibility of systems architects and carried out independently of business applications (d,b,c).

Transparency and traceability between the two perspectives would be significantly enhanced through the use of normalized capabilities, e.g from the Zachman framework:

  • Who: enterprise roles, system users, platform entry points.
  • What: business objects, symbolic representations, objects implementation.
  • How: business logic, system applications, software components.
  • When: processes synchronization, communication architecture, communication mechanisms.
  • Where: business sites, systems locations, platform resources.

It must be noted that as long as architecture and business driven cycles don't have to be synchronized (principle of continuous delivery), the agile development model can be applied uniformly; otherwise phased schemes must be introduced.

Cross Dependencies: Phased Schemes

Cross dependencies mean that, at some point during project life-cycle, decision-making may involve organizational entities from outside the team. Two mechanisms have traditionally been used to cope with the coordination across projects:

  • Fixed-phase processes (e.g Analysis/Design/Implementation) have shown serious shortcomings, as illustrated by the notorious waterfall model.
  • Milestones improve on fixed schemes by using check-points on development flows instead of predefined activities. Yet their benefits remain limited if development flows are still defined with regard to the same top-down, one-size-fits-all activities.

Model based systems engineering (MBSE) offers a way out of the dilemma by defining flows directly from artifacts. Taking OMG’s model driven architecture (MDA) as example:

  • Computation Independent Models (CIMs) describe business objects and activities independently of supporting systems.
  • Platform Independent Models (PIMs) describe systems functionalities independently of platforms technologies.
  • Platform Specific Models (PSMs) describe systems components as implemented by specific technologies.

Layered Dependencies & Development Cycles with MDA

Projects can then be easily profiled with regard to footprints, dependencies, and iteration patterns (domain, service, platform, or architecture, …).

That understanding sheds light on the complementarity of agile and phased solutions, often known as scaled agile.


Enterprise Architecture Basic Tenets

Preamble

Humans often expect concepts to come with innate if vague meanings before being compelled to withstand endless and futile controversies around definitions. Going the other way would be a better option: start with differences, weed out irrelevant ones, and use remaining ones to advance.

How to keep business rolling (Tamar Ettun)

Concerning enterprises, it would start with the difference between business and architecture, and proceed with the wholeness of data, information, and knowledge.

Business Architecture is an Oxymoron

Business being about time and competition, success is not to be found in recipes but would depend on particularities with regard to objectives, use of resources, and timing. These drives are clearly at odds with architectures' rationales for shared, persistent, and efficient structures and mechanisms. As a matter of fact, dealing with the conflicting nature of business and architecture concerns can be seen as a key success factor for enterprise architects, with information standing at the nexus.

Data as Resource, Information as Asset, Knowledge as Service

Paradoxically, the need of a seamless integration of data, information, and knowledge means that the distinction between them can no longer be overlooked.

  1. Data is captured through continuous and heterogeneous flows from a wide range of sources.
  2. Information is built by adding identity, structure, and semantics to data. Given its shared and persistent nature, it is best understood as an asset.
  3. Knowledge is information put to use through decision-making. As such it is best understood as a service.

Ensuring the distinction as well as the integration must be a primary concern of enterprise architects.

Sustainable Success Depends on a Balancing Act

Success in business is an unfolding affair, on one hand challenged by circumstances and competition, on the other hand to be consolidated by experience and lessons learnt. Meeting challenges while warding off growing complexity will depend on business agility and the versatility and plasticity of organizations and systems. That should be the primary objective of enterprise architects.


GDPR Ontological Primer

Preamble

European Union’s General Data Protection Regulation (GDPR), to come into effect  this month, is a seminal and momentous milestone for data privacy .

Nothing Personal (Arthur Szyk)

Yet, as reported by Reuters correspondents, European enterprises and regulators are not ready; more worryingly, few (except consultants) are confident about GDPR direction.

Misgivings and uncertainties should come as no surprise considering GDPR’s two innate challenges:

  • Regulating privacy rights represents a very ambitious leap into a digital space now at the core of corporate business strategies.
  • Compliance will not be put under a single authority but be overseen by an assortment of national and regional authorities across the European Union.

On that account, ontologies appear as the best (if not the only) conceptual approach able to bring contexts (EU nations), concerns (business vs privacy), and enterprises (organization and systems) into a shared framework.

A workbench built with the Caminao ontological kernel is meant to explore the scope and benefits of that approach, with a beta version (Protégé/OWL 2) available for comments on the Stanford/Protégé portal using the link: Caminao Ontological Kernel (CaKe_GDPR).

Enterprise Architectures & Regulations

Compared to domain-specific regulations, GDPR is a governance-oriented regulation set across business concerns and enterprise organization; but unlike similarly oriented ones like accounting, GDPR is aiming at the nexus of business competition, namely the processing of data into information and knowledge. With such a strategic stake, compliance is bound to become a game-changer cutting across business intelligence, production systems, and decision-making. Hence the need for an integrated, comprehensive, and consistent approach to the different dimensions involved:

  • Concepts upholding businesses, organizations, and regulations.
  • Documentation with regard to contexts and statutory basis.
  • Regulatory options and compliance assessments.
  • Enterprise systems architecture and operations.

Moreover, as for most projects affecting enterprise architectures, carrying through GDPR compliance is to involve continuous, deep, and wide ranging changes that will have to be brought off without affecting overall enterprise performances.

Ontologies arguably provide a conclusive solution to the problem, if only because there is no other way to bring code, models, documents, and concepts under a single roof. That could be achieved by using ontology profiles to frame GDPR categories along enterprise architecture models and components.

Basic GDPR categories and concepts (black color) as framed by the Caminao Kernel

Compliance implementation could then be carried out iteratively across four perspectives:

  • Personal data and managed information.
  • Lawfulness of activities.
  • Time and events.
  • Actors and organization.

Data & Information

To begin with, GDPR defines 'personal data' as "any information relating to an identified or identifiable natural person ('data subject')". Insofar as logic is concerned, that definition implies an equivalence between 'data' and 'information', an assumption clearly challenged by the onslaught of big data: if proof were needed, the Cambridge Analytica episode demonstrates how easily raw data can become a personal affair.

Ideally, a formal and functional distinction should be maintained between data and information, the former tied to transient observations of external instances, the latter defined by the categories of managed surrogates. 

Failing an enshrined distinction between data (directly associated to instances) and information (set according to categories or types), an ontological level of indirection should be managed between regulatory intents and the actual semantics of data as managed by information systems.

Managing the ontological gap between regulatory understandings and compliance footprints

Lexical ambiguities set apart, the question is not so much about databases of well-identified records as about the flows of data continuously processed: if identities and ownership are usually set upfront by business processes, attributions may have to be credited to enterprises' know-how if and when carried out through data analytics.

Given that the distinctions are neither uniform, exclusive, nor final, ontologies will be needed to keep tabs on moves and motives. OWL 2 constructs (cf annex) could also help, first to map GDPR categories to relevant information managed by systems, second to sort out natural data from nurtured knowledge.

Activities & Purposes

Given footprints of personal data, the objective is to ensure the transparency and traceability of the processing activities subject to compliance.

Setting apart specific add-ons for notification and personal accesses (see below for events), charting compliance footprints is to be a complex endeavor: as there is no reason to assume some innate alignment of intended (regulation) and actual (enterprise) definitions, deciding where and when compliance should apply potentially calls for a review of all processing activities.

After taking into account the nature of activities, their lawfulness is to be determined by contexts ('purpose limitation' and 'data minimization') and time-frames ('accuracy' and 'storage limitation'). And since lawfulness is meant to be transitive, a comprehensive map of the GDPR footprint is to rely on the logical traceability and transparency of information systems as a whole, independently of GDPR.

That is arguably a challenging long-term endeavor, all the more so given that some kind of Chinese Wall has to be maintained around enterprise strategies, know-how, and operations. It ensues that an ontological level of indirection is again necessary between regulatory intents and effective processing activities.

Along that reasoning compliance categories, defined on their own, are first mapped to categories of functionalities (e.g authorization) or models (e.g use cases).

Compliance categories are associated upfront to categories of functionalities (e.g authorization) or models (e.g use cases).

Then, actual activities (e.g “rateCustomerCredit”) can be progressively brought into the compliance fold, either with direct associations with regulations or indirectly through associated models (e.g “ucRateCustomerCredit” use case).

Compliance as carried out through Use Case

The compliance backbone can be fleshed out using OWL 2 mechanisms (see annex) in order to:

  • Clarify the logical or functional dependencies between processing activities subject to compliance.
  • Qualify their lawfulness.
  • Draw equivalence, logical, or functional links between compliance alternatives.

That is to deal with the functional compliance of processing activities; but the most far-reaching impact of the regulation may come from the way time and events are taken into account.

Time & Events

As noted above, time is what makes the difference between data and information, and setting rules for notification makes that difference lawful. Moreover, by adding time constraints to the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones. That apparently incidental departure echoes the immersion of systems into digitized business environments, making all time-scales equal whatever their nature. Such flattening is to induce crucial consequences for enterprise architectures.

That shift together with the regulatory intent are best taken into account by modeling events as changes in expectations, physical objects, processes execution, and symbolic objects, with personal data change belonging to the latter.

Mapping internal (symbolic) and external (actual) events is a critical element of GDPR compliance

Setting apart events specific to GDPR (e.g data breaches), compliance with regard to accuracy and storage limitation regulations will require that all events affecting personal data:

  • Are set in time-frames, possibly overlapping.
  • Have notification constraints properly documented.
  • Have likelihood and costs of potential risks assessed.

As with data and activities, OWL 2 constructs are to be used to qualify compliance requirements.
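
As an illustration, the statutory 72-hour window for breach notifications (Art. 33) can anchor such time-frames; a minimal sketch (field names and risk attributes are assumptions):

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    # GDPR (Art. 33) sets a 72-hour window for notifying the supervisory authority
    # of a personal data breach; other constraints would be documented per event type.
    BREACH_NOTIFICATION_WINDOW = timedelta(hours=72)

    @dataclass
    class PersonalDataEvent:
        kind: str               # e.g. "data_breach", "rectification", "erasure"
        detected_at: datetime
        risk_likelihood: float  # assumed 0..1 assessment, to be documented
        risk_cost: float        # assumed estimate of potential damage

        def notification_deadline(self) -> Optional[datetime]:
            # in this sketch only breaches carry the statutory 72-hour constraint
            if self.kind == "data_breach":
                return self.detected_at + BREACH_NOTIFICATION_WINDOW
            return None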

Actors & Organization

GDPR introduces two specific categories of actors (aka roles): one (data subject) for natural persons, and one for actors set by organizations, either specifically for GDPR assignment, or by delegation to already defined actors.

GDPR roles can be set specifically or delegated

OWL 2 can then be used to detail how regulatory roles can be delegated to existing ones, enabling a smooth transition and a dynamic adjustment of enterprise organization with regulatory compliance.

It must be stressed that the semantic distinction between identified agents (e.g natural persons) and the roles (aka UML actors) they play in processes is of particular importance for GDPR compliance because who (or even what) is behind an actor interacting with a system is to remain unknown to the system until the actor can be authentically identified. If that ontological lapse is overlooked there is no way to define and deal with security, confidentiality or privacy regulations.

Conclusion

The use of ontologies brings clear benefits for regulators, enterprise governance, and systems architects.

Without shared conceptual guidelines, chances are that the European regulatory orchestra will get lost in squabbles about minutiae before sliding into cacophony.

With regard to governance, bringing systems and regulations into a common conceptual framework is to enable clear and consistent compliance strategies and policies, as well as smooth learning curves.

With regard to architects, ontology-based compliance is to bring cross benefits and externalities, e.g from improved traceability and transparency of systems and applications.

Annex A: Mapping Regulations to Models (sample)

To begin with, OWL 2 can be used to map GDPR categories to relevant resources as managed by information systems:

  • Equivalence: GDPR and enterprise definitions coincide.
  • Logical intersection, union, complement: GDPR categories defined by, respectively, a cross, merge, or difference of enterprise definitions.
  • Qualified association between GDPR and enterprise categories.

Assuming the categories properly identified, the language can then be employed to define the sets of regulated instances:

  • Logical property restrictions, using existential and universal quantification.
  • Functional property restrictions, using joints on attributes values.

Other constructs, e.g cardinality or enumerations, could also be used for specific regulatory constraints.
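
By way of illustration, the 'personal data' definition quoted earlier could be rendered with such constructs, in description-logic notation (class and property names are hypothetical, not taken from the Caminao kernel):

    \mathrm{PersonalData} \equiv \mathrm{Information} \sqcap \exists\, \mathrm{relatesTo}.\mathrm{DataSubject}
    \mathrm{DataSubject} \equiv \mathrm{NaturalPerson} \sqcap (\mathrm{Identified} \sqcup \mathrm{Identifiable})
    \mathrm{CustomerRecord} \sqsubseteq \exists\, \mathrm{contains}.\mathrm{PersonalData}

The first axiom states an equivalence, the second combines intersection and union, and the third is an existential restriction carving out a set of regulated instances.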

Finally, some OWL 2 built-in mechanisms can significantly improve the assessment of alternative compliance policies by expounding regulations with regard to:

  • Equivalence, overlap, or complementarity.
  • Symmetry or asymmetry.
  • Transitivity.
  • etc.

Annex B: Mapping Regulations to Capabilities

GDPR can be mapped to systems capabilities using the well-established Zachman taxonomy, set by crossing architecture functionalities (Who, What, How, Where, When) with layers: enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies).

Regulatory Compliance vs Architectures Capabilities

These layers can be extended so as to apply uniformly across external ontologies, from well-defined (e.g regulations) to fuzzy (e.g business prospects or new technologies) ones, e.g:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

Such mapping is to significantly enhance the transparency of regulatory policies.
