Brief Outline: Processes

Enterprises can be defined in terms of systems and processes, the former supporting the latter.

Activities may have to be executed separately

As with systems, processes can be initially characterized using four basic yardsticks.

  1. The prime purpose of processes is not to specify activities but only the way they must be carried out if and when activities have to be executed separately.
  2. Separate execution implies differentiated activities.
  3. Separate execution can be dictated by intrinsic constraints, organizational context, or resource availability.
  4. Iterative processes and MBSE can be seen as marking the conceptual limits of processes, the former because a single activity (possibly composite) is repeated, the latter because model transformations can, at least in theory, be carried out independently of contexts and time-frames.

While these four tenets are not enough to define processes, they should serve as necessary foundations.

Further Reading

Brief Outline: Symbolic Systems

“Computer systems, robots, and people are all examples of symbolic systems, agents that use meaningful symbols to represent the world around them so as to communicate and generally act in the world,”

 (Stanford University’s Symbolic Systems Program)

Symbolic Representations Are Concrete Objects (Albert) 

Most misconceptions about IT systems can be corrected with a proper understanding of symbolic representations:

  1. Symbolic objects are concrete.
  2. Symbolic representations are symbolic objects tied to objects, agents, or phenomena, physically or socially identified within the domain considered.
  3. Surrogates are symbolic representations used to reflect the state of their counterparts in the domains considered.
  4. Systems are containers used to manage surrogates.

These four simple tenets can be used as the pillars and the wheels of the whole discipline.
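
As an illustration, here is a minimal Python sketch of these tenets (all names and attributes are hypothetical): surrogates are concrete symbolic objects tied to identified counterparts in the domain, and the system is the container that manages them.

```python
from dataclasses import dataclass, field

@dataclass
class DomainObject:
    """Counterpart physically or socially identified in the business domain."""
    identity: str            # e.g. a VIN, a contract number

@dataclass
class Surrogate:
    """Symbolic representation reflecting the state of its counterpart."""
    counterpart: DomainObject
    state: dict = field(default_factory=dict)

    def update(self, **changes):
        # keep the surrogate aligned with the state of its counterpart
        self.state.update(changes)

class System:
    """Container used to manage surrogates."""
    def __init__(self):
        self._surrogates: dict[str, Surrogate] = {}

    def register(self, counterpart: DomainObject) -> Surrogate:
        surrogate = Surrogate(counterpart)
        self._surrogates[counterpart.identity] = surrogate
        return surrogate

# usage: the system keeps a surrogate in sync with a rented car identified by its VIN
fleet = System()
car = fleet.register(DomainObject(identity="VIN-123"))
car.update(mileage=42000, status="rented")
```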

Further Reading

 

 

Focus: Users’ Stories & Use Cases

Preamble

Agile and phased development solutions are meant to solve different problems and therefore differ in artifacts and activities; that can be illustrated by requirements, understood as words in progress for the former, etched statements for the latter.

Running Stories (Kara Walker)

Ignoring that distinction is to make stories stutter from hiccupped iterations, or phases sputter along ripped milestones.

Agile & Phased Tell Different Stories Differently

As illustrated by the ill-famed waterfall, assuming that requirements can be fully set upfront often puts projects at the hazard of premature commitments; conversely, giving free rein to expectations could put requirements on collision courses.

That apparent dilemma can generally be worked out by setting apart business outlines from users’ stories, the latter to be scripted and coded on the fly (agile), the former analysed and documented as a basis for further developments (phased). To that end project managers must avoid a double slip:

  • Mission creep: happens when users’ stories are mixed with business models.
  • Jump to conclusions: happens when enterprise business cases prevail over the specifics of users’ concerns.

Interestingly, the distinction between purposes (users' concerns vs business functions) can be aligned with one between language semantics (natural vs modeling).

Semantics: Capture vs Analysis

Beyond methodological contexts (agile or phased), a clear distinction should be made between requirements capture (c) and modeling (m): contrary to the former, which translates sequential specifications from natural to programming (p) languages without breaking syntactic and semantic continuity, the latter carries out a double translation for dimension (sequence vs layout) and language (natural vs modeling).

Semantic continuity (c>p) and discontinuity (c>m>p)

The continuity between natural and programming languages is at the root of the agile development model, enabling users’ stories to be iteratively captured and developed into code without intermediate translations.

That’s not the case with modeling languages, because abstractions introduce a discontinuity. As a corollary, requirements analysis is to require some intermediate models in order to document translations.

The importance of discontinuity can be neatly demonstrated by the use of specialization and generalization in models: the former taking into account new features to characterize occurrences (semantic continuity), the latter consolidating the meaning of features already defined (semantic discontinuity).
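
The point can be sketched with a small, purely illustrative class example: specialization adds new features to characterize occurrences without altering the meaning of inherited ones, whereas generalization introduces an abstraction that consolidates, and thereby redefines, features already present in existing classes.

```python
from abc import ABC, abstractmethod

# Specialization (semantic continuity): Truck adds a feature to an existing Vehicle class,
# inherited features keep their meaning.
class Vehicle:
    def __init__(self, registration: str):
        self.registration = registration

class Truck(Vehicle):
    def __init__(self, registration: str, payload: float):
        super().__init__(registration)
        self.payload = payload  # new feature used to characterize occurrences

# Generalization (semantic discontinuity): a new abstraction consolidates features
# already defined separately in Car and Boat, so their meaning has to be re-assessed.
class Conveyance(ABC):
    @abstractmethod
    def capacity(self) -> int: ...  # consolidated meaning of 'seats' and 'berths'

class Car(Conveyance):
    def __init__(self, seats: int):
        self.seats = seats
    def capacity(self) -> int:
        return self.seats

class Boat(Conveyance):
    def __init__(self, berths: int):
        self.berths = berths
    def capacity(self) -> int:
        return self.berths
```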

Confusion may arise when users' stories are understood as a documented basis for further developments; and that confusion between outcomes (coding vs modeling) is often compounded by one between intents (users' concerns vs business cases).

Concerns: Users’ Stories vs Business Cases

As noted above, users' stories can be continuously developed into code because a semantic continuity can be built between natural and programming language statements. That necessary condition is not a sufficient one because users' stories also have to stand as a complete and exclusive basis for applications.

Such a complete and exclusive mapping to applications is de facto guaranteed by continuous and incremental development, independently of the business value of stories. Not so with intermediate models which, given the semantic discontinuity, may create back-doors for broader concerns, e.g when some features are redefined through generalization. Hence the benefits of a clarity of purpose:

  • Users’ stories stand for specific requirements meant to be captured and coded by increments. Documentation should be limited to application maintenance and not confused with analysis models.
  • Use cases should be introduced when stories are to be consolidated or broader concerns factored out, e.g the consolidation of features or business cases.

Sorting out the specifics of users' concerns while keeping them in line with business models is at the core of business analysts' job description. Since that distinction is seldom directly given in requirements, it could be made easier if aligned with modeling options: stories and specialization for users' concerns, models and generalization for business features.

From Stories to Cases

The generalization of digital environments entails structural and operational adjustments within enterprise architectures.

At enterprise level the integration of homogeneous digital flows and heterogeneous symbolic representations can be achieved through enterprise architectures and profiled ontologies. But that undertaking is contingent on the way requirements are first dealt with, namely how the specifics of users’ needs are intertwined with business designs.

As suggested above, modeling schemes could help to distinguish as well as consolidate users' narratives and business outlooks, capturing the former with users' stories and the latter with use case models.

Use cases describe the part played by systems taking into account all supported stories.

That would neatly align means (part played by supporting systems) with ends (users’ stories vs business cases):

  • Users’ stories describe specific objectives independently of the part played by supporting systems.
  • Use cases describe the part played by systems taking into account all supported stories.

It must be stressed that this correspondence is not a coincidence: the consolidation of users’ stories into broader business objectives becomes a necessity when supporting systems are taken into account, which is best done with use cases.

Aligning Stories with Cases

Stories and models are orthogonal descriptions, the former being sequenced, the latter laid out; it ensues that correspondences can only be carried out for individuals uniformly identified (#) at enterprise and system levels, specifically: roles (aka actors), events, business objects, and execution units.

Crossing cases with stories: events, roles, business objects, and execution units must be uniformly and consistently identified (#).

It must be noted that this principle is supposed to apply independently of the architectures or methodologies considered.
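
A minimal sketch of that principle (class and identifier names are assumptions): stories and use cases can only be crossed through roles, events, business objects, and execution units bearing the same identifiers at enterprise and system levels.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Anchor:
    """Individual uniformly identified (#) at enterprise and system levels."""
    kind: str   # 'role', 'event', 'business object', 'execution unit'
    ref: str    # shared identifier, e.g. '#customer', '#orderReceived'

@dataclass
class UserStory:
    title: str
    anchors: set[Anchor] = field(default_factory=set)

@dataclass
class UseCase:
    title: str
    anchors: set[Anchor] = field(default_factory=set)

def crossings(story: UserStory, case: UseCase) -> set[Anchor]:
    """Correspondences only hold for individuals identified on both sides."""
    return story.anchors & case.anchors

# usage: a story and a use case meet on the customer role and the order object
customer = Anchor("role", "#customer")
order = Anchor("business object", "#order")
story = UserStory("Pay an order online", {customer, order})
case = UseCase("Process payments", {customer, order, Anchor("event", "#paymentReceived")})
shared = crossings(story, case)   # {customer, order}
```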

With continuity and consistency of identities achieved at architecture level, the semantic discontinuity between users' stories and models (classes or use cases) can be managed provided a clear distinction is maintained between:

  • Modeling abstractions, introduced by requirements analysis and applied to artifacts identified at architecture level.
  • The semantics of attributes and operations, defined by users’ stories and directly mapped to classes or use cases features.
From Capture to Analysis: Abstractions introduce a semantic discontinuity

Finally, stories and cases need to be anchored to epics and enterprise architecture.

Business Cases & Enterprise Stories

Likening epics to enterprise stories would neatly frame the panoply of solutions:

  • At process level users’ stories and use cases would be focused respectively on specific business concerns and supporting applications.
  • At architecture level epics and business cases would deal respectively with business models and objectives, and supporting systems capabilities.
A Squared frame for enterprise architectures governance

That would provide a simple yet principled basis for enterprise architectures governance.

Further Reading

External Links

Fitting Tools to Problems

Preamble

All too often choosing a development method is seen as a matter of faith, to be enforced uniformly whatever the problems at hand.

Tools and problems at hand (Jacob Lawrence)

As it happens, this dogmatic approach is not limited to procedural methodologies but also affects some agile factions supposedly immunized against such rigid stances.

A more pragmatic approach should use simple and robust principles to pick and apply methods depending on development problems.

Iterative vs Phased Development

Beyond the variety of methodological dogmas and tools, there are only two basic development patterns, each with its own merits.

Iterative developments are characterized by the same activity (or a group of activities) carried out repetitively by the same organizational unit sharing responsibility, until some exit condition (simple or combined) is verified.

Phased developments are characterized by sequencing constraints between differentiated activities that may or may not be carried out by the same organizational units. It must be stressed that phased development models cannot be reduced to fixed-phase processes (e.g waterfall); as a corollary, they can deal with all kinds of dependencies (organizational, functional, technical, …) and be neutral with regard to implementations (procedural or declarative).

A straightforward decision-tree can thus be built, with options set by ownership and dependencies.
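
A minimal sketch of that decision tree, with hypothetical criteria names standing for ownership and dependencies:

```python
def development_scheme(shared_ownership: bool, cross_dependencies: bool) -> str:
    """Pick a development pattern from ownership and dependencies.

    shared_ownership: requirements validated and products accepted by a single
    organizational unit (stakeholders and users under one authority).
    cross_dependencies: decision-making involves entities outside the team.
    """
    if shared_ownership and not cross_dependencies:
        # iterate and deliver continuously under a single responsibility
        return "iterative (agile) scheme"
    # decisions cross team or architecture boundaries: check-points are needed
    return "phased scheme (milestones or model-based flows)"

print(development_scheme(shared_ownership=True, cross_dependencies=False))
print(development_scheme(shared_ownership=False, cross_dependencies=True))
```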


 

Shared Ownership: Agile Schemes

A project’s ownership is determined by the organizational entities that are to validate the requirements (stakeholders), and accept the products (users).

Iterative approaches, epitomized by the agile development model, are to be the default option for projects set under the authority of single organizational units, ensuring shared ownership and responsibility by business analysts and software engineers.

Projects set from a business perspective are rooted in business processes, usually through users’ stories or use cases. They are meant to be managed under the shared responsibility of business analysts and software engineers, and carried out independently of changes in architecture capabilities (a,b).

Agile Development & Architecture Capabilities

Projects set from a system perspective potentially affect architecture capabilities. They are meant to be managed under the responsibility of systems architects and carried out independently of business applications (d,b,c).

Transparency and traceability between the two perspectives would be significantly enhanced through the use of normalized capabilities, e.g from the Zachman framework:

  • Who: enterprise roles, system users, platform entry points.
  • What: business objects, symbolic representations, objects implementation.
  • How: business logic, system applications, software components.
  • When: processes synchronization, communication architecture, communication mechanisms.
  • Where: business sites, systems locations, platform resources.

It must be noted that as long as architecture- and business-driven cycles don't have to be synchronized (principle of continuous delivery), the agile development model can be applied uniformly; otherwise phased schemes must be introduced.

Cross Dependencies: Phased Schemes

Cross dependencies mean that, at some point during project life-cycle, decision-making may involve organizational entities from outside the team. Two mechanisms have traditionally been used to cope with the coordination across projects:

  • Fixed-phase processes (e.g Analysis/Design/Implementation) have shown serious shortcomings, as illustrated by the notorious waterfall.
  • Milestones improve on fixed schemes by using check-points on development flows instead of predefined activities. Yet, their benefits remain limited if development flows are still defined with regard to the same top-down and one-fits-all activities.

Model based systems engineering (MBSE) offers a way out of the dilemma by defining flows directly from artifacts. Taking OMG's model driven architecture (MDA) as an example:

  • Computation Independent Models (CIMs) describe business objects and activities independently of supporting systems.
  • Platform Independent Models (PIMs) describe systems functionalities independently of platforms technologies.
  • Platform Specific Models (PSMs) describe systems components as implemented by specific technologies.
Layered Dependencies & Development Cycles with MDA

Projects can then be easily profiled with regard to footprints, dependencies, and iteration patterns (domain, service, platform, or architecture, …).
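
As a purely illustrative sketch, such a profile could be recorded by crossing MDA layers with footprints, dependencies, and iteration patterns (attribute names are assumptions):

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    CIM = "computation independent (business objects and activities)"
    PIM = "platform independent (systems functionalities)"
    PSM = "platform specific (implemented components)"

@dataclass
class ProjectProfile:
    footprint: set           # layers whose artifacts the project modifies
    dependencies: set        # layers the project relies on but leaves unchanged
    iteration_pattern: str   # e.g. 'domain', 'service', 'platform', 'architecture'

# a business-driven project: iterates on CIM and PIM artifacts, relies on a stable PSM
app_project = ProjectProfile(
    footprint={Layer.CIM, Layer.PIM},
    dependencies={Layer.PSM},
    iteration_pattern="domain",
)
```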

That understanding sheds light on the complementarity of agile and phased solutions, often known as scaled agile.

Further Reading

External Links

A Brief Ontology Of Time

“Clocks slay time… time is dead as long as it is being clicked off by little wheels; only when the clock stops does time come to life.”

William Faulkner

Preamble

The melting of digital fences between enterprises and business environments is putting a new light on the way time has to be taken into account.

Time is what happens between events (Josef Koudelka)

The shift can be illustrated by the EU GDPR: by introducing legal constraints on the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones and make all time-scales equal whatever their nature.

Ontological Limit of the W3C Time Recommendation

The W3C recommendation for OWL time description is built on the well-accepted understanding of temporal entity, duration, and position.


While there isn’t much to argue with what is suggested, the puzzle comes from what is missing, namely the modalities of time: the recommendation makes use of calendars and time-stamps but ignores what is behind, i.e time ontological dimensions.

Out of the Box

As already expounded (Ontologies & Enterprise Architecture), ontologies are at their best when a distinction can be maintained between representation and semantics. That point can be illustrated here by adding an ontological dimension to the W3C description of time:

  1. Ontological modalities are introduced by identifying (#) temporal positions with regard to a time-frame.
  2. Time-frames are open-ended temporal entities identified (#) by events.
How to add ontological modalities to time

It must be noted that initial truth-preserving properties still apply across ontological modalities.
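
A minimal sketch of that extension (class and attribute names are assumptions, not part of the W3C vocabulary): temporal positions are identified (#) with regard to a time-frame, and time-frames are open-ended temporal entities identified by events.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Event:
    """Identified (#) occurrence used to anchor a time-frame, e.g. '#contractSigned'."""
    ref: str

@dataclass
class TimeFrame:
    """Open-ended temporal entity identified by events."""
    opening: Event
    closing: Optional[Event] = None   # stays open-ended until a closing event occurs

@dataclass
class TemporalPosition:
    """Position identified with regard to a time-frame rather than a calendar."""
    frame: TimeFrame
    offset_days: float   # duration elapsed since the frame's opening event

# usage: 'thirty days after notification', independently of clocks and calendars
notification = Event("#breachNotified")
frame = TimeFrame(opening=notification)
deadline = TemporalPosition(frame=frame, offset_days=30.0)
```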

Conclusion: OWL Descriptions Should Not Be Confused With Ontologies

Languages are meant to combine two primary purposes: communication and symbolic representation, some (e.g natural, programming) being focused on the former, others (e.g formal, specific) on the latter.

The distinction is somewhat blurred with languages like OWL (Web Ontology Language) due to the versatility and plasticity of semantic networks.

Ontologies and profiles are meant to target domains; profiles and domains are in turn modeled with languages, including OWL.

That apparent proficiency may induce some confusion between languages and ontologies, the former dealing with the encoding of time representations, the latter with time modalities.

Further Readings

External Links

GDPR Ontological Primer

Preamble

European Union’s General Data Protection Regulation (GDPR), to come into effect  this month, is a seminal and momentous milestone for data privacy .

Nothing Personal (Arthur Szyk)

Yet, as reported by Reuters correspondents, European enterprises and regulators are not ready; more worryingly, few (except consultants) are confident about GDPR's direction.

Misgivings and uncertainties should come as no surprise considering GDPR’s two innate challenges:

  • Regulating privacy rights represents a very ambitious leap into a digital space now at the core of corporate business strategies.
  • Compliance will not be put under a single authority but be overseen by an assortment of national and regional authorities across the European Union.

On that account, ontologies appear as the best (if not the only) conceptual approach able to bring contexts (EU nations), concerns (business vs privacy), and enterprises (organization and systems) into a shared framework.

A workbench built with the Caminao ontological kernel is meant to explore the scope and benefits of that approach, with a beta version (Protégé/OWL 2) available for comments on the Stanford/Protégé portal using the link: Caminao Ontological Kernel (CaKe_GDPR).

Enterprise Architectures & Regulations

Compared to domain specific regulations, GDPR is a governance-oriented regulation set across business concerns and enterprise organization; but unlike similarly oriented ones like accounting, GDPR is aiming at the nexus of business competition, namely the processing of data into information and knowledge. With such a strategic stake, compliance is bound to become a game-changer cutting across business intelligence, production systems, and decision-making. Hence the need for an integrated, comprehensive, and consistent approach to the different dimensions involved:

  • Concepts upholding businesses, organizations, and regulations.
  • Documentation with regard to contexts and statutory basis.
  • Regulatory options and compliance assessments.
  • Enterprise systems architecture and operations.

Moreover, as for most projects affecting enterprise architectures, carrying through GDPR compliance is to involve continuous, deep, and wide ranging changes that will have to be brought off without affecting overall enterprise performances.

Ontologies arguably provide a conclusive solution to the problem, if only because there is no other way to bring code, models, documents, and concepts under a single roof. That could be achieved by using ontologies profiles to frame GDPR categories along enterprise architectures models and components.

Basic GDPR categories and concepts (black color) as framed by the Caminao Kernel

Compliance implementation could then be carried out iteratively across four perspectives:

  • Personal data and managed information
  • Lawfulness of activities
  • Time and Events
  • Actors and organization.

Data & Information

To begin with, GDPR defines ‘personal data’ as “any information relating to an identified or identifiable natural person (‘data subject’)”. Insofar as logic is concerned that definition implies an equivalence between ‘data’ and ‘information’, an assumption clearly challenged by the onslaught of big data: if proofs were needed, the Cambridge Analytica episode demonstrates how easily raw data can become a personal affair. Hence the need to keep an ontological level of indirection between regulatory intents and the actual semantics of data as managed by information systems.

Managing the ontological gap between regulatory understandings and compliance footprints

Once lexical ambiguities are set apart, the question is not so much about databases of well-identified records as about the flows of data continuously processed: if identities and ownership are usually set upfront by business processes, attributions may have to be credited to enterprises' know-how if and when carried out through data analytics.

Given that the distinctions are neither uniform, exclusive, nor final, ontologies will be needed to keep tabs on moves and motives. OWL 2 constructs (cf annex) could also help, first to map GDPR categories to relevant information managed by systems, second to sort out natural data from nurtured knowledge.
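
As a hedged illustration using rdflib (the namespaces and class names are hypothetical; only the OWL constructs are standard), a GDPR category could be declared equivalent to, or an intersection of, categories managed by the information system:

```python
from rdflib import Graph, Namespace, BNode
from rdflib.namespace import OWL, RDF
from rdflib.collection import Collection

GDPR = Namespace("http://example.org/gdpr#")        # hypothetical namespace
ENT = Namespace("http://example.org/enterprise#")   # hypothetical namespace

g = Graph()
g.add((GDPR.PersonalData, RDF.type, OWL.Class))

# equivalence: GDPR and enterprise definitions coincide
g.add((GDPR.DataSubject, OWL.equivalentClass, ENT.NaturalPerson))

# intersection: personal data defined as customer records carrying identifying attributes
members = BNode()
Collection(g, members, [ENT.CustomerRecord, ENT.IdentifyingData])
intersection = BNode()
g.add((intersection, RDF.type, OWL.Class))
g.add((intersection, OWL.intersectionOf, members))
g.add((GDPR.PersonalData, OWL.equivalentClass, intersection))

print(g.serialize(format="turtle"))
```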

Activities & Purposes

Given footprints of personal data, the objective is to ensure the transparency and traceability of the processing activities subject to compliance.

Setting apart (see below for events) specific add-ons for notification and personal accesses, charting compliance footprints is to be a complex endeavor: as there is no reason to assume some innate alignment of intended (regulation) and actual (enterprise) definitions, deciding where and when compliance should apply potentially calls for a review of all processing activities.

After taking into account the nature of activities, their lawfulness is to be determined by contexts (‘purpose limitation’ and ‘data minimization’) and time-frames (‘accuracy’ and ‘storage limitation’). And since lawfulness is meant to be transitive, a comprehensive map of the GDPR footprint is to rely on the logical traceability and transparency of the whole information systems, independently of GDPR.

That is arguably a challenging long-term endeavor, all the more so given that some kind of Chinese Wall has to be maintained around enterprise strategies, know-how, and operations. It ensues that an ontological level of indirection is again necessary between regulatory intents and effective processing activities.

Along that reasoning, compliance categories, defined on their own, are first mapped to categories of functionalities (e.g authorization) or models (e.g use cases).

Compliance categories are associated upfront to categories of functionalities (e.g authorization) or models (e.g use cases).

Then, actual activities (e.g “rateCustomerCredit”) can be progressively brought into the compliance fold, either with direct associations with regulations or indirectly through associated models (e.g “ucRateCustomerCredit” use case).

Compliance as carried out through Use Case

The compliance backbone can be fleshed out using OWL 2 mechanisms (see annex) in order to:

  • Clarify the logical or functional dependencies between processing activities subject to compliance.
  • Qualify their lawfulness.
  • Draw equivalence, logical, or functional links between compliance alternatives.

That is to deal with the functional compliance of processing activities; but the most far-reaching impact of the regulation may come from the way time and events are taken into account.

Time & Events

As noted above, time is what makes the difference between data and information, and setting rules for notification makes that difference lawful. Moreover, by adding time constraints to the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones. That apparently incidental departure echoes the immersion of systems into digitized business environments, making all time-scales equal whatever their nature. Such flattening is to induce crucial consequences for enterprise architectures.

That shift together with the regulatory intent are best taken into account by modeling events as changes in expectations, physical objects, processes execution, and symbolic objects, with personal data change belonging to the latter.

Mapping internal (symbolic) and external (actual) events is a critical element of GDPR compliance

Setting aside events specific to GDPR (e.g data breaches), compliance with regard to accuracy and storage limitation regulations will require that all events affecting personal data:

  • Are set in time-frames, possibly overlapping.
  • Have notification constraints properly documented.
  • Have likelihood and costs of potential risks assessed.

As with data and activities, OWL 2 constructs are to be used to qualify compliance requirements.
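
A sketch of those requirements as a record attached to each event affecting personal data (attribute and authority names are illustrative assumptions; the 72-hour deadline echoes the breach notification rule):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NotificationConstraint:
    authority: str        # e.g. a national supervisory authority
    deadline_hours: int   # e.g. 72 hours for data breach notifications

@dataclass
class RiskAssessment:
    likelihood: float   # estimated probability of the risk materializing
    cost: float         # estimated impact, in a common unit

@dataclass
class PersonalDataEvent:
    """Change affecting personal data, internal or external, set in time-frames."""
    ref: str                                            # identified (#) event
    time_frames: list = field(default_factory=list)     # possibly overlapping frames
    notifications: list = field(default_factory=list)   # documented constraints
    risk: Optional[RiskAssessment] = None

breach = PersonalDataEvent(
    ref="#dataBreach-2018-05",
    time_frames=["#storagePeriod", "#contractPeriod"],
    notifications=[NotificationConstraint(authority="national DPA", deadline_hours=72)],
    risk=RiskAssessment(likelihood=0.2, cost=50_000.0),
)
```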

Actors & Organization

GDPR introduces two specific categories of actors (aka roles): one (data subject) for natural persons, and one for actors set by organizations, either specifically for GDPR assignment, or by delegation to already defined actors.

GDPR roles can be set specifically or delegated

OWL 2 can then be used to detail how regulatory roles can be delegated to existing ones, enabling a smooth transition and a dynamic adjustment of enterprise organization with regulatory compliance.
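
A minimal sketch of that delegation (role names other than GDPR's own are hypothetical): a regulatory role is either assigned to a dedicated actor or delegated to one the organization has already defined.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Actor:
    """Role (aka UML actor) played in processes."""
    name: str
    delegated_to: Optional["Actor"] = None   # None: the role is assigned specifically

# GDPR-specific roles
data_subject = Actor("Data Subject")                    # natural persons
dpo = Actor("Data Protection Officer")                  # assigned specifically

# delegation to an actor already defined by the organization
compliance_manager = Actor("Compliance Manager")
controller = Actor("Data Controller", delegated_to=compliance_manager)

def effective_actor(role: Actor) -> Actor:
    """Follow delegations down to the actor actually playing the role."""
    return role if role.delegated_to is None else effective_actor(role.delegated_to)

assert effective_actor(controller) is compliance_manager
```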

It must be stressed that the semantic distinction between identified agents (e.g natural persons) and the roles (aka UML actors) they play in processes is of particular importance for GDPR compliance because who (or even what) is behind an actor interacting with a system remains unknown to the system until the actor can be authenticated. If that ontological gap is overlooked there is no way to define and deal with security, confidentiality, or privacy regulations.

Conclusion

The use of ontologies brings clear benefits for regulators, enterprise governance, and systems architects.

Without shared conceptual guidelines, chances are that the European regulatory orchestra will get lost in squabbles about minutiae before sliding into cacophony.

With regard to governance, bringing systems and regulations into a common conceptual framework is to enable clear and consistent compliance strategies and policies, as well as smooth learning curves.

With regard to architects, ontology-based compliance is to bring cross benefits and externalities, e.g from improved traceability and transparency of systems and applications.

Annex A: Mapping Regulations to Models (sample)

To begin with, OWL 2 can be used to map GDPR categories to relevant resources as managed by information systems:

  • Equivalence: GDPR and enterprise definitions coincide.
  • Logical intersection, union, complement: GDPR categories defined by, respectively, a cross, merge, or difference of enterprise definitions.
  • Qualified association between GDPR and enterprise categories.

Assuming the categories properly identified, the language can then be employed to define the sets of regulated instances:

  • Logical property restrictions, using existential and universal quantification.
  • Functional property restrictions, using joins on attribute values.

Other constructs, e.g cardinality or enumerations, could also be used for specific regulatory constraints.
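
Continuing the rdflib sketch above (namespaces and property names remain hypothetical), a regulated set of instances could be defined with a property restriction, here an existential quantification:

```python
from rdflib import Graph, Namespace, BNode
from rdflib.namespace import OWL, RDF

GDPR = Namespace("http://example.org/gdpr#")        # hypothetical namespace
ENT = Namespace("http://example.org/enterprise#")   # hypothetical namespace

g = Graph()

# existential quantification: records having at least one value for the
# (hypothetical) 'relatesTo' property ranging over natural persons
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, ENT.relatesTo))
g.add((restriction, OWL.someValuesFrom, ENT.NaturalPerson))
g.add((GDPR.PersonalData, OWL.equivalentClass, restriction))

# built-in characteristics, e.g. transitivity, can further qualify such properties
g.add((ENT.relatesTo, RDF.type, OWL.TransitiveProperty))

print(g.serialize(format="turtle"))
```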

Finally, some OWL 2 built-in mechanisms can significantly improve the assessment of alternative compliance policies by expounding regulations with regard to:

  • Equivalence, overlap, or complementarity.
  • Symmetry or asymmetry.
  • Transitivity
  • etc.

Annex B: Mapping Regulations to Capabilities

GDPR can be mapped to systems capabilities using the well-established Zachman taxonomy, set by crossing architecture functionalities (Who, What, How, Where, When) with layers: business and organization, systems (logical structures and functionalities), and platforms (technologies).

Regulatory Compliance vs Architectures Capabilities

These layers can be extended so as to apply uniformly across external ontologies, from well-defined (e.g regulations) to fuzzy (e.g business prospects or new technologies) ones, e.g:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

Such mapping is to significantly enhance the transparency of regulatory policies.
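
A plain lookup could sketch that crossing (the cell contents are illustrative assumptions): capabilities for columns, architecture layers for rows.

```python
# Regulatory compliance crossed with architecture capabilities (illustrative content only)
capabilities = ["Who", "What", "How", "When", "Where"]
layers = ["enterprise", "systems", "platforms"]

gdpr_map = {
    ("enterprise", "Who"):  "data subjects, controllers, processors, DPO",
    ("enterprise", "What"): "personal data as defined by the regulation",
    ("systems", "What"):    "symbolic representations holding personal data",
    ("systems", "When"):    "notification deadlines mapped to system events",
    ("platforms", "Where"): "storage locations subject to transfer rules",
}

def coverage(mapping: dict) -> float:
    """Share of the layers/capabilities grid documented by compliance artifacts."""
    return len(mapping) / (len(layers) * len(capabilities))

print(f"{coverage(gdpr_map):.0%} of the grid documented")
```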

Further Reading

External Links

Focus: Individuals in Models

Preamble

Models are meant to characterize categories of given (descriptive models), designed (prescriptive models), or hypothetical (predictive models) individuals.

Identity Matters (Terracotta Soldiers of Emperor Qin Shi Huang)

Insofar as purposes can be kept apart, the discrepancies in targeted individuals can be ironed out, notably by using power-types. Otherwise, e.g if modeling concerns mix business analysis with software engineering, meta-models are introduced as jack-of-all-trades to deal with mixed semantics.

But meta-models generate exponential complexity when used across domains, not to mention the open and fuzzy ones of business intelligence. What is at stake can be better understood through the way individuals are identified and represented.

Partitions & Abstraction

Since models are meant to classify instances with regard to concerns, mixing concerns is to entail mixed classifications, horizontally across domains (e.g business and accounting), or vertically along engineering cycles (e.g business and engineering).

That can be achieved with power-types, meta-classes, or ontologies.

Delegation & Power-types

Given that categories (or classes or types) represent sets of instances (given, designed, or simulated), they may by themselves be regarded as symbolic instances used to manage features commonly valued by their own members.

Power-types are simultaneously categories and instances.

This approach is consistent as well as effective provided the semantics and identities of instances are shared by all agents concerned, e.g business and technical aspects of car rentals. Yet it falls short on both accounts when abstraction levels are set across domains, inducing connectors with different semantics and increased complexity.
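
A small sketch of a power-type in the car rental case (names are hypothetical): the CarModel class is at the same time a category of cars and a symbolic instance managing features commonly valued by its members.

```python
from dataclasses import dataclass, field

@dataclass
class CarModel:
    """Power-type: a category of cars, itself handled as a symbolic instance."""
    name: str             # e.g. 'Clio IV'
    daily_rate: float     # feature commonly valued by all member cars
    members: list = field(default_factory=list)

@dataclass
class Car:
    """Individual car, identified on its own (e.g. by VIN), member of a model."""
    vin: str
    model: CarModel

    def rental_price(self, days: int) -> float:
        return days * self.model.daily_rate   # feature managed at power-type level

clio = CarModel(name="Clio IV", daily_rate=40.0)
car = Car(vin="VIN-123", model=clio)
clio.members.append(car)
assert car.rental_price(3) == 120.0
```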

Abstraction & Sub-classes

Sub-classes often appear as a way to overcome the difficulty, as illustrated by the Hepp Research’s Vehicle Sales Ontology: despite being set at different abstraction levels, instances for cars and models are defined and identified uniformly.


While, in that case, the model fulfills the substitution principle for external consistency, sub-types would fall short if engineering concerns were to be taken into account, because the set of individual car models could then differ depending on the perspective.

Meta-classes & Stereotypes

The overuse of meta-classes and stereotypes epitomizes the escapism school of modeling. That may be understandable, if not helpful, for the former which is by nature a free pass to abstraction; less so for the latter which is supposed to go the other way toward the specialization of meta-classes according to specific profiles. It ensues that stereotypes should never be used on their own or be extended by another stereotype.

As it happens, such consistency concerns appear to be easily diluted when stereotypes are jumbled with meta constructs; e.g:

  • Abstract stereotype (a).
  • Strong (aka class) inheritance of abstract stereotype from concrete meta-class (b).
  • Weak inheritance (aka aspect) between stereotypes (c, f).
  • Meta-constraint used to map Vehicle to methodology stereotype (d).
  • Domain specific connector to stereotype (e).


Without comprehensive and consistent semantics for instances and abstractions, individuals cannot be solidly mapped to models:

  • Individual concepts (car, horse, boat, …) have no clear mooring.
  • Actual vehicles cannot be tied to the meta-class or the abstract stereotype.
  • There is no strong inheritance tying individual models and rental cars to an identification mechanism.

Assuming that detachment is not an option, the basis of models must be reset.

Ontological Stereotypes

Put in simple words, ontologies are meant to examine the nature and categories of existence, in general (metaphysics) or in specific contexts. As for the latter, they can be applied to individuals according to the nature of their existence (aka epistemic identity): concepts, documents, categories and aspects.


As it happens, sorting things out makes the whole paraphernalia of meta-classes and stereotypes no longer needed because the semantics of inheritance and associations is set by the nature of individuals.

With individuals solidly rooted in targeted domains, models can then fully serve their purposes.

What’s the Point

It’s worth to remind that models have to be built on purpose, which cannot be achieved without a clear understanding of context and targets. Here are some examples:

Quality arguably comes first:

  • External consistency: to be checked by mapping individuals in analysis categories to big data.
  • Internal consistency: to be checked by mapping individuals in design classes to observed or simulated run-time components.
  • Acceptance: automated tests generation.

Then, individuals could be used to map models to past and future, the former for refactoring, the latter for business intelligence.

Refactoring looks to the past but is frustrated by undocumented legacy code. Combined with machine learning, individuals could help to bridge the gap between code and models.

Business Intelligence looks the other way as it is meant to map hypothetical business objects and behaviors to the structures and semantics already managed by information systems. In a reversal of the legacy case, individuals could provide cues to new business meanings.

Finally, maturity assessment and optimization of enterprise processes fully depend on the reliability of their basis.

Further Reading