Stories In Logosphere

As championed by a brave writer, should we see the Web as a crib for born-again narratives, or as a crypt for redundant texts?

Crib or Crypt (Melik Ohanian)

Once Upon A Time

Borrowing from Einstein, “the only reason for time is so that everything doesn’t happen at once.” That befits narratives: whatever the tale or the way it is conveyed, stories take time. Even if nothing happens, a story must be spelt out in tempo, and can only be listened to or read one step at a time.

In So Many Words

Stories were told before they were written, which is why their fabric is made of words, and their motifs woven by natural languages. So, even if illustrations may adorn printed narratives, the magic of stories comes from the music of their words.

A Will To Believe

To enjoy a story, listeners or readers must detach their minds from what they believe about reality, replacing dependable and well-worn representations with new and untested ones, however shaky or preposterous they may be; and that has to be done through an act of will.

Stories are make-believe: as with art in general, their magic depends on the suspension of disbelief. But suspension is not abolition; while deeply submerged in stories, listeners and readers maintain some inward track to the beliefs they left before diving; wandering a cognitive fold between surface truths and submarine untruths, they seem to rely on a secure if invisible tether to the reality they know. On that account, the possibility of an alternative reality could transform a comforting fold into a menacing abyss, dissolving their lifeline to beliefs. That could happen to stories told through the web.

Stories & Medium

Assuming their tempo was rendered, stories were not supposed to be affected by their medium; that was until McLuhan suggested that the medium takes over the message. Half a century later, the internet and the Web are bearing out that foreboding in earnest by melting texts into multimedia documents.

Tweets and the Short Message Service (SMS) offer a perfect illustration of the fading of text-driven communication, which has evolved from concise text messaging (160 characters) to video sharing.

That didn’t happen by chance but reflects the intrinsically visual nature of web contents, with dire consequences for texts: once lording it over entourages of media, they are being overthrown and reduced to simple attachments, just a peg above facsimile. But then, demoting texts to strings of characters makes natural languages redundant, to be replaced by a web Esperanto.

Web Semantic Silos

With medium taking over messages, and texts downgraded to attachments, natural languages may lose their primacy for stories conveyed through the web, soon to be replaced by the so-called “semantic web”, supposedly a lingua franca encompassing the whole of internet contents.

As epitomized by the Web Ontology Language (OWL), the semantic web is based on a representation scheme made of two kinds of nodes, respectively for concepts (squares) and conceptual relations (circles).

Semantic graphs (aka conceptual networks) combine knowledge representation (blue, left) and domain specific semantics (green, center & right)

Concept nodes are meant to represent categories specific to domains (green, right); that tallies with the lexical level of natural languages.

Connection nodes are used to define two types of associations:

  • Semantically neutral constructs to be applied uniformly across domains; that tallies with the syntactic level of natural languages (blue, left).
  • Domain specific relationships between concepts; that tallies with the semantic level of natural languages (green, center).

The mingling of generic (syntactic) and specific (semantic) connectors induces a redundant complexity which grows exponentially when different domains are to be combined, due to overlapping semantics. Natural languages typically use pragmatics to deal with the issue; but since pragmatics scale poorly with exponential complexity, they are of limited use for the semantic web, which confines its effectiveness to silos of domain specific knowledge.
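
As a minimal sketch of the point (Python with rdflib; the ex: namespace and all names are hypothetical), the toy graph below mixes generic constructs such as rdf:type and rdfs:domain (blue, syntactic) with a domain-specific relation (green, semantic); every new domain brings its own relations, yet all have to be wired through the same generic connectors:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/warpeace#")
g = Graph()
g.bind("ex", EX)

# Domain-specific concept (green): a category from the domain of discourse.
g.add((EX.Person, RDF.type, OWL.Class))

# Domain-specific relation (green): its meaning is local to the domain...
g.add((EX.marriedTo, RDF.type, OWL.ObjectProperty))
# ...but it is wired in through generic, semantically neutral constructs (blue).
g.add((EX.marriedTo, RDFS.domain, EX.Person))
g.add((EX.marriedTo, RDFS.range, EX.Person))

# Individuals (lexical level), again tied in through the generic rdf:type.
g.add((EX.Pierre, RDF.type, EX.Person))
g.add((EX.Natasha, RDF.type, EX.Person))
g.add((EX.Pierre, EX.marriedTo, EX.Natasha))

print(g.serialize(format="turtle"))
```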

Natural Language Pragmatics As Bridges Across Domain Specific Silos

But semantic silos are probably not the best nurturing ground for stories.

Stories In Cobwebs

Taking for granted that text, time, and suspension of disbelief are the pillars of stories, their future on the web looks gloomy:

  • Texts have no status of their own on the web, but only appear as parts of documents, one medium among others.
  • Stories can bypass web practice by being retrieved before being read as texts or viewed as movies; but whenever they are “browsed” their intrinsic time-frame and tempo are shattered, and so is their music.
  • If lying can be seen as an inborn human cognitive ability, it cannot be separated from its role in direct social communication; such interactive background should also account for the transient beliefs in fictional stories. But lies detached from a live context and planted on the web are different beasts, standing on their own and bereft of any truth currency that could separate actual lies from fictional ones.

That depressing perspective is borne out by the tools supposed to give a new edge to text processing:

Hyper-links are part and parcel of the internet’s original text processing. But as far and long as stories go, introducing links (hardwired or generated) hands narrative threads over to readers, thereby transforming them into “entertextment” consumers.

Machine learning can do wonders mining explicit and implicit meanings from the whole of past and present written and even spoken discourses. But digging stories out is more about anthropology or literary criticism than about creative writing.

As for the semantic web, it may work as a cobweb: by getting rid of pragmatics, deliberately or otherwise, it disables narratives by disengaging them from their contexts, cutting them out in one stroke from their original meaning, tempo, and social currency.

The Deconstruction of Stories

Curiously, what the web may do to stories seems to reenact a philosophical project which gained some favor in Europe during the second half of the last century. To begin with, the deconstruction philosophy was rooted in literary criticism, and its objective was to break the apparent homogeneity of narratives in order to examine the political, social, or ideological factors at play behind them. Soon enough, a core of upholders took aim at broader philosophical ambitions, using deconstruction to deny even the possibility of a truth currency.

With hindsight on the initial and ultimate purposes of the deconstruction project, the web and its semantic cobweb may be seen as the stories’ nemesis.


GDPR Ontological Primer

Preamble

The European Union’s General Data Protection Regulation (GDPR), coming into effect this month, is a seminal and momentous milestone for data privacy.

Nothing Personal (Arthur Szyk)

Yet, as reported by Reuters correspondents, European enterprises and regulators are not ready; more worryingly, few (except consultants) are confident about GDPR’s direction.

Misgivings and uncertainties should come as no surprise considering GDPR’s two innate challenges:

  • Regulating privacy rights represents a very ambitious leap into a digital space now at the core of corporate business strategies.
  • Compliance will not be put under a single authority but be overseen by an assortment of national and regional authorities across the European Union.

On that account, ontologies appear as the best (if not the only) conceptual approach able to bring contexts (EU nations), concerns (business vs privacy), and enterprises (organization and systems) into a shared framework.

A workbench built with the Caminao ontological kernel is meant to explore the scope and benefits of that approach, with a beta version (Protégé/OWL 2) available for comments on the Stanford/Protégé portal using the link: Caminao Ontological Kernel (CaKe_GDPR).

Enterprise Architectures & Regulations

Compared to domain specific regulations, GDPR is a governance-oriented regulation set across business concerns and enterprise organization; but unlike similarly oriented ones like accounting, GDPR is aiming at the nexus of business competition, namely the processing of data into information and knowledge. With such a strategic stake, compliance is bound to become a game-changer cutting across business intelligence, production systems, and decision-making. Hence the need for an integrated, comprehensive, and consistent approach to the different dimensions involved:

  • Concepts upholding businesses, organizations, and regulations.
  • Documentation with regard to contexts and statutory basis.
  • Regulatory options and compliance assessments.
  • Enterprise systems architecture and operations.

Moreover, as for most projects affecting enterprise architectures, carrying through GDPR compliance will involve continuous, deep, and wide-ranging changes that will have to be brought off without affecting overall enterprise performance.

Ontologies arguably provide a conclusive solution to the problem, if only because there is no other way to bring code, models, documents, and concepts under a single roof. That could be achieved by using ontology profiles to frame GDPR categories along enterprise architecture models and components.

Basic GDPR categories and concepts (black color) as framed by the Caminao Kernel

Compliance implementation could then be carried out iteratively across four perspectives:

  • Personal data and managed information.
  • Lawfulness of activities.
  • Time and events.
  • Actors and organization.

Data & Information

To begin with, GDPR defines ‘personal data’ as “any information relating to an identified or identifiable natural person (‘data subject’)”. Insofar as logic is concerned, that definition implies an equivalence between ‘data’ and ‘information’, an assumption clearly challenged by the onslaught of big data: if proof were needed, the Cambridge Analytica episode demonstrates how easily raw data can become a personal affair. Hence the need to keep an ontological level of indirection between regulatory intents and the actual semantics of data as managed by information systems.

Managing the ontological gap between regulatory understandings and compliance footprints

Once lexical ambiguities are set apart, the question is not so much about databases of well-identified records as about the flows of data continuously processed: if identities and ownership are usually set upfront by business processes, attributions may have to be credited to enterprises’ know-how if and when carried out through data analytics.

Given that the distinctions are neither uniform, exclusive, nor final, ontologies will be needed to keep tabs on moves and motives. OWL 2 constructs (cf. annex) could also help, first to map GDPR categories to relevant information managed by systems, second to sort out natural data from nurtured knowledge.
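
As a minimal sketch of that indirection (Python with rdflib; the gdpr: and sys: namespaces and class names are hypothetical), GDPR categories are kept separate from the categories actually managed by systems, with the mapping recorded as explicit, revisable links:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

GDPR = Namespace("http://example.org/gdpr#")    # regulatory intents
SYS = Namespace("http://example.org/systems#")  # managed information
g = Graph()

g.add((GDPR.PersonalData, RDF.type, OWL.Class))
g.add((SYS.CustomerRecord, RDF.type, OWL.Class))
g.add((SYS.BrowsingTrace, RDF.type, OWL.Class))

# Natural data: customer records are personal data by definition.
g.add((SYS.CustomerRecord, RDFS.subClassOf, GDPR.PersonalData))

# Nurtured knowledge: raw traces only fall under the regulation once
# analytics make them attributable to a data subject.
g.add((SYS.AttributedTrace, RDFS.subClassOf, SYS.BrowsingTrace))
g.add((SYS.AttributedTrace, RDFS.subClassOf, GDPR.PersonalData))
```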

Activities & Purposes

Given footprints of personal data, the objective is to ensure the transparency and traceability of the processing activities subject to compliance.

Setting apart specific add-ons for notification and personal accesses (see below for events), charting compliance footprints is to be a complex endeavor: as there is no reason to assume some innate alignment of intended (regulation) and actual (enterprise) definitions, deciding where and when compliance should apply potentially calls for a review of all processing activities.

After taking into account the nature of activities, their lawfulness is to be determined by contexts (‘purpose limitation’ and ‘data minimization’) and time-frames (‘accuracy’ and ‘storage limitation’). And since lawfulness is meant to be transitive, a comprehensive map of the GDPR footprint has to rely on the logical traceability and transparency of information systems as a whole, independently of GDPR.

That is arguably a challenging long-term endeavor, all the more so given that some kind of Chinese Wall has to be maintained around enterprise strategies, know-how, and operations. It ensues that an ontological level of indirection is again necessary between regulatory intents and effective processing activities.

Along that reasoning, compliance categories, defined on their own, are first mapped to categories of functionalities (e.g authorization) or models (e.g use cases).

Compliance categories are associated upfront to categories of functionalities (e.g authorization) or models (e.g use cases).

Then, actual activities (e.g “rateCustomerCredit”) can be progressively brought into the compliance fold, either through direct associations with regulations, or indirectly through associated models (e.g the “ucRateCustomerCredit” use case).

Compliance as carried out through Use Case

The compliance backbone can be fleshed out using OWL 2 mechanisms (see annex, and the sketch after the list) in order to:

  • Clarify the logical or functional dependencies between processing activities subject to compliance.
  • Qualify their lawfulness.
  • Draw equivalence, logical, or functional links between compliance alternatives.
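
A minimal sketch of that backbone (Python with rdflib, reusing the names from the example above; the properties modeledBy and lawfulBasis are hypothetical) ties the rateCustomerCredit activity to compliance through its use case:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, OWL

EX = Namespace("http://example.org/enterprise#")
GDPR = Namespace("http://example.org/gdpr#")
g = Graph()

# Compliance category mapped upfront to a category of models (use cases).
g.add((GDPR.RegulatedProcessing, RDF.type, OWL.Class))
g.add((EX.UseCase, RDF.type, OWL.Class))

# The actual activity and its model.
g.add((EX.ucRateCustomerCredit, RDF.type, EX.UseCase))
g.add((EX.rateCustomerCredit, EX.modeledBy, EX.ucRateCustomerCredit))

# Indirect compliance: the use case, not the activity, carries the
# association with the regulation and qualifies its lawfulness.
g.add((EX.ucRateCustomerCredit, RDF.type, GDPR.RegulatedProcessing))
g.add((EX.ucRateCustomerCredit, GDPR.lawfulBasis, Literal("contract")))
```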

That is to deal with the functional compliance of processing activities; but the most far-reaching impact of the regulation may come from the way time and events are taken into account.

Time & Events

As noted above, time is what makes the difference between data and information, and setting rules for notification makes that difference lawful. Moreover, by adding time constraints to the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones. That apparently incidental departure echoes the immersion of systems into digitized business environments, making all time-scales equal whatever their nature. Such flattening is to induce crucial consequences for enterprise architectures.

That shift, together with the regulatory intent, is best taken into account by modeling events as changes in expectations, physical objects, processes execution, and symbolic objects, with personal data changes belonging to the latter.

Mapping internal (symbolic) and external (actual) events is a critical element of GDPR compliance

Putting apart events specific to GDPR (e.g data breaches), compliance with regard to accuracy and storage limitation regulations will require that all events affecting personal data:

  • Are set in time-frames, possibly overlapping.
  • Have notification constraints properly documented.
  • Have likelihood and costs of potential risks assessed.

As with data and activities, OWL 2 constructs are to be used to qualify compliance requirements.
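
A minimal sketch of those requirements (plain Python; the class and field names are hypothetical, while the 72-hour constraint is the breach-notification deadline set by GDPR Article 33):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PersonalDataEvent:
    kind: str                  # e.g. data breach, consent withdrawal
    occurred_at: datetime      # external (actual) time-scale
    recorded_at: datetime      # internal (symbolic) time-scale
    notify_within: timedelta   # documented notification constraint
    risk_likelihood: float     # assessed likelihood of potential risks
    risk_cost: float           # assessed cost of potential risks

    def notification_deadline(self) -> datetime:
        # The regulatory clock starts on awareness of the event, not on
        # its actual occurrence; hence the two time-frames may overlap.
        return self.recorded_at + self.notify_within

breach = PersonalDataEvent("data breach",
                           occurred_at=datetime(2018, 5, 25, 8, 0),
                           recorded_at=datetime(2018, 5, 25, 9, 30),
                           notify_within=timedelta(hours=72),
                           risk_likelihood=0.3, risk_cost=1e5)
print(breach.notification_deadline())  # 2018-05-28 09:30:00
```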

Actors & Organization

GDPR introduces two specific categories of actors (aka roles): one (data subject) for natural persons, and one for actors set by organizations, either specifically for GDPR assignment, or by delegation to already defined actors.

GDPR roles can be set specifically or delegated

OWL 2 can then be used to detail how regulatory roles can be delegated to existing ones, enabling a smooth transition and a dynamic adjustment of enterprise organization with regulatory compliance.
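
A minimal sketch of such delegation (Python with rdflib; the gdpr: and org: names are hypothetical):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

GDPR = Namespace("http://example.org/gdpr#")
ORG = Namespace("http://example.org/organization#")
g = Graph()

# A GDPR-specific role and an actor already defined by the organization.
g.add((GDPR.DataProtectionOfficer, RDF.type, OWL.Class))
g.add((ORG.ComplianceManager, RDF.type, OWL.Class))

# Delegation: compliance managers also play the regulatory role, so the
# organization can adjust dynamically without redefining its own actors.
g.add((ORG.ComplianceManager, RDFS.subClassOf, GDPR.DataProtectionOfficer))
```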

It must be stressed that the semantic distinction between identified agents (e.g natural persons) and the roles (aka UML actors) they play in processes is of particular importance for GDPR compliance, because who (or even what) is behind an actor interacting with a system remains unknown to the system until the actor has been authenticated. If that ontological lapse is overlooked, there is no way to define and deal with security, confidentiality, or privacy regulations.

Conclusion

The use of ontologies brings clear benefits for regulators, enterprise governance, and systems architects.

Without shared conceptual guidelines, chances are that the European regulatory orchestra will get lost in squabbles about minutiae before sliding into cacophony.

With regard to governance, bringing systems and regulations into a common conceptual framework is to enable clear and consistent compliance strategies and policies, as well as smooth learning curves.

With regard to architects, ontology-based compliance is to bring cross benefits and externalities, e.g from improved traceability and transparency of systems and applications.

Annex A: Mapping Regulations to Models (sample)

To begin with, OWL 2 can be used to map GDPR categories to relevant resources as managed by information systems:

  • Equivalence: GDPR and enterprise definitions coincide.
  • Logical intersection, union, complement: GDPR categories defined by, respectively, a cross, merge, or difference of enterprise definitions.
  • Qualified association between GDPR and enterprise categories.

Assuming the categories are properly identified, the language can then be employed to define the sets of regulated instances:

  • Logical property restrictions, using existential and universal quantification.
  • Functional property restrictions, using joins on attribute values.

Other constructs, e.g cardinality or enumerations, could also be used for specific regulatory constraints.
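
A minimal sketch of such mappings (Python with owlready2, the library behind many Protégé workflows; all class and property names are hypothetical) shows a logical union and an existential property restriction from the lists above:

```python
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/cake_gdpr.owl")

with onto:
    class EnterpriseRecord(Thing): pass
    class CustomerFile(EnterpriseRecord): pass
    class ProspectFile(EnterpriseRecord): pass
    class NaturalPerson(Thing): pass

    class concerns(ObjectProperty):
        domain = [EnterpriseRecord]
        range = [NaturalPerson]

    # Logical union: a GDPR category defined by merging enterprise ones.
    class DataSubjectRecord(Thing):
        equivalent_to = [CustomerFile | ProspectFile]

    # Existential property restriction: regulated instances are the
    # enterprise records concerning at least one natural person.
    class PersonalData(Thing):
        equivalent_to = [EnterpriseRecord & concerns.some(NaturalPerson)]
```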

Finally, some OWL 2 built-in mechanisms can significantly improve the assessment of alternative compliance policies by expounding regulations with regard to:

  • Equivalence, overlap, or complementarity.
  • Symmetry or asymmetry.
  • Transitivity.
  • etc.

Annex B: Mapping Regulations to Capabilities

GDPR can be mapped to systems capabilities using the well-established Zachman taxonomy, set by crossing architecture capabilities (Who, What, How, Where, When) with layers: enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies).

Regulatory Compliance vs Architectures Capabilities

These layers can be extended so as to apply uniformly across external ontologies, from well-defined ones (e.g regulations) to fuzzy ones (e.g business prospects or new technologies), e.g:

Ontologies, capabilities (Who, What, How, Where, When), and architectures (enterprise, systems, platforms).

Such mapping is to significantly enhance the transparency of regulatory policies.
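
A minimal sketch of that crossing (plain Python; the cell entries are illustrative assumptions, not actual GDPR clauses):

```python
LAYERS = ("enterprise", "systems", "platforms")
CAPABILITIES = ("Who", "What", "How", "Where", "When")

# The grid crosses architecture layers with capabilities; each cell lists
# the regulatory concerns mapped to it.
footprint = {(layer, cap): [] for layer in LAYERS for cap in CAPABILITIES}
footprint[("enterprise", "Who")].append("data subjects, controllers, processors")
footprint[("systems", "What")].append("surrogates of personal data")
footprint[("systems", "When")].append("notification time-frames")
footprint[("platforms", "Where")].append("storage locations and transfers")

for (layer, cap), concerns in sorted(footprint.items()):
    if concerns:
        print(f"{layer:>10} / {cap:<5}: {'; '.join(concerns)}")
```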


Focus: Requirements Reuse

Preamble

Requirements are what feed engineering processes. As such they come in a wide range of forms, and nothing should be assumed upfront about their form or semantics.

What is to be reused: Sketches or Models? (John Devlin)

Answering the question of reuse therefore depends on what is to be reused, and for what purpose.

Documentation vs Reuse

Until some analysis can be carried out, requirements are best seen as documents; whether such documents are to be ephemeral or managed would be decided depending on method (agile or phased), contents (business, supporting systems, implementation, or quality of service), or purpose (e.g governance, regulations, etc).

What is to be reused.

Setting apart external conditions, requirements documentation could be justified by:

  • Traceability of decision-making linking initial requests with actual implementation.
  • Acceptance.
  • Maintenance of deliverables during their life-cycle.

Depending on development approaches, documentation could be limited to archives (agile development models) or managed as intermediate products (phased development models). In the latter case reuse would entail some formatting of requirements.

The Cases for Requirements Reuse

Assuming that requirements have been properly formatted, e.g as analysis models (with technical ones managed internally at system level), reuse could be justified by changes in business, functional, or quality of service requirements:

  • Business processes are meant to change with opportunities. With requirements available as analysis models, changes would be more easily managed (a) if they could be fine-grained. Business rules are a clear example, but that could also be the case for new features added to business objects.
  • Functional requirements may change even without change of business ones, e.g if new channels and users are introduced addressing existing business functions. In that case reusable business requirements (b) would dispense with a repeat of business analysis.
  • Finally, quality of service could be affected by operational changes like localization, number of users, volumes, or frequency. Adjusting architecture capabilities would be much easier with functional (c) and business (d) requirements properly documented as analysis models.
Cases for Reuse

Along that perspective, requirements reuse appears to revolve around two pivots, documents and analysis models. Ontologies could be used to bind them.

Requirements & Ontologies

Reusing artifacts means using them in contexts or for purposes different from their native ones. That may come by design, when specifications can anticipate shared concerns, or as an afterthought, when initially unexpected similarities are identified later on. In any case, reuse policies have to overcome a twofold difficulty:

  • Visibility: business and functional analysts must be made aware of potential reuse without having to spend too much time on research.
  • Overheads: ensuring transparency, traceability, and consistency checks on requirements (documents or analysis models) cannot be achieved without costs.

Ontologies could help to achieve greater visibility with acceptable overheads by framing requirements with regard to nature (documents or models) and context:

With regard to nature, the critical distinction is between document management and model based engineering systems. When framed as ontologies, the former are to be implemented as thesauruses targeting terms and documents, the latter as ontologies targeting categories specific to organizations and business domains.

Documents, models, and capabilities should be managed separately

With regard to context the objective should be to manage reusable requirements depending on the kind of jurisdiction and stability of categories, e.g:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accord.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).
Combining contexts of reuse with architecture layers (enterprise, systems, platforms) and capabilities (Who, What, How, Where, When).
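
A minimal sketch of such framing (plain Python; the repository entries and field names are hypothetical):

```python
from dataclasses import dataclass
from enum import Enum

class Nature(Enum):
    DOCUMENT = "document"   # managed by a thesaurus: terms and documents
    MODEL = "model"         # managed by an ontology: domain categories

class Context(Enum):
    INSTITUTIONAL = "institutional"  # regulatory authority, steady
    PROFESSIONAL = "professional"    # agreed between parties, steady
    CORPORATE = "corporate"          # enterprise decision-making
    SOCIAL = "social"                # defined by usage, volatile
    PERSONAL = "personal"            # customary, named individuals

@dataclass
class ReusableRequirement:
    name: str
    nature: Nature
    context: Context
    layer: str   # enterprise, systems, or platforms

repository = [
    ReusableRequirement("credit rating rules", Nature.MODEL,
                        Context.INSTITUTIONAL, "enterprise"),
    ReusableRequirement("ucRateCustomerCredit", Nature.MODEL,
                        Context.CORPORATE, "systems"),
]

# Visibility with acceptable overheads: analysts filter by frame instead
# of searching through raw documents.
hits = [r.name for r in repository
        if r.context is Context.CORPORATE and r.layer == "systems"]
print(hits)   # ['ucRateCustomerCredit']
```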

Combined with artificial intelligence, ontology archetypes could crucially extend the benefits of requirements reuse, notably through the impact of deep learning on visibility.

From a broader perspective, requirements should be seen as a source of knowledge, and their reuse managed accordingly.


Healthcare: Tracks & Stakes

Preamble

Healthcare represents at least a tenth of developed countries’ GDP, with demography pushing it to higher levels year after year. In principle technology could drive costs in both directions; in practice it has worked like a ratchet: upside, innovations are extending the scope of expensive treatments; downside, institutional and regulatory constraints have hamstrung the necessary mutations of organizations and processes.

Health Care Personal Assistant (Kerry James Marshall)

As a result, attempts to spread technology benefits across healthcare activities have dwindled or melted away, even when buttressed by major players like Google or Microsoft.

But built-up pressures on budgets combined with social transformations have undermined bureaucratic barriers and incumbents’ estates, spurring initiatives from all corners: pharmaceutical giants, technology startups, healthcare providers, insurers, and of course major IT companies.

Yet the wide range of players’ fields and starting lines may be misleading; incumbents and newcomers alike are well aware of what the race is about: whatever the number of initial track lanes, they are to fade away after a few laps, spurring the front-runners to cover the whole track, alone or through partnerships. As a consequence, winning strategies would have to be supported by a comprehensive and coherent understanding of all healthcare aspects and issues, which can be best achieved with ontologies.

Ontologies vs Models

Ontologies are symbolic constructs (epitomized by conceptual graphs made of nodes and connectors) whose purpose is to make sense of a domain of discourse:

  1. Ontologies are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
  2. Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
  3. Ontologies are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.

That makes ontologies a special case of uncommitted models: like models they are set on contexts and concerns; but contrary to models ontologies’ concerns are detached from actual purposes. That is precisely what is expected from a healthcare conceptual framework.

Contexts & Business Domains

Healthcare issues are set across too many domains to be effectively fathomed, not to mention followed as they change. Notwithstanding, global players must anchor their strategies to different institutional contexts, and frame their policies so as to make them transparent and attractive to other players. Such all-inclusive frameworks could be built from ontologies profiled with regard to the governance and stability of contexts:

  • Institutional: Regulatory authority, steady, changes subject to established procedures.
  • Professional: Agreed upon between parties, steady, changes subject to accord.
  • Corporate: Defined by enterprises, changes subject to internal decision-making.
  • Social: Defined by usage, volatile, continuous and informal changes.
  • Personal: Customary, defined by named individuals (e.g research paper).

Ontologies set along that taxonomy of contexts could then be refined so as to target enterprise architecture layers (enterprise, systems, platforms), e.g:

A sample of Healthcare profiled ontologies

Depending on the scope and nature of partnerships, ontologies could be further detailed so as to encompass architecture capabilities: Who, What, How, Where, When.

Concerns & Architectures Capabilities

As pointed out above, a key success factor for major players would be their ability to federate initiatives and undertakings of both incumbents and newcomers, within or without partnerships. That can be best achieved with enterprise architectures aligned with an all-inclusive yet open framework, and for that purpose the Zachman taxonomy would be the option of choice. The corresponding enterprise architecture capabilities (Who, What, How, Where, When) could then be uniformly applied to contexts and concerns:

  • Internally across architecture layers for enterprise (business and organization), systems (logical structures and functionalities), and platforms (technologies).
  • Externally across context-based ontologies as proposed above.

The nexus between environments (contexts) and enterprises (concerns) ontologies could then be organised according to the epistemic nature of items: terms, documents, symbolic representations (aka surrogates), or business objects and phenomena.

Mapping knowledge to architecture capabilities

That would outline four basic ontological archetypes that may or may not be combined:

  • Thesaurus: ontologies covering terms, concepts.
  • Document Management: thesaurus and documents.
  • Organization and Business: ontologies pertaining to enterprise organization and business processes.
  • Engineering: ontologies pertaining to the symbolic representation (aka surrogates) of organizations, businesses, and systems.

Global healthcare players could then build federating frameworks by combining domain and architecture driven ontologies, e.g:

Building federating frameworks with modular ontologies designed on purpose.
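
A minimal sketch of such modularity (Python with rdflib; the namespace and profile names are hypothetical):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, OWL

EX = Namespace("http://example.org/healthcare#")

def profiled_ontology(context: str, layer: str) -> Graph:
    """One modular ontology per (context, architecture layer) profile."""
    g = Graph()
    g.add((EX[f"{context}_{layer}"], RDF.type, OWL.Ontology))
    return g

# Modular ontologies designed on purpose...
regulatory = profiled_ontology("institutional", "enterprise")
clinical = profiled_ontology("professional", "systems")

# ...combined into a federating framework without interfering with the
# design of partners' business processes or supporting systems.
framework = regulatory + clinical   # rdflib graphs merge with '+'
print(len(framework))               # 2 triples
```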

As a concluding remark, it must be kept in mind that the objective is to federate the activities and systems of healthcare players without interfering with the design of their business processes or supporting systems. Hence the importance of the distinction between ontologies and models introduced above, which acts as a guaranty that concerns are not mixed up insofar as ontologies remain uncommitted models.


Agile Collaboration & Enterprise Creativity

Open-plan offices and social networks are often seen as significant factors of collaboration and innovation, breeding and nurturing the creativity of knowledge workers, weaving their ideas into webs of truths, and molding their minds into some collective intelligence.


Trust & Communication (Juan Munoz)

Yet, as creativity comes with agility, knowledge workflows should give brains enough breathing space lest they get more pressure than pasture.

Collaboration & Thinking Flows

Collaboration is a means to an end. To be of any use exchanges have to be fed with renewed ideas and assumptions, triggering arguments and adjustments, and opening new perspectives. If not they may burn themselves out with hollow considerations blurring clues and expectations, clogging the channels, and finally stemming the thinking flows.

Taking example from lean manufacturing, the first objective should be to streamline knowledge workflows so as to eliminate swirling pools of squabbles, drain stagnant puddles of stale thoughts, and gear collaboration to flowing knowledge streams. As illustrated by flood irrigation, the first step is to identify basin levels.

Dunbar Numbers & Collaboration Basins

Studying the grooming habits of social primates, the psychologist Robin Dunbar came to the conclusion that the size of the social circles an individual of a given species can maintain is set by the size of the brain’s neocortex. Further studies have confirmed Dunbar’s findings, with the corresponding sizes for humans set around 10 for trusted personal groups and 150 for untried social ones. As it happens, and not by chance, those numbers seem to coincide with actual observations: the former for personal and direct collaboration, the latter for social and mediated collaboration.

Based on that understanding, the objective would be to organize knowledge workflows across two primary basins:

  • On-site and face-to-face collaboration with trusted co-workers. Corresponding interactions would be driven by personal dispositions and attitudes.
  • On-line and networked collaboration with workers, trusted or otherwise. Corresponding interactions would be based on shared interests and past exchanges.

Knowledge Workflows

The aim of knowledge workflows is to process data into information and put it to use. That is to be achieved by combining different kinds of tasks, in particular:

  • Data and information management: build the symbolic descriptions of contexts, concerns, and means.
  • Objectives management: based on a set of symbolic descriptions, identify and refine opportunities together with the ways to realize them.
  • Tasks management: allocate rights and responsibilities across organizations and collaboration frames, public and shallow or personal and deep.
  • Flows management: monitor and manage actual flows, publish arguments and propositions, consolidate decisions, …

Taking into account constraints and dependencies between the tasks, the aim would be to balance creativity and automation while eliminating superfluous intermediate products (like documents or models) or activities (e.g unfocused meetings).

With regard to dependencies, KM tasks are often intertwined and cannot be carried out sequentially; moreover, as illustrated by the impact of “creative accounting” on accounted activities, their overlapping is not frozen but subject to feedback, changes and adjustments.

With regard to automation, three groups are to be considered: the first requires only raw processing power and can be fully automated; the second also involves some intelligence that may be provided by smart systems; and the third calls for decision-making that can only be done by human agents entitled by the organization.

At first sight some lessons could be drawn from lean manufacturing, yet, since knowledge processes are not subject to hardware constraints, agile approaches should provide a more informative reference.

Iterative Knowledge Processing

A simple preliminary step is to check the applicability of agile principles by replacing “software” with “knowledge”. Assuming that ground is secured, the core undertaking is to consider what would become of cycles and iterations when applied to knowledge processing (a sketch follows the list):

  • Cycle invariants: tasks would be iterated on given sets of symbolic descriptions applied to the state of affairs (contexts, concerns, and means).
  • Iterations content: based on those descriptions data would be processed into information, changes would be monitored, and possibilities explored.
  • Exit condition: cycles would complete with decisions committing changes in the state of affairs that would also entail adjustments or changes in symbolic descriptions.
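
A minimal sketch of such a cycle (plain Python; the function and field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    commits_changes: bool
    adjusted_descriptions: list

def knowledge_cycle(descriptions, fetch, decide, max_iterations=10):
    """Iterate on a given set of symbolic descriptions (cycle invariants)."""
    for _ in range(max_iterations):
        # Iteration content: process data into information, monitoring the
        # state of affairs (contexts, concerns, and means).
        information = [(d, fetch(d)) for d in descriptions]
        # Exit condition: a decision committing changes in the state of
        # affairs, possibly entailing adjusted descriptions.
        decision = decide(information)
        if decision.commits_changes:
            return decision.adjusted_descriptions
    return descriptions

# Toy usage: knowledge workers (decide) stay in the driving seat.
result = knowledge_cycle(
    ["market trends"],
    fetch=lambda d: f"data for {d}",
    decide=lambda info: Decision(True, ["market trends", "new opportunity"]),
)
print(result)   # ['market trends', 'new opportunity']
```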

That scheme meets three of the basic tenets of the agile paradigm, i.e open scope (unknowns cannot be set in advance), continuity of delivery (invariants are defined and managed by knowledge workers), and users in driving seats (through exit conditions). Yet it still doesn’t deal with creativity and the benefits of collaboration for knowledge workers.

Thinking Space & Pace

The scope of creativity in processes is neatly circumscribed by the nature of flows, i.e the possibility to insert knowledge during the processing: external for material flows (e.g in manufacturing), internal for symbolic flows (e.g in software engineering and knowledge processing).

Yet, whereas both software engineering and knowledge processes come with some built-in capability to redefine their symbolic flows on the fly, they don’t grant the same room to creativity. Contrary to software engineering projects, which have to close their perspectives on the delivery of working products, knowledge processes are meant to keep them open to new understandings and opportunities. For the former creativity is a means to an end; for the latter it is the end in itself, with collaboration as the means.

Such opposite perspectives have direct consequences for two basic agile collaboration mechanisms, backlogs and time-boxing:

  • Backlogs are used to structure and manage the space under exploration. But contrary to software processes whose space is focused and structured by users’ needs, knowledge processes are supposed to play on workers’ creativity to expand and redefine the range under consideration.
  • Time-boxes are used to synchronize tasks. But with creativity entering the fray, neither space granularity nor thinking pace can be set in advance and coerced into single-sized boxes. In that case individuals must remain in full control of the contents and stride of their thinking streams.

It ensues that when creativity is the primary success factor, standard agile collaboration mechanisms fall short, and more intelligent collaboration schemes are to be introduced.

Creativity & Collaboration Tiers

The synchronization of creative activities has to deal with conflicting objectives:

  • On one hand the mental maps of knowledge workers and the stream of their thoughts have to be dynamically aligned.
  • On the other hand unsolicited face-to-face interactions or instant communications may significantly impair the course of creative thinking.

When activities, e.g software engineering, can be streamlined towards the delivery of clearly defined outcomes, backlogs and time-boxes can be used to harness workers’ creativity. When that’s not the case more sophisticated collaboration mechanisms are needed.

Assuming that mediated collaboration has a limited impact on thinking creativity (emails don’t have to be answered, or even presented, instantly), the objective is to steer knowledge workflows across a two-tiered collaboration framework: one personal and direct between knowledge workers, the other social and mediated through enterprise or institutional networks.

On the first tier knowledge workers would manage their thinking flows (content and tempo) independently, initiating or accepting personal collaboration (either through physical contact or some kind of instant messaging) depending on their respective “state of mind”.

The second tier would be for social collaboration and would be expected to replace backlogs and time-boxing. Proceeding from the first to the second tier would be conditioned by workers’ needs and expectations, triggered on their own initiative or following prompts.
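
A minimal sketch of that mediation (plain Python; the class and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeWorker:
    name: str
    accepting_interruptions: bool = False  # first tier: workers' own control
    inbox: list = field(default_factory=list)
    deferred: list = field(default_factory=list)

    def solicit(self, message: str):
        # Second tier: solicitations are filtered and timed depending on
        # the worker's "state of mind", instead of backlogs and time-boxes.
        if self.accepting_interruptions:
            self.inbox.append(message)
        else:
            self.deferred.append(message)

alice = KnowledgeWorker("Alice")
alice.solicit("review ontology draft")   # deferred: deep-thinking mode
alice.accepting_interruptions = True
alice.solicit("comment on proposition")  # delivered immediately
print(alice.inbox, alice.deferred)
```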

From Personal to Collective Thinking

The challenging issue is obviously to define and implement the mechanisms governing the exchanges between collaboration tiers, e.g:

  • How to keep tabs on topics and contents to be safeguarded.
  • How to mediate (i.e filter and time) the solicitations and contributions issued by the social tier.
  • How to assess the solicitations and contributions issued by individuals.
  • How to assess and manage knowledge deemed to remain proprietary.
  • How to identify and manage knowledge workers personal and social circles.

Whereas such issues are customarily tackled by various AI systems (knowledge management, decision-making, multi-player games, etc), taken as a whole they bring up the question of the relationship between personal and collective thinking, and as a corollary, the role of organization in nurturing corporate innovation.

Conclusion: Collaboration Spaces vs Panopticon

As illustrated by the rise of futuristic headquarters, leading technology firms have been trying to tackle these issues by redefining internal architecture as collaboration spaces. Compared to traditional open spaces, such approaches try to fuse physical and digital spaces into overlapping layers of collaboration spaces, using artificial intelligence to harness cooperation.

Yet, lest uniform and comprehensive transparency bring the worrying shadow of a panopticon within which everyone can be unknowingly observed, working spaces have to be designed so as to enhance collaboration without trespassing on privacy.

That could be achieved with a layered transparency set along the nature of collaboration:

  • Immediate and personal: working cells regrouping 5 to 10 workstations earmarked for a task and used interchangeably by team members.
  • Delayed and personal: open physical spaces accommodating working cells, with instant messaging and geo-localization; spaces are hinged on domains and focused on shared knowledge.
  • On-line and networked: digital spaces merging physical spaces and organizational structures.

That mix of physical and virtual spaces could be dynamically redefined depending on activities, projects, location, and organisation.


EA Documentation: Taking Words for Systems

In so many words

Given the clear-cut and unambiguous nature of software, how can one explain the plethora of “standard” definitions pertaining to systems, not to mention enterprises and architectures?

Documents and Systems: which ones nurture the others (Gilles Barbier).

Tentative answers can be found with reference to the core functions documents are meant to support: instrument of governance, medium of exchange, and content storage.

Instrument of Governance: the letter of the law

The primary role of documents is to support the continuity of corporate identity and activities with regard to their regulatory and business environments. Along that perspective documents are to receive legal tender for the definitions of parties (collective or individuals), roles, and contracts. Such documents are meant to support the letter of the law, whether set at government, industry, or corporate level. When set at corporate level that letter may be used to assess the capability and maturity of architectures, organizations, and processes. Whatever the level, and given their role for legal tender or assessment, those documents have to rely on formal textual definitions, possibly supplemented with models.

Medium of Exchange: the spirit of the law

Independently of their formal role, documents are used as medium of exchange, across corporate entities as well as internally between their organizational units. When freed from legal or governance duties, such documents don’t have to carry authorized or frozen interpretations and assorted meanings can be discussed and consolidated in line with the spirit of the law. That makes room for model-based documents standing on their own, with textual definitions possibly set in the background. Given the importance of direct discussions in the interpretation of their contents, documents used as medium of (immediate) exchange should not be confused with those used as means of storage (exchange along time).

Means of Storage: letter only

Whatever their customary functions, documents can be used to store contents to be reinstated at a later stage. In that case, and contrary to direct (aka immediate) exchange, interpretations cannot be consolidated through discussion but have to stand on the letter of the documents themselves. When set by regulatory or organizational processes, canonical interpretations can be retrieved from primary contexts, concerns, or pragmatics. But things can be more problematic when storage is performed for its own purpose, without formal reference context. That can be illustrated by legacy applications, whose binary code may be accompanied by self-documented source code, source with documentation, source with requirements, generated source with models, etc.

Documentation and Enterprise Architecture

Assuming that the governance of structured social organizations must be supported by comprehensive documentation, documents must be seen as a necessary and intrinsic component of enterprise architectures and their design should be aligned on concerns and capabilities.

As noted above, each of the basic functionalities comes with specific constraints; as a consequence a sound documentation policy should not mix functionalities. On that basis, documents should be defined by mapping purposes with users across enterprise architecture layers:

  • With regard to corporate environment, documentation requirements are set by legal constraints, directly (regulations and contracts) or indirectly (customary framework for transactions, traceability and audit).
  • With regard to organization, documents have to meet two different objectives. As a medium of exchange they are meant to support the collaboration between organizational units, both at business level (processes) and across architecture levels. As an instrument of governance they are used to assess architecture capabilities and processes performances. Documents supporting those objectives are best kept separate if negative side effects are to be avoided.
  • With regard to systems functionalities, documents can be introduced for procurements (governance), development (exchange), and change (storage).
  • Within systems, the objective is to support operational deployment and maintenance of software components.

Documents’ purposes and users

The next step will be to integrate documents pertaining to actual environments and organization (brown background) with those targeting symbolic artifacts (blue background).

Models are used to describe actual or symbolic objects and behaviors

That could be achieved with MBE/MDA approaches.


Focus: Rules & Architecture

Preamble

Rules can be seen as the glue holding together business, organization, and systems, and that may be a challenge for enterprise architects when changes are to be managed according to different concerns and different time-scales. Hence the importance of untangling rules upfront when requirements are captured and analysed.

How to outline the architectural footprint of rules (John Devlin)

Primary Taxonomy

As far as enterprise architecture is concerned, rules can be about:

  • Business and regulatory environments.
  • Enterprise objectives and organization.
  • Business processes and supporting systems.

That classification can be mapped to a logical one:

  • Rules set in business or regulatory environments are said to be deontic as they are to be met independently of enterprise governance. They must be enforced by symbolic representations if enterprise systems are to be aligned with environments.
  • Rules associated with objectives, organization, processes or systems are said to be alethic (aka modal) as they refer to possible, necessary or contingent conditions as defined by enterprise governance. They are to be directly applied to symbolic representations.

Whereas both are to be supported by systems, the loci will differ: system boundaries for deontic rules (coupling between environment and systems), system components for alethic ones (continuity and consistency of symbolic representations). Given the architectural consequences, rules should be organized depending on triggering (actual or symbolic) and scope (environment or enterprise):

  • Actual deontic rules are triggered by actual external events that must be processed synchronously.
  • Symbolic deontic rules are triggered by external events that may be processed asynchronously.
  • Actual alethic rules are triggered by business processes and must be processed synchronously.
  • Symbolic alethic rules are triggered by business processes and can be processed asynchronously.

Rules should be classified upfront with regard to triggering (actual or symbolic) and scope (environment or enterprise)

Footprint

The footprint of a rule is made of the categories of facts to be considered (aka rule domain), and categories of facts possibly affected (aka rule co-domain).

As far as systems are concerned, the first thing to do is to distinguish between actual contexts and symbolic representations. A naive understanding would assume rules to belong to either the actual or the symbolic realm. Given that the objective of modeling is to decide how the former should be represented by the latter, some grey imprints are to be expected, and dealt with using three categories of rules, one for each realm and the third set across the divide (sketched after the list):

  • Rules targeting actual contexts. They can be checked through sensors or applied by actuators. Since rules enforcement cannot be guaranteed on non symbolic artifacts, some rules will have to monitor infringements and proscribed configurations. Example: “Cars should be checked on return from each rental, and on transfer between branches.”
  • Rules targeting symbolic representations. Their enforcement is supposedly under the full control of system components. Example: “A car with accumulated mileage greater than 5000 since its last service must be scheduled for service.”
  • Rules defining how changes in actual contexts should impact symbolic representations: what is to be considered, where it should be observed, when it should be recorded, how it should be processed, who is to be authorized. Example: “Customers’ requests at branches for cars of particular models should be consolidated every day.”
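
A minimal sketch of the three categories (plain Python, reusing the car-rental examples; names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Car:
    mileage_since_service: int = 0
    checked_on_return: bool = False
    scheduled_for_service: bool = False

# (1) Rule targeting actual contexts: enforcement cannot be guaranteed on
# non-symbolic artifacts, so infringements have to be monitored.
def return_check_infringed(car: Car) -> bool:
    return not car.checked_on_return

# (2) Rule targeting symbolic representations: fully under system control.
def service_rule(car: Car) -> None:
    if car.mileage_since_service > 5000:
        car.scheduled_for_service = True

# (3) Rule set across the divide: changes in actual contexts (requests at
# branches) impact symbolic representations (daily consolidation).
def consolidate_requests(daily_requests: list) -> dict:
    consolidated = {}
    for model in daily_requests:
        consolidated[model] = consolidated.get(model, 0) + 1
    return consolidated

car = Car(mileage_since_service=5200, checked_on_return=True)
service_rule(car)
print(car.scheduled_for_service)                        # True
print(consolidate_requests(["sedan", "sedan", "van"]))  # {'sedan': 2, 'van': 1}
```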

Rules & Capabilities

That analysis should be carried out as soon as possible because rules set on the divide will determine the impact of requirements on architecture capabilities.

Semantics and Syntax

Rules’ footprints are charted by domains (what is to be considered) and co-domains (what is to be affected). Since footprints are defined by requirements’ semantics, the outcome shouldn’t be contingent on formats.

From an architecture perspective the critical distinction is between homogeneous and heterogeneous rules, the former with footprint on the same side of the actual/symbolic divide, the latter with a footprint set across.

Homogeneous vs Heterogeneous footprints

Contrary to footprints, the shape given to rules (aka their format or syntax) is to affect their execution. Assuming homogeneous footprints, four basic blueprints are available depending on the way domains (categories of instances to be valued) and co-domains (categories of instances possibly affected) are combined:

  • Partitions are expressions used to classify facts of a given category.
  • Constraints (backward rules) are conditions to be checked on facts: [domain].
  • Pull rules (static forward) are expressions used to modify facts: co-domain = [domain].
  • Push rules (dynamic forward) are expressions used to trigger the modification of facts: [domain] > co-domain.

Pull vs Push Rule Management
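
A minimal sketch of the four blueprints (plain Python, on the same rental-fleet examples; names are hypothetical):

```python
cars = [{"id": 1, "mileage": 5200, "due_service": False},
        {"id": 2, "mileage": 1200, "due_service": False}]

# Partition: classify facts of a given category.
def mileage_class(car) -> str:
    return "high-mileage" if car["mileage"] > 5000 else "low-mileage"

# Constraint (backward): a condition checked on facts, [domain].
def mileage_is_valid(car) -> bool:
    return car["mileage"] >= 0

# Pull rule (static forward): co-domain = [domain]; the value is derived
# on demand from the domain.
def due_service(car) -> bool:
    return car["mileage"] > 5000

# Push rule (dynamic forward): [domain] > co-domain; a change in the
# domain triggers the modification of the co-domain.
def on_mileage_update(car, new_mileage) -> None:
    car["mileage"] = new_mileage
    if car["mileage"] > 5000:
        car["due_service"] = True

on_mileage_update(cars[1], 5600)
print([mileage_class(c) for c in cars])  # ['high-mileage', 'high-mileage']
print(cars[1]["due_service"])            # True
```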

Anchors & Granularity

In principle, rules targeting different categories of facts are nothing more than well-formed expressions combining homogeneous ones. In practice, because they mix different kinds of artifacts, the way they are built is bound to significantly bear on architecture capabilities.

Systems are tied to environments by anchors, i.e objects and processes whose identity and consistency must be maintained during their life-cycle. Rules should therefore be attached to anchors’ facets so as to obtain footprints as fine-grained as possible:

Anchors’ facets

  • Features: domain and co-domain are limited to attributes or operations.
  • Object footprint: domain and co-domain are set within the limit of a uniquely identified instance (#), including composites and aggregates.
  • Connections: domain and co-domain are set by the connections between instances identified independently.
  • Collections: domain and co-domain are set between sets of instances and individual ones, including subsets defined by partitions.
  • Containers: domain and co-domain are set for whole systems.

While minimizing the scope of simple (homogeneous) rules is arguably a straightforward routine, alternative options may have to be considered for the transformation of joint (heterogeneous) statements, e.g when rules about functional dependencies may be attached either to (1) persistent representations of objects and associations, or (2) business applications.

Heterogeneous (joint) Footprints

Footprints set across different categories will usually leave room for alternative modeling options affecting the way rules will be executed, and therefore bearing differently on architecture capabilities.

Basic alternatives can be defined according to requirements taxonomy:

  • Business requirements: rules set at enterprise level that can be managed independently of the architecture.
  • System functionalities: rules set at system level whose support depends on architecture capabilities.
  • Quality of service: rules set at system level whose support depends on functional and technical architectures.
  • Operational constraints: rules set at platform level whose support depends on technical capabilities.

Rules do not necessarily fit into clear requirements taxonomy

While that classification may work fine for homogeneous rules (a), it may fall short for mixed ones, functional (b) or not (c). For instance:

  • “Gold Customers with requests for cars of particular models should be given an immediate answer.”
  • “Technical problems affecting security on checked cars must be notified immediately.”

As requirements go, rules interweaving business, functional, and non functional requirements are routine and their transformation should reflect how priorities are to be sorted out.

Moreover, if rule refactoring is to be carried out, there will be more than syntax and semantics to consider because almost every requirement can be expressed as a rule, often with alternative options. As a corollary, the modeling policies governing the making of rules should be set explicitly.

Sorting Out Mixed Rules

Taking into account that functional requirements describe how systems are meant to support business processes, some rules are bound to mix functional and business concerns. When that’s the case, preferences will have to be set with regard to:

  • Events vs Data: should system behavior be driven by changes in business context (as signaled by events from users, devices, or other systems), or by changes in symbolic representations.
  • Activities vs Data: should system behavior be governed by planned activities, or by the states of business objects.
  • Activities vs Events: should system behavior be governed by planned activities, or driven by changes in business context.

Taking the Gold Customer example, a logical rule (right) is not meant to affect the architecture, but expressed at control level (left) it points to communication capabilities.

How to express Gold customers’ rule may or may not point to communication capabilities.

The same questions arise for rules mixing functional requirements, quality of service, and operational constraints, e.g:

  • How to apportion response time constraints between endpoints, communication architecture, and applications.
  • How to apportion reliability constraints between application software and resources at locations.
  • How to apportion confidentiality constraints between entry points, communication architecture, and locations.

Those questions often arise with non-functional requirements and entail broader architectural issues, along with the divide between enterprise-wide and domain-specific capabilities.
