Separation of Concerns
From a knowledge perspective, systems usually distinguish between business concerns (what it’s about), software engineering (what is required), and operations (how to use it).
That functional view of information can be further detailed across enterprise architecture layers:
- Enterprise: information pertaining to the business value of resources, processes, projects, and operations.
- Systems: information pertaining to organization, systems functionalities, and services operations.
- Platforms: information pertaining to technical resources, quality of service, applications maintenance, and processes deployment and operations.
Yet, that taxonomy may fall short when enterprise systems are entwined with wholly digitized physical and social environments. Beyond marketing pitches, the reality of what Stanford University has labelled “Symbolic Systems” may be unfolding at the nexus between information systems and knowledge management.
In their pivotal article Davis, Shrobe, and Szolovits have firmly planted the five pillars of the bridge between knowledge representation and information systems:
- Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
- Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
- Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or what can be done with them.
- Medium for efficient computation: making knowledge understandable to computers is a necessary step for any learning curve.
- Medium for human expression: one of KR’s prerequisites is to improve communication between domain experts on one hand and generic knowledge managers on the other.
That makes information systems a special case of knowledge systems, as they fulfill the same principles:
- Like knowledge systems, information systems manage symbolic representations of external objects, events or activities purported to be relevant.
- System models are assertions regarding legitimate business objects and operations.
- Likewise, information systems are meant to support efficient computation and user-friendly interactions.
Yet, two functional qualifications are to be considered:
- The first is about the role of data processing: contrary to KM systems, information systems are not meant to process data into information.
- The second is about the role of events processing: contrary to KM systems, information systems have to manage the actual coupling between context and symbolic surrogates.
The rapid melting of both distinctions points to the convergence of information and knowledge systems.
Data, Information, Knowledge
Facts are not given but have to be captured as data before being processed into useful information. Depending on the system’s purpose, that can be achieved with one of two basic schemes:
- With data mining the aim is to improve decisions by making business sense of actual observations. Information is meant to be predictive, concrete and directly derived from data; it is a resource whose value and shelf-life are set by transient business circumstances.
- With systems analysis the aim is to build software applications supporting business processes. Information is meant to be descriptive or prescriptive, symbolic, and defined with regard to business objectives and users’ practice; it is an asset whose value and shelf-life depend on the persistency of technical architecture and the continuity of business operations.
Whatever the purpose and scheme, information has to be organized around identified objects or processes, with defined structures and semantics. When the process is fed by internal data from operational systems, information structures and semantics are already defined and data can be directly translated into knowledge (a).
Otherwise (b), the meaning and relevancy of external data have to be related to the enterprise business model and technical architecture. That may be done directly by mapping data semantics to known descriptions of objects and processes; alternatively, raw data may be used to consolidate or extend the symbolic representation of contexts and concerns, and consequently the associated knowledge.
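The two intake paths can be sketched as follows; the `SCHEMA` registry and the record layouts are hypothetical placeholders, not an actual enterprise model:

```python
# Sketch of the two data-intake schemes: internal data whose structure and
# semantics are already defined by operational systems (a), versus external
# data whose semantics must first be mapped to the enterprise model (b).

# Hypothetical registry of known object descriptions (the enterprise model).
SCHEMA = {"customer": {"id", "name", "segment"}}

def intake(record: dict, source: str) -> dict:
    if source == "external":
        # (b) external data: try to map its fields onto a known description...
        for kind, fields in SCHEMA.items():
            if fields <= record.keys():
                return {"kind": kind, "facts": {k: record[k] for k in fields}}
        # ...otherwise keep it as raw context, to consolidate the model later.
        return {"kind": "unmapped", "facts": record}
    # (a) internal data: structure and semantics predefined, direct translation.
    return {"kind": "operational", "facts": record}

print(intake({"id": 1, "name": "Ada", "segment": "B2B"}, "external")["kind"])  # customer
print(intake({"tweet": "..."}, "external")["kind"])                            # unmapped
```

The point of the sketch is only the branching: scheme (a) translates directly, scheme (b) either maps to known semantics or falls back to consolidation.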
The next step is to compare this knowledge perspective to enterprise architectures and governance layers.
Enterprise Governance & Knowledge
From an architecture perspective, enterprises are made of human agents, devices, and symbolic (aka information) systems. From a business perspective, processes combine three categories of tasks: decision-making, monitoring, and execution. With regard to governance the primary objective is therefore to associate those categories to enterprise architecture components:
- Decision-making can only be performed by human agents entitled to commitments in the name of the enterprise, individually or collectively. That is meant to be based on knowledge.
- Executing physical or symbolic processes can be done by human agents, individually or collectively, or by devices and software systems subject to compatibility qualifications. That can be done with information (symbolic flows) or data (non symbolic flows).
- Monitoring and controlling the actual effects of processes execution call for symbolic processing capabilities and can be achieved by human or software agents. That is supposed to be based on information (symbolic flows).
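The association between task categories and architecture components described above can be encoded as a simple table; the agent and task names are illustrative:

```python
# Mapping the three task categories to eligible agents and supporting flows,
# following the distinctions above (names are illustrative).
GOVERNANCE = {
    "decision-making": {"agents": {"human"}, "basis": "knowledge"},
    "execution": {"agents": {"human", "device", "software"}, "basis": "information or data"},
    "monitoring": {"agents": {"human", "software"}, "basis": "information"},
}

def may_perform(agent: str, task: str) -> bool:
    return agent in GOVERNANCE[task]["agents"]

print(may_perform("software", "decision-making"))  # False: commitments need human agents
print(may_perform("software", "monitoring"))       # True: symbolic processing suffices
```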
On that basis, the business value of systems will depend on two primary achievements:
- Mapping business models to changing environments by sorting through facts, capturing the relevant data, and processing the whole into meaningful and up-to-date information.
- Putting that information into effective use through their business processes and supporting systems.
As long as business circumstances are stable, external and internal data can be set along commensurate time-spans and be processed in parallel. Along that scheme information is either “mined” from external data or directly derived (aka interpreted) from operational (aka internal) data by “knowledgeable” agents, human or otherwise.
But that dual scheme may become less effective under the pressure of volatile business opportunities, and obsolete given technological advances; bringing together data mining and production systems may therefore become both a business necessity and a technical possibility. More generally that would call for the merging of knowledge management and information systems, for symbolic representations as well as for their actual coupling with changes in environments.
Business Driven Perspective
As already noted, functional categories defined with regard to use (e.g business processes, software engineering, or operations) fall short when business processes and software applications are entwined one with the other, within the enterprise as well as without (cf IoT). In that case governance and knowledge management are better supported by an integrated processing of information organized with regard to the scope and time-span of decision-making:
- Assets: shared decisions whose outcome bears upon multiple business domains and cycles. Those decisions may affect all architecture layers: enterprise (e.g organization), systems (e.g services), or platforms (e.g purchased software packages).
- Business value: streamlined decisions governed by well identified business units driven by changing business opportunities. Those decisions provide for straight dependencies from enterprise (business domains and processes), to systems (applications) and platforms (e.g quality of service).
- Non functional: shared decisions about scale and performances driven by changing technical circumstances. Those decisions affect locations (users, systems, or devices), deployed resources, or configurations.
Whereas these categories of governance don’t necessarily coincide with functional ones, they are to be preferred if supporting systems are to be seamlessly fed by internal and external data flows. In any case functional considerations are better dealt with by decision-makers in their specific organizational and business contexts.
Events & Decision Making
Weaving together information processing and knowledge management also requires actual coupling between changes in environments and the corresponding state of symbolic representations.
That requirement is especially critical when enterprise success depends on its ability to track, understand, and take advantage of changes in business environment.
In principle, that process can be defined by three basic steps:
- To begin with, the business time-frame (red) is set by facts (t1) registered through the capture of events and associated data (t2).
- Then, a symbolic intermezzo (blue) is introduced during which data is analyzed, information updated (t3), knowledge extracted, and decisions taken (t4);
- Finally, symbolic and business time-frames are to be synchronized through decision enactment and corresponding change in facts (t5).
But that phased approach falls short with digitized environments and the ensuing collapse of fences between enterprises and their environment. In that context decision-making has often to be carried out iteratively, each cycle following the same pattern:
- Observation: understanding of changes in business opportunities.
- Orientation: assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations.
- Decision: weighting of options with regard to enterprise capabilities and broader objectives.
- Action: carrying out of decisions within the relevant time-frame.
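The iterative cycle above can be sketched as a single loop step; the selection rules are hypothetical stand-ins for actual observation, orientation, and decision logic:

```python
# Minimal sketch of one observation/orientation/decision/action cycle.
def ooda_cycle(environment: dict, position: dict) -> dict:
    observed = {k: v for k, v in environment.items() if v is not None}   # observe changes
    relevant = {k: v for k, v in observed.items() if k in position}      # orient: assess pertinence
    decision = {k: v for k, v in relevant.items() if v != position[k]}   # decide: pick what to change
    position.update(decision)                                            # act within the time-frame
    return position

print(ooda_cycle({"price": 12, "demand": None}, {"price": 10}))  # {'price': 12}
```

Each pass filters the raw environment down to pertinent, actionable changes before enacting them, which is the pattern the four bullets describe.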
Data analysis and decision-making processes must therefore be weaved together, and operational loops coupled with business intelligence:
But that shift of the decision-making paradigm from discrete and periodic to continuous and iterative implies a corresponding alignment of supporting information regarding assets, business value, and operations.
Assuming that decisions are to be taken at the “last responsible moment”, i.e the point up to which deferring the choice doesn’t affect the options, governance has to distinguish between three basic categories:
- Operational decisions can be put to effect immediately. Since external changes can also be taken into account immediately, the timing is to be set by events occurring within the interval of production life-cycles.
- Business value decisions are best enacted at the start of production cycles using inputs consolidated at completion. When analysis can be done in no time (t3=t4) and decisions enacted immediately (t4=t5), commitments can be taken from one cycle to the next. Otherwise some lag will have to be introduced. The last responsible moment for committing a decision will therefore be defined by the beginning of the next production cycle minus the time needed for enactment.
- With regard to assets, decisions are supposed to be enacted according to predefined plans. The timing of commitments should therefore combine planning (when a decision is meant to be taken) and events (when relevant and reliable information is at hand).
That taxonomy broadly coincides with the traditional distinction between operational, tactical, and strategic decisions.
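The timing rule for business-value decisions stated above can be written down directly; times are plain numbers here, and the function names are illustrative:

```python
# "Last responsible moment" for a business-value decision: the beginning of
# the next production cycle minus the time needed for enactment.
def last_responsible_moment(next_cycle_start: float, enactment_time: float) -> float:
    return next_cycle_start - enactment_time

def can_still_commit(now: float, next_cycle_start: float, enactment_time: float) -> bool:
    return now <= last_responsible_moment(next_cycle_start, enactment_time)

print(last_responsible_moment(100.0, 15.0))   # 85.0
print(can_still_commit(90.0, 100.0, 15.0))    # False: too late to enact by cycle start
```

When analysis and enactment take no time, the enactment term vanishes and commitments can indeed be taken from one cycle to the next, as noted above.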
Next, the integration of decision-making processes has to be supported by a consolidated description of data resources, information assets, and knowledge services; and that can be best achieved with ontologies.
As introduced long ago by philosophers, ontologies are systematic accounts of existence for whatever is considered, in other words some explicit specification of the concepts meant to make sense of a universe of discourse. From that starting point three basic observations can be made:
- Ontologies are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
- Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
- Ontologies are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.
With regard to models, only the second observation sets ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes. As a corollary, ontologies could be used as templates (or meta-models) encompassing the whole range of information pertaining to enterprise governance.
Along that reasoning, the primary objective would be to distinguish contexts with regard to source and time-frame, e.g:
- Social: pragmatic semantics, no authority, volatile, continuous and informal changes.
- Institutional: mandatory semantics sanctioned by regulatory authority, steady, changes subject to established procedures.
- Professional: agreed upon semantics between parties, steady, changes subject to established procedures.
- Corporate: enterprise defined semantics, changes subject to internal decision-making.
- Personal: customary semantics defined by named individuals.
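That context taxonomy can be encoded as a simple lookup table; the attribute names are illustrative assumptions:

```python
# The coarse-grained taxonomy of contexts above, as a lookup table.
CONTEXTS = {
    "social":        {"authority": None,         "stability": "volatile"},
    "institutional": {"authority": "regulatory", "stability": "steady"},
    "professional":  {"authority": "agreement",  "stability": "steady"},
    "corporate":     {"authority": "enterprise", "stability": "managed"},
    "personal":      {"authority": "individual", "stability": "customary"},
}

def needs_change_procedure(context: str) -> bool:
    # Institutional and professional semantics only change through
    # established procedures; social semantics change informally.
    return CONTEXTS[context]["stability"] == "steady"

print(needs_change_procedure("institutional"))  # True
print(needs_change_procedure("social"))         # False
```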
That coarse-grained taxonomy, set with regard to the social basis of contexts, should be complemented by a fine-grained one to be driven by concerns. And since ontologies are meant to define existential circumstances, it would make sense to characterize ontologies according to the epistemic nature of targeted items, namely terms, documents, symbolic representations, or actual objects and phenomena. That would outline four basic concerns that may or may not be combined:
- Thesaurus: ontologies covering terms and concepts.
- Content Management Systems (CMS): ontologies covering documents with regard to topics.
- Organization and Business: ontologies pertaining to enterprise organization, objects and activities.
- Engineering: ontologies pertaining to the symbolic representation of products and services.
That taxonomy puts ontologies at the hub of enterprise architectures, in particular with regard to economic intelligence.
Insofar as enterprises are concerned, knowledge is recognized as a key asset, as valuable as financial ones if not more: whatever their nature, assets lose value when left asleep and bear fruit when kept awake; that’s doubly the case for data and information:
- Digitized business flows accelerate data obsolescence and make it continuous.
- Shifting and porous enterprise boundaries and market segments call for constant updates and adjustments of enterprise information models.
Given the growing impact of knowledge on the capability and maturity of business processes, data mining, information processing, and knowledge management should be integrated into a comprehensive and consistent framework. Sometimes labelled as economic intelligence, that approach makes a functional and operational distinction between data as resources, information as assets, and knowledge as services.
That can be achieved with ontologies used for enterprise architecture, as illustrated by the Caminao ontological kernel.
Melting the informational and behavioral schemes of knowledge management into operational systems creates a new breed of symbolic systems whose evolution can no longer be reduced to planned designs but may also include some autonomous capability. That possibility is bolstered by the integration of enterprise organization and systems with their business environment; at some point it may be argued that enterprise architectures emerge from a mix of cultural sediments, economic factors, technology constraints, and planned designs.
As enterprises grow and extend, architectures become more complex and have to be supported by symbolic representations of whatever is needed for their management: assets, roles, activities, mechanisms, etc. Hence the benefits of distinguishing between two kinds of models:
- Models of business contexts and processes describe actual or planned objects, assets, and activities.
- Models of symbolic artifacts describe the associated system representations used to store, process, or exchange information.
This apparent symmetry between models can be misleading as the former are meant to reflect a reality but the latter are used to produce one. In practice there is no guarantee that their alignment can be comprehensively and continuously maintained.
Assuming that enterprise architecture entails some kind of documentation, changes in actual contexts will induce new representations of objects and processes. At this point, the corresponding changes in models directly reflect actual changes, but the reverse isn’t true. For that to happen, i.e for business objects and processes being drawn from models, the bonds between actual and symbolic descriptions have to be loosened, giving some latitude for the latter to be modified independently of their actual counterpart. As noted above, specialization will do that for local features, but for changes to architecture units being carried on from models, abstractions are a prerequisite.
Interestingly, genetics can be used as a metaphor to illustrate the relationships between environments, enterprise architectures (organisms), and code (DNA).
According to classical genetics (left), phenotypes (actual forms and capabilities of organisms) inherit through the copy of genotypes (as coded by DNA), and changes between generations can only be carried out through changes in genotypes. Applied to systems, it would entail that changes can only happen after being programmed into the applications supporting enterprise organization and business processes.
The Extended Evolutionary Synthesis (right) considers the impact of non coded (aka epigenetic) factors on the transmission of the genotype between generations. Applying the same principles to systems would introduce new mechanisms:
- Enterprise organization and its usage of systems could be adjusted to changes in environments independently of changes in coded applications.
- Enterprise architects could assess those changes and practical adjustments, plan systems evolution, and use abstractions to consolidate their new designs with legacy applications.
- Models and applications would be transformed accordingly.
That epigenetic understanding of systems would put the onus of their evolutionary fitness on the plasticity and versatility of applications.
EA: Entropy Antidote
The genetics metaphor comes with a teleological perspective as it assumes that nothing happens without a reason. Cybernetics goes the other way and assumes that disorder and confusion will ensue from changing environments and tentative adjustments.
Originally defined in thermodynamics as a measure of energy dissipation, the concept of entropy has been taken over by cybernetics as a measure of the (supposedly negative) variation in the value of information supporting an organism’s sustainability. Applied to enterprise governance, entropy will be the result of untimely and inaccurate information about the actual state of assets, capabilities, and challenges.
One way to assess those factors is to classify changes with regard to their source and modality.
With regard to source:
- Changes within the enterprise are directly meaningful (data>information), purpose-driven (information>knowledge), and supposedly manageable.
- Changes in environment are not under control, they may need interpretation (data<?>information), and their consequences or use are to be explored (information<?>knowledge).
With regard to modality:
- Data associated with planned changes are directly meaningful (data>information) whatever their source (internal or external); internal changes can also be directly associated with purpose (information>knowledge);
- Data associated with unplanned internal changes can be directly interpreted (data>information) but their consequences have to be analyzed (information<?>knowledge); data associated with unplanned external changes must be interpreted (data<?>information).
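The source and modality distinctions above can be combined into a single classification; the labels are illustrative, and the rules follow the two modality bullets:

```python
# For each kind of change, is the data>information step direct, and is the
# information>knowledge step direct?
def classify(source: str, planned: bool) -> dict:
    internal = source == "internal"
    return {
        # Planned changes are directly meaningful whatever their source;
        # so are unplanned changes from within the enterprise.
        "data_to_information": "direct" if (planned or internal) else "interpretation needed",
        # Only planned internal changes map directly onto purpose;
        # everything else calls for analysis of consequences.
        "information_to_knowledge": "direct" if (planned and internal) else "analysis needed",
    }

print(classify("internal", planned=False))
# {'data_to_information': 'direct', 'information_to_knowledge': 'analysis needed'}
```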
Assuming with Stafford Beer that viable systems must continuously adapt their capabilities to their environment, this taxonomy has direct consequences for enterprise governance:
- Changes occurring within planned configurations are meant to be dealt with, directly (when stemming from within enterprise), or through enterprise adjustments (when set in its environment).
- That assumption cannot be made for changes occurring outside planned configurations because the associated data will have to be interpreted and consequences identified prior to any decision.
Enterprise governance will therefore depend on the way those changes are taken into account, and in particular on the capability of enterprise architectures to process the flows of associated data into information, and to use it to deal with variety. On that account the key challenge is to manage the relevancy and timely interpretation and use of the data, in particular when new data cannot be mapped into a predefined semantic frame, as may happen with unplanned changes in contexts. How that can be achieved will depend on the processing of data and its consolidation into information as carried out at enterprise level or by business and technical units.
Within that working assumption, the focus is to be put on enterprise architecture capability to “read” environments (from data to information), as well as to “update” itself (putting information to use as knowledge).
Enterprise Environment: Internet & Semantic Web
Nowadays and for all practical purposes, it may be assumed that enterprises have to rely on the internet to “read” their physical or symbolic environment. Yet, as suggested by the labels Internet of Things (IoT) and Semantic Web, two levels must be considered:
- Identities of physical or social entities are meant to be uniquely defined across the internet.
- Meanings are defined by users depending on contexts and concerns; by definition they may overlap or even contradict one another.
Depending on purpose and context, meanings can be:
- Inclusive: can be applied across the whole of environments.
- Domain specific: can only be applied to circumscribed domains of knowledge. That’s the aim of the semantic web initiative and the Web Ontology Language (OWL).
- Institutional: can only be applied within specific organizational or enterprise contexts. Those meanings could be available to all or through services with restricted access and use.
That can be illustrated by a search about Amedeo Modigliani:
- An inclusive search for “Modigliani” will use heuristics to identify the artist (a). An organizational search for a homonym (e.g a bank customer) would be dealt with at enterprise level, possibly through an intranet (c).
- A search for “Modigliani’s friends” may look for the artist’s Facebook friends if kept at the inclusive level (a1), or switch to a semantic context better suited to the artist (a2). The same outcome would have been obtained with a semantic search (b).
- Searches about auction prices may be redirected or initiated directly, possibly subject to authorization (c).
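The three-level dispatch of the search example can be sketched as follows; the routing heuristics are purely illustrative assumptions, not a description of any actual search engine:

```python
# Route a query to one of the three semantic levels of the example above.
def route(query: str, restricted_topics: set) -> str:
    if any(t in query for t in restricted_topics):
        return "institutional"  # e.g. auction prices or bank customers: access-controlled (c)
    if "'" in query or " " in query.strip():
        return "domain"         # composite queries switch to a suited semantic context (a2/b)
    return "inclusive"          # plain names resolved by web-wide heuristics (a)

print(route("Modigliani", set()))                       # inclusive
print(route("Modigliani's friends", set()))             # domain
print(route("Modigliani auction prices", {"auction"}))  # institutional
```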
Given the interconnection with their material and social environments, enterprises have to align their information systems with internet semantic levels. But knowledge being no respecter of boundaries, there are clear hazards in exposing newborn knowledgeable systems to the influence of anonymous and questionable external sources.
EA: Pagoda Architecture Blueprint
The impact of digital environments goes well beyond a shallow transformation of digitized business processes; it requires a deep integration of enterprises’ ability to refine data flows into information assets to be put to use as knowledge:
- Acquisition of business data flows at platform level.
- Integration of business intelligence and information models.
- Integration of information assets with knowledge management and operational decision-making.
Such an information backbone supporting architecture layers tallies with the Pagoda architecture blueprint, whose ubiquity around the world demonstrates its effectiveness in ensuring resilience and adaptability to external upsets.
Porous Borders and Unfathomable Environments
Since knowledge cannot be neatly tied up in airtight packages, enterprise systems have to exchange more than data with their environment. Dedicated knowledge management systems, by filtering the meanings of incoming information, insulate core enterprise systems from stray or hazardous interpretations. But intertwining KM and production systems makes the fences more porous, and the risks are compounded by the spread of intelligent but inscrutable systems across the net. As a consequence securing accesses to information is not enough and systems must also secure their meanings (inclusive, specific, or institutional), and their origin.
For that purpose a distinction has to be made between two categories of “intelligent” sources:
- Fellow KM systems relying on symbolic representations that allow for explicit reasoning: data is “interpreted” into information which is then put to use as the knowledge governing behaviors.
- Intelligent devices relying on neuronal networks carrying out implicit information processing: data is “compiled” into neuronal connections whose weights (representing knowledge) are tuned iteratively based on behavioral feedback.
On that basis systems capability and responsibility can be generalized from enterprise to network level:
- Embedded knowledge from identified and dedicated devices can directly feed process control whenever no responsibility is engaged (a).
- Feeding explicit knowledge management with external implicit (aka embedded) knowledge is more problematic due to the hazards of mixed and potentially inconsistent semantics (b).
- Symbolic knowledge can be used (c) to distribute information, or support decision-making and process control.
As a concluding remark, it appears that the convergence of information and knowledge management systems is better apprehended in the broader perspective of an intertwined network of symbolic systems characterized by their status (inclusive, specific, organizational) and modus operandi (explicit or implicit).