Learning means pushing the edges of knowledge; for decision-making that’s discovering similarities and patterns in datasets (observation), sorting out causation from correlations (orientation), setting apart accountability from traceability (decision), and improving organisational and operational integration (action).
Such a learning process must encompass environments, organization, and systems, and take into account the epistemic status of representations: actual, past, planned, expected, hypothetical, virtual, fictional, etc. That can only be achieved with ontologies.
Observation: Environment & data
The aim of OODA’s observation is to lay the groundwork for a reliable appraisal of the present or future states of environments. To that end, observation starts with raw data and is meant to deliver relevant and reliable information.
Data & Uncertainties
Observation can rely on four kinds of resources:
- Datasets obtained from environments (data mining) or operations (process mining); datasets can be exhaustive (population) or selective (samples)
- External events reflecting changes in environments
- Internal (enterprise) events reflecting changes in managed information triggered by applications
- Aggregate measurements from external sources
Metadata documenting the provenance of resources (where, when, and how they were obtained, who is the provider) are meant to be managed outside the OODA loop.
Conversely, data resources themselves can be improved within OODA loops, enhancing the relevance of observations (data) with regard to current models (information) and objectives (knowledge).
Without entering into the nuts and bolts of data analytics, the main issue for observation is to deal with the heterogeneity and uncertainty of sources; hence the benefits of using ontologies to manage the different kinds: events, managed information, populations, or datasets.
Frequencies & Probabilities
Uncertainty is arguably at the root of decision-making processes, whether it is for the measurement of facts or the relevance of issues. With regard to statistics the issue is the distinction between frequencies and probabilities:
- Frequencies deal with data observations without making explicit assumptions about the relationships between observation and reality; e.g. rolling dice or flipping coins
- Probabilities deal with expectations that could be worked out from the observation of bounded populations (e.g. pebbles in urns, bank customers’ loans) or samples meant to represent unbounded populations (e.g. public sector employees, bank loans in general population)
The role of observation is thus to sort the resources accordingly:
- Statistics can be obtained for the ratio of employees in the public sector, but the ratio of employees with bank loans can only be estimated from samples
- Frequencies can be computed for the bank’s loans to its customers, or the defaults thereof
- But there is no point in computing the frequency of defaults for public employees among the bank customers: if observations are meant to orient a loan policy they must apply probabilities to the whole population
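That sorting can be sketched in a few lines of Python (all figures are invented for the illustration): an exact frequency computed on the bank’s own loan book, against a probability estimated from a sample meant to represent the wider population:

```python
import math

# Frequencies: computed directly on a bounded, fully observed population,
# e.g. the bank's own loan book (figures are invented for the sketch).
loans, defaults = 2000, 60
default_frequency = defaults / loans          # exact ratio, no inference

# Probabilities: estimated from a sample meant to represent an unbounded
# population, e.g. defaults among public-sector employees at large.
sample_size, sample_defaults = 150, 6
p_hat = sample_defaults / sample_size         # point estimate
# 95% normal-approximation confidence interval around the estimate
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"frequency (population): {default_frequency:.3f}")
print(f"probability (sample):   {p_hat:.3f} ± {margin:.3f}")
```

The point estimate alone would hide the sampling uncertainty; the interval makes it explicit, which is precisely what separates an estimated probability from a computed frequency.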
Partitions & Correlations
Beyond their use and misuse, the rapid and ubiquitous expansion of machine learning technologies has put the focus on the distinction between data (inputs) and information (outcomes); an all the more necessary distinction when, as is usually the case for decision-making, inputs combine observations (data) with models (information).
Taking the loans example, the objective could be to identify marketing segments (targeted variables) based on observed data and managed information (explanatory variables):
- Partitions describe all kinds of classifications (segments, data level)
- Powertypes describe partitions applied to managed representations (categories, information level)
- Marketing (Virtual) partitions can be defined by crossing managed ones (epistemic status, knowledge level)
Marketing partitions would be applied to datasets (data) in order to identify correlations between values of independent variables defined by powertypes (information) and values of dependent variables (e.g. consumer preferences):
The outcomes would then be used to define market segments (knowledge):
Improvements (or learning) would come from honing the dataset with regard to independent (or explanatory) variables, e.g. adding age and income, and then tuning the cursors on data, e.g. age intervals.
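The crossing of partitions and the tuning of cursors can be sketched as follows (plain Python, with invented customer records; the age cut-off stands for the tunable cursor):

```python
from collections import defaultdict

# Invented records mixing observed data (age) with managed information
# (sector) and a dependent variable (consumer preference).
customers = [
    {"age": 27, "sector": "public",  "preference": "A"},
    {"age": 34, "sector": "private", "preference": "B"},
    {"age": 52, "sector": "public",  "preference": "A"},
    {"age": 45, "sector": "private", "preference": "B"},
    {"age": 61, "sector": "public",  "preference": "A"},
]

# Powertype-like partition on managed information; the cut-off is the
# tunable cursor mentioned above.
def age_band(age, cut=40):
    return f"under_{cut}" if age < cut else f"{cut}_plus"

# Virtual (marketing) partition obtained by crossing managed partitions,
# counting preference values within each crossed segment.
segments = defaultdict(lambda: defaultdict(int))
for c in customers:
    segments[(age_band(c["age"]), c["sector"])][c["preference"]] += 1

for segment, prefs in sorted(segments.items()):
    print(segment, dict(prefs))
```

Learning would then consist in adding explanatory variables (new keys in the records) or moving the cut-off until segments align with observed preferences.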
Complexity & Business Intelligence
Being set at the entry point of decision-making processes, observation must strike a balance between the complexity of environments and their relevance with regard to managed information; hence the need for a yardstick to assess and improve the effectiveness of partitions.
Taking a cue from cybernetics, entropy can be used to assess complexity, or more precisely the part of datasets that cannot be explained by partitions, ranging from zero (all instances in each segment share the same values for explanatory variables) to one (the values for explanatory variables are equally distributed within segments).
But complexity by itself is of little use, as demonstrated ad absurdum by the same null entropy (or perfect correlation) obtained when a single explanatory variable is applied. Taking the loan defaults example, a single explanatory variable (e.g. account balance, under or above 500) will result in zero entropy (full alignment of dependent and independent variables).
Entropy should thus be used as a constraint, the objective being to maximise the information extracted from datasets while keeping complexity in check. For instance, provided that other factors or circumstances remain the same (ceteris paribus), account balances would be neatly correlated with loan defaults … a useless truism for decision-makers. Introducing activity sector (public or private) as an explanatory variable would add a new dimension and therefore increase complexity (entropy .7), but it would also enable a better understanding of customers’ behaviors.
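A minimal sketch of normalised entropy used as such a yardstick, with illustrative counts of (default, no-default) outcomes per segment; the single-variable case collapses to the useless zero entropy, while the richer partition trades some complexity for usable intelligence:

```python
import math

def entropy(counts):
    """Shannon entropy of outcome counts, normalised to [0, 1]."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    if len(probs) <= 1:
        return 0.0        # pure segment: fully explained
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(probs))

def partition_entropy(segments):
    """Weighted average entropy of (default, no-default) counts per segment."""
    total = sum(sum(counts) for counts in segments)
    return sum(sum(counts) / total * entropy(counts) for counts in segments)

# Single explanatory variable (balance under/above 500): every segment is
# pure, so entropy collapses to zero -- perfect alignment, useless truism.
print(partition_entropy([(50, 0), (0, 50)]))

# Adding activity sector yields mixed segments: entropy rises, but the
# partition carries more usable business intelligence.
print(partition_entropy([(30, 10), (5, 25), (20, 10)]))
```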
That example illustrates how ontologies can help observation by making explicit the trade-offs between the complexity of environments and the relevance of business intelligence.
Pairing decision-making and business intelligence upfront turns observation into a dual entry gate for data flows:
- Towards predictive models, in support of business intelligence, targeting enterprises’ environment
- Towards descriptive models, ensuring the efficiency and sustainability of business processes, targeting organization and systems
These agendas may conflict: on the one hand, predictive models focused on the shifting complexity of competitive environments; on the other hand, descriptive models focused on the stability of shared representations. Nonetheless, predictive (business intelligence) and descriptive (enterprise architecture) models must be kept geared to each other: architecture capabilities evolving with business models, and business processes making the best of architecture capabilities.
The integration of machine learning (implicit knowledge) and knowledge graphs (explicit knowledge) technologies enables some event-triggered shortcuts, directly from observation to decisions based on “canned” orientation, or even to immediate actions.
Otherwise, orientation is needed to deal with massive and unstructured flows of data, calling for trade-offs between too much complexity (high entropy) and the overfitting of explanatory variables to datasets (low entropy).
Orientation: Systems & information
The aim of OODA’s orientation is to set the rationales for the courses of action. To that end, given relevant and reliable observations, orientation will map contexts, set time frames, and build causal chains.
Context & Purposes
Taking a cue from travellers, maps can serve two kinds of purpose: sightseeing or planning. While only the latter is directly involved in decision-making, in both cases orientation starts with identifying territories (e.g. populations of loans and defaults) and the relevant maps (types of managed information).
But as already stressed, enterprises’ decision-making can be set in multiple temporal as well as epistemic dimensions.
Horizons & Time frames
Setting aside labelling controversies, enterprises’ decision-making time frames are typically defined as operational, tactical, and strategic; these horizons can be characterized by the availability of information:
- Operational: observations can be directly mapped to information and put to use as knowledge, thus allowing for routine decision-making.
- Tactical: partial or imperfect information can be improved with additional observations until causal chains can be secured.
- Strategic: unreliable or insufficient observations and doubtful causal chains prevent cost/benefit assessments within the time frames considered.
Corresponding cycles can be nested depending on the kind of orientation:
- Settled orientation: causations built on reliable observations (data), stable models (information), and well-defined business options (knowledge).
- Shifting orientation: causations built on reliable observations and well-defined business options, but conditioned by changes in the representation of managed information.
- Subsumed orientation: causations built on reliable observations and stable models, but subject to changes in business options.
The issue is then to ensure the interoperability of orientations (and consequently decisions) across time frames.
Enterprise decision-making is usually set across multiple epistemic dimensions, physical or symbolic, actual or virtual, possibly overlapping; typical examples are: markets (spatial), forecasts (temporal), project management (functional), game theory (behavioral), or designs (structural).
Epistemic dimensions (or modalities) are often confused with business domains and uniformly represented by semantic networks or conceptual graphs. That misconception overlooks the ontological difference between representations; to take a simple example: an actual vehicle, a concept car, a car design, a digital car in a video, a car character in a story, can exist in parallel yet connected dimensions. An undifferentiated (or flattened) representation of these dimensions using relationships will produce an exponential complexity preventing effective orientation; hence the need for ontologies to represent epistemic modalities.
In the loans example, epistemic modalities (EpistemicModes≈) are used for expected or planned business circumstances and enterprise policies. While Dataset_ represents actual collections of identified individuals (e.g. the bank’s current customers), Population_ is used to represent virtual collections of unidentified ones (e.g. planned bank customers and loans). Kernel reasoning connectors are then used between actual and virtual representations (e.g. current and planned customer loans).
Epistemic modalities can also be applied to hypothetical representations supporting the behavior of external agents (typically competitors), bringing all virtual realities under a common roof, and opening the door to strategic planning and decision games.
Discriminative & Generative Models
In contrast to observation, which is focused outward on data from environments (aka territories), orientation looks inward to information models (aka maps), more precisely to the symbolic representation of objects, events, and activities managed by enterprises; the aim being to identify causations through the matching of observed correlations and patterns (data) to managed structures and semantics (information).
The pivot, from data and correlations to information and causations, coincides with a statistical one from discriminative to generative models, the former focused on labelling datasets, the latter on explaining them.
Taking the loans example, a discriminative approach will try to classify loan defaults with regard to explanatory variables (e.g. age, income, or activity sector), without considering direct action in the business environment. Alternatively, a generative approach will look back and consider how a dependent variable like the loan repayment method (say A and B) could be used to reduce loan defaults, given some knowledge of explanatory variables.
That dual orientation, like the use of maps for sightseeing (discriminative) and planning (generative), also corresponds to analytical shifts: from features to individuals, and from correlations to causations.
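The two directions can be sketched with toy loan records (all values invented): the discriminative one labels the data, the generative one looks back to explain it:

```python
from collections import Counter

# Toy loan records: (sector, repayment_method, defaulted); values invented.
records = [
    ("public",  "A", False), ("public",  "B", False),
    ("private", "A", True),  ("private", "A", False),
    ("private", "B", False), ("public",  "A", False),
]

# Discriminative direction: label the data, p(default | sector).
by_sector = Counter((sector, defaulted) for sector, _, defaulted in records)
def p_default_given_sector(sector):
    pos, neg = by_sector[(sector, True)], by_sector[(sector, False)]
    return pos / (pos + neg)

# Generative direction: explain the data, p(method | default), usable to
# ask how acting on the repayment method could reduce defaults.
defaulted_methods = [m for _, m, d in records if d]
def p_method_given_default(method):
    return defaulted_methods.count(method) / len(defaulted_methods)

print(p_default_given_sector("private"))
print(p_method_given_default("A"))
```

The first function only separates classes; the second models how defaults are generated, which is what allows reasoning about interventions.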
Besides nested time frames, the benefits of using ontologies to integrate business intelligence with OODA decision-making appear clearly when conditional (aka Bayesian) probabilities are applied to orientation. Bayesian methods, which determine probabilities based on prior knowledge of conditions deemed to be relevant, are at the core of ML technologies; applied dynamically across the OODA loop they can gear decision-making and learning processes through descriptive and predictive models.
Specifically, conditional probabilities can help to decide whether there is enough information to make a decision, more observations are needed, or a planned course of action can be carried out directly.
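A minimal sketch of such a Bayesian gate, assuming illustrative priors, likelihoods, and thresholds:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of an event after one observation (Bayes' rule)."""
    num = likelihood_if_true * prior
    return num / (num + likelihood_if_false * (1 - prior))

def orient(posterior, act_above=0.9, drop_below=0.1):
    """Map a posterior to an OODA transition (thresholds are arbitrary)."""
    if posterior >= act_above:
        return "decide: carry out the planned course of action"
    if posterior <= drop_below:
        return "decide: discard the option"
    return "observe: gather more evidence"

# Prior belief that a loan will default, updated by an observed late
# payment; likelihoods are illustrative, not calibrated.
posterior = bayes_update(prior=0.05, likelihood_if_true=0.8,
                         likelihood_if_false=0.1)
print(round(posterior, 3), "->", orient(posterior))
```

Applied dynamically, each pass through the loop narrows the posterior until one of the two decision thresholds is reached.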
Decision: Risks & Collective Knowledge
Compared to problem-solving, enterprises’ decision-making entails multiple individual and collective commitments set in shifting and uncertain environments. Hence the three facets of decisions: timing, judgment, and accountability.
Risks & Timing
As stressed before, enterprise decision-making processes are framed by time and uncertainties. Assuming that observation and orientation have done the groundwork of sorting out options and timescales, decisions must be planned so as to maximise returns and minimise risks: for returns, decisions can be put off until the “last responsible moment”, when delaying commitments would reduce the range of options or expected benefits; for risks, postponements may help, e.g.:
- Operational risks stem from implementation challenges; they can be reduced through more training
- Tactical risks stem from insufficient information; improvements can be achieved by postponing decisions until the last responsible moment.
- Strategic risks stem from a lack of visibility; reducing time frames (shorter horizons) can improve visibility.
Reason vs Judgment
When tied to information and knowledge, the transition between orientation and decision raises a dual issue: on the one (information) hand with regard to reasoning, on the other (knowledge) hand with regard to judgment. In between, the objective is to draw a line between decisions that can be carried out directly (business rules) and decisions that combine reasoning (deduction or induction) with judgment. That distinction takes on particular importance for enterprises immersed in digital environments pervaded by AI and ML technologies, as it marks the limit between what can be delegated to systems and what should remain under the responsibility of the organization.
To begin with, ontologies can set business rules apart from reasoning rules: the former (businessRule_) for decisions that may directly affect environments, e.g. loans are declared in default when overdue by more than 20 days (pushRule_); the latter (logicExpression_) for reasoning affecting representations without direct consequences in the business environment, e.g. default expectations for loans with a small overdue and low account balance (rpDeduction_).
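A hypothetical Python rendering of that distinction, reusing the stereotype names from the text as comments; the rule bodies and thresholds follow the loan examples above:

```python
# Hypothetical rendering of the two stereotypes named in the text:
# pushRule_ (business rule with direct consequences in the environment)
# and rpDeduction_ (reasoning over representations only).

class PushRule:                      # businessRule_ / pushRule_
    def apply(self, loan):
        if loan["overdue_days"] > 20:
            loan["status"] = "default"   # changes the managed state
        return loan

class Deduction:                     # logicExpression_ / rpDeduction_
    def infer(self, loan):
        # expectation only, no commitment: small overdue, low balance
        return 0 < loan["overdue_days"] <= 20 and loan["balance"] < 500

loan = {"overdue_days": 25, "balance": 300, "status": "active"}
print(PushRule().apply(loan)["status"])
print(Deduction().infer({"overdue_days": 10, "balance": 300}))
```

The push rule commits the enterprise; the deduction merely updates its representations, and can therefore be delegated to systems without accountability concerns.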
While decisions may involve judgment on top of both reasoning (induction) and business rules, the distinction can be aligned with the ontological one between alethic and deontic logic modalities, respectively.
Then, decisions should be backed by goals and rationales attached to roles and/or organizational units.
Buy House Process: Generic representation and orientation instance
Finally, pairing decision-making processes with epistemic modalities enables the representation of multi-agent systems combining goals (or purposes) and intents (or beliefs) with the changing states of data, information, and knowledge.
On a broader perspective the mapping of decision-making in terms of alethic (truth-based) and deontic (value-based) rationales can set the grounds of governance ethics.
Organization & Accountability
As already noted, orientation can serve two different kinds of purpose, sightseeing or planning. While only the latter entails actual decisions, the former can rely on probing actions to improve prior knowledge and consequently actual decisions. The first objective should thus be to ensure full explainability through a built-in distinction between traceability (for reasoning) and accountability (for judgments). Ontologies could then be used to frame decisions with regard to underpinnings and organization:
- Systems, for reasoning
- External agents (as actors), for personal accountability
- Roles, for functional accountability
- Internal agents, for functional and personal accountability
- Collective agents, for organizational accountability
Such organizational transparency, combined with explainability, adds critical learning capabilities to decision-making processes.
Explainability is arguably a major hindrance to the generalization of AI and ML technologies in enterprises’ systems. Yet most of the difficulty may come from the confusion between problem-solving and decision-making, and more precisely between traceability and accountability. Hence the benefits of matching decision-making with the types of inputs (data, information, knowledge) and agents (external actors, enterprise agents, systems), ensuring:
- Transparency with regard to the nature of contributing agents: external, system, enterprise individual or collective entity
- Traceability with regard to reasoning processes: observations, deduction, business rule, inference, or judgment
- Accountability with regard to the identity of contributing agents
To that end decisions must be explicit about assignments and the way actions will be carried out, in particular when decisions encompass multiple commitments over lengthy periods. Setting the course of action dynamically at the decision stage will enhance explainability and more generally learning capabilities.
If decision-making processes are framed by horizons, typically operational, tactical, or strategic, transitions should be defined accordingly:
- Forward (operational): action can be carried out
- Backward (tactical): fine-tuning of decision based on further observation
- Reset (strategic): reassessment of horizons through renewed observation and orientation
At this stage (assuming decisions have been made), what remains is timing, namely how to improve the effectiveness of actions without affecting the expected benefits of decisions.
Action: Operations & Experience
The last step of the OODA loop is meant to carry out enterprises’ commitments with regard to their environment. In principle, decisions taken in any given iteration can only be amended or reversed in subsequent ones; in practice, courses of action are often set across multiple iterations, making room for adjustments depending on the way commitments are carried out.
From a management perspective courses of action can be regrouped into three basic schemes:
- Instant execution without further observation or orientation
- Commitment with delayed execution subject to further observation
- Phased and monitored execution driven by observation and orientation
But when decisions are meant to be executed in digital environments subject to continuous changes, modus operandi must be segmented into holistic execution units along Aristotle’s classical three unities: one goal, one organizational unit, one time frame.
Then, assuming consistency and transparency of orientation and decision-making, modus operandi should ensure the traceability and accountability of execution; that can be achieved through a clear distinction of agency between people and systems, materialized by tools’ interfaces and configurations.
Finally, when modus operandi is supported by ontological prisms, agency can be crossed with symbolic resources, enhancing direct experience (operations) and collective learning (organization):
Operations & Experience
Experience stems from action and thus relies on implicit knowledge; for enterprises that means operations, with direct collaboration and systems’ fine-tuning (a). Transforming operational (implicit) knowledge into managed (explicit) knowledge can be done at system level through collaboration (b) or process mining (c). With regard to systems process mining provides the cog between action and observation as well as the gear across OODA iterations, thus serving two main purposes:
- Continuous monitoring and improvement of actual processes
- Tuning of information models in relation with operational data
With enterprises immersed in digital environments, operational assessments can be integrated into decision-making processes, feeding continuous learning supported by ontologies and ML technologies. That operational osmosis between enterprises and their environment provides the basis of collective learning (d).
Organization & Collective Learning
As far as enterprises are concerned, success is first and foremost a matter of collective achievement; set in competitive environments, it cannot be sustained without continuous collective learning, weaving together the threads of decision-making (organization) and modus operandi (operations) across personal, functional, and organizational levels:
- Personal, learning attained directly through the use of systems and/or collaboration
- Functional, learning attained through statutory contexts (e.g., a committee) or tasks (e.g., a project team)
- Institutional, learning attained between collective entities through representatives and documents
Such collective experience (implicit knowledge) and learning (explicit knowledge) can be achieved through the OODA twofold integration: (1) between action and observation and, (2) between modus operandi and knowledge management.
Transitions between action and observation stages are best defined in terms of feedback, typically supported by process and data mining: the former focused on operations and systems with references to descriptive models, the latter on business intelligence and environments with references to predictive models.
Besides operational improvements, the objective of the transitions is to lay the foundations for further improvements, starting with observations.