Requirements are meant to describe systems in their business contexts; models describe system artifacts. The two should not be confused: the former are supposed to be rooted in concrete descriptions, while the latter aim at their abstract representation.
Models are built from nodes and associations. Nodes refer to instances which are supposed to be uniformly and consistently identified in both business and system contexts; associations may refer to instances (relationships and flows) or classes (specialization and generalization), the former with consistent semantics for business and system realms, the latter with semantics specific to system artifacts.
From that perspective, the mapping of requirements into models can be achieved by selectively applying the two faces of abstraction: first removing information from the description of actual contexts, then building symbolic representations according to business concerns.
Starting with requirements, the challenge is therefore to move up and down the abstraction ladder until one gets the focus right, providing a clear and sharp picture of business context and concerns.
With regard to systems engineering, models’ semantics are unambiguously determined by their target: business environments or systems artifacts:
Models of business environments describe the relevant features of selected objects and behaviors, including supporting systems. Such models are said to be extensional as they target subsets of actual contexts.
Models of systems artifacts specify the hardware and software components of supporting systems. Such models are said to be intensional as they define artifacts to be created.
Climbing up the abstraction ladder, the objective is to align descriptive models of objects and behaviors with prescriptive models of system artifacts. That can be achieved in three steps:
Awareness of contexts: mind the business pie, drop everything else.
Domains of concern: say what features mean.
Symbolic representations: consolidate the descriptions of surrogates.
It is worth noting that the first two levels deal respectively with instances and features of actual objects and activities, while the third deals with artifacts. As a corollary, abstraction at the first two levels should be understood in terms of partitions and subsets, with subtypes introduced only for symbolic representations.
Awareness of Context
As illustrated by sounds (filtering noises) or optics (image point), focusing is a basic perceptual task targeting actual instances of objects, events, or activities. With regard to models, it can be achieved with a two-pronged move:
Adjust the depth of field to encompass the relevant business context. Large depths of field (aka deep focus) will cover concerns across domains, small ones (aka shallow focus) will support specific business concerns.
Single out image points for identified objects or activities deemed to be pivotal.
Too shallow a focus will capture only some of the relevant objects (Piano), activities (Move), or events (Concert). Conversely, extending the focus may go too deep and include irrelevant items (Trumpet, Violinist, Illness, or Repair). Moreover, some image points may depend on others for their identity (Dimensions), or be pointless altogether (Broom).
Domains of Concern
While business contexts are the same for all, business concerns are by nature specific to domains. The challenge for requirements capture is therefore to anchor specific features to shared objects and activities whose identities are set by business context.
For that purpose concerns are to be organized into domains responsible for the identification of anchors (objects, agents, activities) and the semantics of features:
Shared domains deal with anchors whose continuity and consistency have to be managed across domains, independently of activities.
Specific domains deal with anchors whose continuity and consistency can be managed within a single domain.
At this stage the challenge is to distinguish between identified instances of business objects (piano, concert) and processes (cleaning, moving, playing) on one hand, and the description of roles (mover, cleaner, pianist) and business logic (clean, move, play) on the other hand.
It should be kept in mind that such models are still at ground level, as they describe sets of instances; yet they can also be seen as the first step up the abstraction ladder, as their objective is to extract relevant features and overlook others.
While business concerns are partial and biased, the symbolic representations managed at system level must be comprehensive and consistent; that’s the objective of requirements analysis.
To start with, symbolic representations are introduced for each set of objects, roles or activities:
Objects surrogates: used to manage the continuity and consistency of business objects independently of business processes (piano, concert).
Process surrogates: used to manage the continuity and consistency of business operations independently of business objects (move, play, clean).
Roles: used to manage the interactions between actual agents and system functionalities (mover, cleaner, pianist). When the continuity and consistency of operations performed by agents are managed (e.g pianist), roles must be associated with surrogates.
Activities: used to describe business logic (move, play, clean).
Those are “flat” descriptions representing ground level instances. In order to be effectively supported by systems, models may have to be expanded downward by specialization, or upward by generalization.
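As a minimal sketch of how those four kinds of descriptions could be rendered, assuming plain classes and the Piano, Move, and Mover names of the running example (the structure is illustrative, not a prescribed design):

```python
from dataclasses import dataclass, field

@dataclass
class Piano:                    # object surrogate: continuity and consistency
    piano_id: str               # managed independently of business processes
    location: str = "storage"

@dataclass
class Move:                     # process surrogate: continuity and consistency
    move_id: str                # managed independently of business objects
    status: str = "scheduled"

@dataclass
class Mover:                    # role: mediates between an actual agent and
    agent_ref: str              # system functionalities
    assignments: list[str] = field(default_factory=list)

def move(piano: Piano, job: Move, mover: Mover, destination: str) -> None:
    """Activity: business logic described independently of given instances."""
    piano.location = destination
    job.status = "done"
    mover.assignments.append(job.move_id)
```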
Levels of Abstraction
As already noted, specialization and generalization are not symmetric: contrary to the former, the latter modifies the semantics of existing artifacts.
The purpose of specialization is to introduce specific descriptions for subsets of instances or features. For instance, assuming requirements are about moving pianos, the representation must move one step down the abstraction ladder, from concerts to concerts with pianos (see the sketch after this list):
Solo piano concerts are a subset of concerts subject to the same identification mechanisms and integrity constraints (strong inheritance).
The description of moving operations is not used to manage instances and its specialization is only about features (weak inheritance).
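A sketch of the two flavors of inheritance, under the assumption that strong inheritance carries the base identification mechanism while weak inheritance carries features only (names follow the example above):

```python
from dataclasses import dataclass

@dataclass
class Concert:
    concert_id: str                     # identification mechanism and
                                        # integrity constraints of the base

@dataclass
class SoloPianoConcert(Concert):        # strong inheritance: the subset keeps
    piano_ref: str                      # the same identification mechanism

class MovingOperation:                  # description only: no instances are
    def checklist(self) -> list[str]:   # identified or managed through it
        return ["wrap", "load", "transport"]

class PianoMovingOperation(MovingOperation):   # weak inheritance: only
    def checklist(self) -> list[str]:          # features are specialized
        return super().checklist() + ["tune after delivery"]
```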
The purpose of generalization is twofold, as it may be limited to features (aka aspect generalization) or target both identification mechanisms and features (aka object generalization).
Aspect generalization introduces a base artifact for the description of shared features. Such a base artifact is said to be abstract because its inheritance is limited to features and it cannot support the instantiation of surrogates, specialized artifacts keeping their own identification mechanisms. As a corollary, the level of abstraction is not modified because the model remains anchored to the same sets of instances. For instance (a), some administrative procedures can be defined uniformly for all maintenance operations otherwise described and executed independently.
That’s not the case with object generalization, which redefines the initial sets of surrogates as subsets of newly created super-sets. For instance (b), a cleaning process becomes a maintaining process without the repair extension. Since maintaining processes can be created as simple cleaning or repairing ones, the model is anchored to different levels of abstraction. And since descriptions should not cross levels, roles must be specialized similarly: maintainers are to be identified as such before being qualified as mechanics, even if their interventions are not managed as such (inheritance of transient identification mechanism).
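A sketch of the two kinds of generalization, using the maintenance example above; abstract base classes stand in for abstract artifacts (an assumption about rendering, not about the method):

```python
from abc import ABC, abstractmethod

class MaintenanceProcedure(ABC):            # (a) aspect generalization:
    def administrative_record(self) -> dict:    # an abstract base factoring
        return {"form": "MNT-01"}               # out shared features; it
    @abstractmethod                             # cannot instantiate
    def perform(self) -> None: ...              # surrogates

class Cleaning(MaintenanceProcedure):       # subtypes keep their own
    def __init__(self, cleaning_id: str):   # identification mechanisms
        self.cleaning_id = cleaning_id
    def perform(self) -> None:
        print("cleaning", self.cleaning_id)

class Maintaining:                          # (b) object generalization: a
    def __init__(self, maintaining_id: str):    # concrete super-set with its
        self.maintaining_id = maintaining_id    # own identification mechanism

class Repairing(Maintaining):               # instances are identified as
    def __init__(self, maintaining_id: str, defect: str):  # maintaining
        super().__init__(maintaining_id)                    # before being
        self.defect = defect                                # qualified
```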
Getting A Proper Grip
Models are neither true nor false and can only be assessed for consistency and effectiveness.
Hence, assuming that (1) systems are meant to handle surrogates of business objects and processes and, (2) those surrogates are designed from models, it ensues that (3) a litmus test of model effectiveness would be the grip it provides on relevant objects and processes.
And that can only be achieved by pinning models to concrete and identified business objects and processes. That provides a template for modeling grips: concrete descriptions with primary identification in the middle, abstract ones above, aspects or concrete descriptions with inherited identification below.
Engineering is all about purposeful and effective design; Art will know nothing of that. Yet, software being made of symbolic constructs, creativity is undoubtedly a core constituent of its design, and beauty a sure sign of its quality.
Engineering is about the making of artifacts, software engineering about the making of symbolic ones. Assuming (for the sake of the argument…) a complete and consistent set of requirements with a given lifespan, engineering processes mix reuse of existing artifacts or procedures with the creation of new ones, making room for innovation and creativity.
With regard to software engineering the cases and economics of reuse can be judged according to reasoned arguments, but what about innovation?
Innovation vs Invention
As far as engineering is concerned, innovation is bounded by the achievements of science and technology:
Science builds symbolic representations (aka models) of the world as well as the artifacts needed to check their validity.
Technology considers how to build artifacts, possibly (but not necessarily) based upon scientific models.
Engineering considers how to efficiently build artifacts under given economic conditions.
Engineering of artifacts usually encompasses two basic phases, design and production, the outcome of the former being a set of descriptions, the outcome of the latter batches of products. In that context, innovation may apply to new descriptions (e.g the design of a new weapon or a new recipe), or new modus operandi (e.g the sequencing of tasks).
As illustrated by cooking recipes, designs and modus operandi (MO) are not necessarily supported by symbolic descriptions like blueprints or flowcharts; as it happens, that difference may mark the boundary between craft and engineering, the former based on individual skills and personal transmission, the latter a collective endeavor supported by disembodied documentation.
That distinction may also be used to distinguish between innovation and invention, the former possibly (but not necessarily) a matter of craft passed on through imitation, the latter a collective achievement necessarily supported by symbolic descriptions.
Circle, Wheel, Loops
Invention is not easily defined, as demonstrated by the different ways intellectual property rights, and especially patents, are managed across countries. To begin with, two principles are widely agreed upon:
Patents are formal descriptions and therefore belong to the realm of symbolic representations.
Patents are not rights to use what is described but rights to exclude others from using those descriptions.
Then, policies differ about patentable subject matter, especially with regard to tangible artifacts (a) as opposed to intangibles like business (b) and computation (c) methods. In other words, consensus breaks down when symbolic descriptions without actual counterparts are considered, which is the case for software.
Even if engineering is circumscribed to the making of artifacts, and business methods are excluded, the problem remains for computation methods when incorporated as software artifacts. In that case symbolic descriptions and actual outcomes are merged into source programs and the distinction between design and production is rubbed out. As a corollary, invention, which was previously associated with the design of actual artifacts, is now deprived of actual context and can only be assessed within its symbolic realm.
But dematerialized inventions are not patent matter, as can be seen with loops: on one hand, with trustworthy predefined language constructs widely available, it doesn’t make sense to “invent” new ways to program iterations; on the other hand patenting those constructs would force designers into squaring circles. More generally, patenting business or computation iterations as inventions would cast a shadow on an uncountable set of activities.
The answer to this conundrum is the distinction between invention and problem solving, the former to be dealt with by engineering, the latter by design.
Engineering & Design
As far as software engineering is concerned, creativity may target two kinds of problems: enterprise organization and applications development.
Given business processes set by objectives and business rules, the problem is to describe how they will be supported by systems functionalities. Since solutions include the definition of features and use of actual devices they can be seen as patentable subject matter.
That’s not the case when systems functionalities are given and the problem is to design the system components that will support them. Solutions will only deal with the design of software components without affecting external features or the use of actual ones.
In both cases, creativity can play in two directions: finding specific solutions to specific problems within the given architectural context; or improving functional or technical architectures. While specific solutions, supposedly not reusable, cannot be considered as inventions, that’s not the case for innovations targeting organizations or systems architectures. Yet, since architectural constructs can also be regarded as solutions to problems, innovations might therefore target architecture capabilities as well as their use.
At enterprise level problems are set by business contexts and objectives, and solutions based upon enterprise capabilities: who, what, how, where, when. Innovation can only be assessed in terms of business value.
At system level problems are set by organization and solutions supported by systems functionalities: access, persistency, computation, communication, synchronization, etc. New applications can be supported by existing functional architecture or entail innovations which should be assessed in terms of functional metrics and Quality of Service.
At component level problems are set by functional requirements and solutions implemented by components design: server, boundaries, storage, middleware, embedded units, etc. New developments can be realized using existing components, or entail changes in technical architecture which should be assessed in terms of total cost of acquisition and ownership.
At each level, creativity must remain a balancing act between architectural constraints and innovative solutions.
As long as information was just data files and systems just computers, the respective roles of enterprise and IT architectures could be overlooked; but that has become a hazardous neglect with distributed systems pervading every corner of enterprises, monitoring every motion, and sending axons and dendrites all around to colonize environments.
Yet, the overlapping of enterprise and systems footprints doesn’t mean they should be merged; as a matter of fact, it’s the opposite. When the divide between business and technology concerns was clearly set, casual governance was of no consequence; now that turfs are more and more entwined, dedicated policies are required lest decisions be blurred and entropy escalate as a consequence. The need for better focus is best illustrated by the sundry meanings given to “system”, from computers running software to enterprises running businesses, and even schools of thought.
Concerns in Perspectives
As far as enterprises are concerned, systems combine human beings, devices, and software components.
From a functional perspective, their capabilities are best defined by their interactions with their environment as well as between their constituents:
Users are supposed to be actual agents granted organizational status and responsibilities, and possessing symbolic communication capabilities.
Software components and actual devices cannot be granted organizational status or responsibilities; the former come with symbolic communication capabilities, the latter without.
Assuming that interactions are governed by information, the objective is to understand the contribution of each type of component with regard to information processing and decision-making.
From an engineering perspective, the building of systems is all too often reduced to the development of their software constituents. As a consequence, the complexity of the forest is masked by the singularity of the trees:
The business value of applications is assessed locally instead of being driven by enterprise objectives and organizational constructs.
System models are confused with the programs used to produce software components.
Since engineering agendas are supposed to support business objectives, their decision-making processes must be aligned yet managed independently. That reasoning also applies to services management, whose role is to adjust resources and software releases to operational needs.
Enterprise architectures can then be described as a cross between architecture assets (business objects, organization, technical architecture) on one hand, core processes for business, engineering and services management on the other hand.
Information is to provide the glue between architecture assets and supported processes.
Information and Architectures Levels
As understood by Cybernetics (see Stafford Beer, “Diagnosing the System for Organizations”), enterprises are viable systems whose success depends on their capacity to countermand entropy, i.e the progressive downgrading of the information used to govern interactions between systems and their environment. And that puts knowledge management at the core of systems capabilities.
Knowledge is best defined as information put to use, with information obtained by adding references and sense to data. That blueprint is supposed to be repeated at all architecture levels:
At enterprise level facts pertaining to the conduct of business are captured from environments before being organized into information meant to support enterprise governance.
System level deals with symbolic descriptions of functionalities (information). At this level data is irrelevant, the objective being the consolidation and reuse of shared representations and patterns (knowledge).
The technical level is in charge of operational governance: software components, platforms capabilities, technical resources, communication mechanisms, etc. On one hand contexts are to be monitored and data translated into information; on the other hand operational decisions have to be made (knowledge) and executed, which means information translated into data.
If architecture levels can be characterized by processing capabilities, their engineering must be aligned to the corresponding objectives and time frames.
System Engineering and Separation of Concerns
As demonstrated time and again by blame games around project failures, enterprise and technical concerns are poor engineering bedfellows, and with good reason: different contexts, concerns, skills, and time scales. Faced with the challenge of bringing enterprise and development perspectives under a single governance, engineering approaches generally follow one of two basic options:
Phased projects (P) give precedence to software development, with requirements supposed to be set at inception and acceptance performed at completion.
Agile projects (A) give precedence to business requirements, with development iterations combining specifications, programming, and acceptance.
While each approach has its merits, agile for complex but well circumscribed projects, phased for large projects with external organizational dependencies, the difficulties each may encounter point to the crux of the matter, namely the separation of concerns between business goals and information technology, bypassed by agile approaches and misplaced by phased ones:
Agile development models are driven by users’ value and based on the implicit assumption that business and technology concerns can be dealt with continuously and simultaneously. That may be difficult when external dependencies cannot be avoided and shared ownership cannot be guaranteed.
Phased development models take a mirror position by assuming that business concerns can be settled upfront, which is clearly a very hazardous policy.
Those pitfalls may be overcome if engineering processes take into account the distinction between knowledge management on one hand, software development on the other hand.
Knowledge management (KM) encompasses all information pertaining to the conduct of business: environment, markets, objectives, organization, and projects. With regard to engineering projects, its role is to define and consolidate the descriptions of symbolic representations to be supported by information systems.
Software development (SD) starts with symbolic descriptions and proceeds with the definition, building and acceptance of the corresponding software artifacts.
Service management (SM) provides the bridge between engineering and operational processes.
Capture of information from data and legacy code can also be achieved respectively by data mining and reverse engineering.
Depending on organizational or technical dependencies, knowledge management and software development will be carried out within integrated development cycles (agile processes), or will have to be phased in order to consolidate the different concerns.
That engineering distinction neatly coincides with the functional divide of architectures: knowledge management supporting enterprise architecture, software development supporting system architecture.
Architectures and Engineering Processes
Business processes are governed by collective representations mixing goals, models, and rules, not necessarily formally defined, and subject to change with opportunities. Engineering processes for their part have to be materialized through work units, models, and products, all of them explicitly defined, with limited room for change, and set along constraining schedules. Given that systems’ fate hangs on the hinge between business and engineering concerns, the corresponding perspectives must be properly aligned.
That can be achieved if workshops are associated with architecture levels, and work units defined according to model layers and development flows:
Knowledge management takes charge of symbolic descriptions pertaining to enterprise concerns. Its objective is to map business and operational requirements with the functionalities of supporting systems. Expressed in MDA parlance, the former would be described by computation independent models (CIMs), the latter by platform independent models (PIMs).
Software development deals with software artifacts. That encompasses the consolidation of symbolic descriptions into functional architectures, the design of software components according to platforms specificity (PSMs), and the production of code according to deployment targets (DPMs).
IT Service Management is the counterpart of knowledge management for actual operations and resources. Its objective is to synchronize business and development time-frames and align operational requirements with releases and resources.
That congruence between architecture divides (enterprise, systems, technology), models layers (CIMs, PIMs, PSMs, DPMs), and engineering concerns (facts or legacy, information, knowledge), provides a reasoned and comprehensive framework for enterprise architectures.
Architecture Capabilities & Separation of Concerns
Architectures describe stocks of shared assets, processes describe flows of changes. Given a hierarchized description of architectures, the objective is to ensure the traceability of concerns and decisions across levels and processes.
Who: enterprise roles, system users, platform entry points.
What: business objects, symbolic representations, objects implementation.
How: business logic, system applications, software components.
When: processes synchronization, communication architecture, communication mechanisms.
Where: business sites, systems locations, platform resources.
The rationale of objectives and decisions (the “Why” of the Zachman framework) is expressed by dependency links according to the nature of primary factors:
Deontic dependencies are set by external factors, e.g regulatory context, communication technologies, or legacy systems.
Alethic (aka modal) dependencies are set by internal policies, based on the assessment of options regarding objectives as well as solutions.
This distinction between dependencies is critical for decision-making and consequently for enterprise governance. Set within time-frames and decorated with time related features, those dependencies can then be consolidated into differentiated strategies for business, engineering and operational processes.
Given that dependencies are usually interwoven, governance must be aligned with the footprints of associated decisions:
Assets: shared decisions affecting different business processes (organization), applications (services), or platforms (purchased software packages).
Users’ Value: streamlined decisions governed by well identified business units providing for straight dependencies from enterprise (business requirements), to systems (functional requirements) and platforms (users’ entry points).
Non-functional: shared decisions about scale and performance affecting users’ experience (organization), engineering (technical requirements), or resources (operational requirements).
That classification can be used to choose a development model:
Agile approaches should be the option of choice for requirements neatly anchored to users’ value.
Phased approaches should be preferred for projects targeting shared assets across architecture levels. When shared assets are within the same level, epic-like variants of agile may also be considered.
Architectures, Processes, and Governance
While there is some consensus about the scope and concerns of Enterprise Architecture as a discipline, some debate remains about the relationship between enterprise and IT governance.
Beyond turf quarrels, arguments are essentially rooted in the distinction between information and supporting systems or, more generally, between organization and IT.
To some extent, those arguments could be ironed out if governance were set with regard to scope, actual or symbolic:
Actual scope deals with current or planned business objects, assets, and processes.
Symbolic scope deals with the design of the corresponding software components.
That distinction matches the divide between enterprise and systems architectures: one set of models deals with enterprise objectives, assets, and organization, the other one deals with system components.
With regard to processes, governance should distinguish between business and engineering:
Business processes are clearly designed at enterprise level, both for their organization and the symbolic description of system representations.
Software engineering ones are under the responsibility of IT governance, from system and software architecture to release and deployment.
While governance could be shared, there is no reason to assume an immediate and continuous alignment of perspectives; that can only be achieved through the mediation of architectures:
Actual architectures encompass business organization (locations, agents, devices) and systems deployments.
Symbolic architectures include systems functionalities and software development.
And, as a mirror achievement, processes take charge of transitions between actual and symbolic architectures:
Symbolic architectures provide the mediation between business requirements and software development (As).
Physical architectures do the same between software development and services management (Ap).
Business processes take responsibility for the mapping of enterprise architecture into symbolic representations (Pb).
Engineering processes do the same for the alignment of software and deployment architectures (Pe).
Enterprise and IT governance can then be defined in terms of Knowledge management, with engineering and business processes gathering data from their respective realms, sharing the information through architectures, and putting it to use according to their respective concerns.
As illustrated by the Symbolic Systems Program (SSP) at Stanford University, advances in computing and communication technologies bring information and knowledge systems under a single functional roof, namely the processing of symbolic representations.
Within that understanding one will expect Knowledge Management to shadow systems architectures and concerns: business contexts and objectives, enterprise organization and operations, systems functionalities and technologies. On the other hand, knowledge being by nature a shared resource of reusable assets, its organization should support the needs of its different users independently of the origin and nature of information. Knowledge Management should therefore bind knowledge of architectures with architecture of knowledge.
In their pivotal article Davis, Shrobe, and Szolovits set five principles for knowledge representation:
Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
Fragmentary theory of intelligent reasoning: a KR is a model of what the things can do or can be done with.
Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
Medium for human expression: one of the KR prerequisites is to improve communication between specific domain experts on one hand, and generic knowledge managers on the other hand.
That puts information systems as a special case of knowledge ones, as they fulfill the five principles, yet with a functional qualification:
The only difference is about coupling: contrary to knowledge systems, information and control ones play a role in their context, and operations on surrogates are not neutral.
Knowledge constructs are empty boxes that must be properly filled with facts. But facts are not given: they must be observed, which necessarily entails some observer, set on task if not with vested interests, and some apparatus, natural or made on purpose. And if they are to be recorded, even “pure” facts observed through the naked eyes of innocent children will have to be translated into some symbolic representation. Taking wind as an example, wind socks support immediate observation of facts, free of any symbolic meaning. In order to make sense of their behaviors, vanes and anemometers are necessary, respectively for azimuth and speed; but that also requires symbolic frameworks for directions and metrics. Finally, knowledge about the risks of strong winds can be added when such risks must be considered.
As far as enterprises are concerned, knowledge boxes are to be filled with facts about their business context and processes, organization and applications, and technical platforms. Some of them will be produced internally, others obtained from external sources, but all should be managed independently of specific purposes. Whatever their nature (business, organization or systems), information produced by the enterprises themselves is, from inception, ready to use, i.e organized around identified objects or processes, with defined structures and semantics. That’s not necessarily the case with data reflecting external contexts (markets, regulations, technology, etc) which must be mapped to enterprise concerns and objectives before being of any use. That translation of data into information may be done immediately by mapping data semantics to identified objects and processes; it may also be delayed, with rough data managed as such until being used at a later stage to build information.
From Data to Information
Information is meaningful, data is not. Even “facts” are not manna from heaven but have to be shaped from phenomena into data and then information, as epitomized by binary, fragmented, or “big” data.
Binary data are direct recording of physical phenomena, e.g sounds or images; even when indexed with key words they remain useless until associated, as non symbolic features, to identified objects or activities.
Contrary to binary data, fragmented data comes in symbolic guise, but as floating nuggets with sub-level granularity; and like their binary cousin, those fine-grained descriptions are meaningless until attached to identified objects or activities.
“Big” data is usually understood in terms of scalability, as it refers to lumps too large to be processed individually. It can also be defined as a generalization of fragmented data, with identified targets regrouped into more meaningful aggregates, moving the targeted granularity up the scale to some “overwhelming” level.
Since knowledge can only be built from symbolic descriptions, data must be first translated into information made of identified and structured units with associated semantics. Faced with “rough” (aka unprocessed) data, knowledge managers can choose between two policies: information can be “mined” from data using statistical means, or the information stage simply bypassed and data directly used (aka interpreted) by “knowledgeable” agents according to their context and concerns.
As a matter of fact, both policies rely on knowledgeable agents, the question being who the “miners” are and what they should know. Theoretically, miners could be fully automated tools able to extract patterns of relevant information from rough data without any prior information; practically, such tools will have to be fed with some prior “intelligence” regarding what should be looked for, e.g samples for neuronal networks, or variables for statistical regression. Hence the need for formats, blueprints, or templates that will help frame rough data into information.
Knowledge must be built from accurate and up-to-date information regarding external and internal state of affairs, and for that purpose information items must be managed according to their source, nature, life-cycle, and relevancy:
Source: Government and administrations, NGO, corporate media, social media, enterprises, systems, etc.
Nature: events, decisions, data, opinions, assessments, etc.
Type of anchor: individual, institution, time, space, etc.
Life-cycle: instant, time-related, final.
Relevancy: traceability with regards to business objectives, business operations, organization and systems management.
On that basis, knowledge management will have to map knowledge to its information footprint in terms of reliability (source, accuracy, consistency, obsolescence, etc) and risks.
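As a sketch of how such information items could be tagged for reliability management (field names and values are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InformationItem:
    content: str
    source: str        # e.g. "corporate media", "enterprise system"
    nature: str        # e.g. "event", "decision", "assessment"
    anchor: str        # e.g. "individual", "institution", "time", "space"
    lifecycle: str     # "instant", "time-related", or "final"
    relevancy: str     # traceability to objectives, operations, or systems
    recorded: date

    def is_obsolete(self, horizon_days: int, today: date) -> bool:
        """Crude reliability check: time-related items age out."""
        return (self.lifecycle == "time-related"
                and (today - self.recorded).days > horizon_days)
```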
From Information to Knowledge
Information is meaningful, knowledge is also useful. As with information models, knowledge representations must first be anchored to persistency and execution units in order to support the consistency and continuity of surrogate identities (principle #1). Those anchors are to be assigned to domains managed by single organizational units in charge of ontological commitments, and enriched with structures, features, and associations (principle #2). Depending on their scope, structure or feature, semantics are to be managed respectively by persistent or application domains. Likewise, ontologies may target objects or aspects, the former associated with structural sub-types, the latter with functional ones.

Differences between information models and knowledge representations appear with rules and constraints. While the objective of information and control systems is to manage business objects and activities, the purpose of knowledge systems is to manage symbolic contents independently of their actual counterparts (principle #3). Standard rules used in system modelling describe allowed operations on objects, activities, and associated information; they can be expressed forward or backward:
Forward (aka push) rules are conditions on when and how operations are to be performed.
Backward (aka pull) rules are constraints on the consistency of symbolic representations or on the execution of operations.
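A sketch contrasting the two kinds of rules; the hotel-flavored Room and its statuses are illustrative assumptions:

```python
class Room:
    def __init__(self, number: int):
        self.number = number
        self.status = "clean"

def on_checkout(room: Room) -> None:
    """Forward (push) rule: a condition on when and how an operation
    is to be performed."""
    if room.status == "occupied":
        room.status = "dirty"       # checkout triggers the cleaning workflow

def assign_guest(room: Room) -> None:
    """Backward (pull) rule: a constraint checked before executing the
    operation or committing the symbolic representation."""
    assert room.status == "clean", "only clean rooms can be assigned"
    room.status = "occupied"
```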
Assuming a continuity between information and knowledge representations, the inflection point would be marked by the introduction of modalities used to qualify truth values, e.g according to temporal and fuzzy logic:
Temporal extensions will put time stamps on truth values of information.
Fuzzy logic puts confidence levels on truth values of information.
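A sketch of a qualified truth value: a plain information system would keep only the boolean; the temporal extension adds a time stamp and the fuzzy one a confidence level (names are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class QualifiedTruth:
    value: bool
    as_of: datetime          # temporal extension: when the value held
    confidence: float = 1.0  # fuzzy extension: degree of belief in [0, 1]

def still_reliable(t: QualifiedTruth, now: datetime,
                   max_age_s: float, min_conf: float) -> bool:
    """Usability depends on both obsolescence and confidence."""
    age = (now - t.as_of).total_seconds()
    return age <= max_age_s and t.confidence >= min_conf
```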
That is where knowledge systems depart from information and control ones as they introduce a new theory of intelligent reasoning, one based upon the fluidity and volatility of knowledge.
Meanings are in the Hands of Beholders
Seen in a corporate context, knowledge can be understood as information framed by contexts and driven by purposes: how to run a business, how to develop applications, how to manage systems. Hence the dual perspective: on one hand information is governed by enterprise concerns, systems functionalities, and platforms technology; on the other hand knowledge is driven by business processes, systems engineering, and services management.
That provides a clear and comprehensive taxonomy of artifacts, to be used to build knowledge from lower layers of information and data:
Business analysts have to know about business domains and activities, organization and applications, and quality of service.
System engineers have to know about projects, systems functionalities and platform implementations.
System managers have to know about locations and operations, services, and platform deployments.
The dual perspective also points to the dynamics of knowledge, with information being pushed by its sources, and knowledge being pulled by its users.
A Time for Every Purpose
As understood by Cybernetics, enterprises are viable systems whose success depends on their capacity to countermand entropy, i.e the progressive downgrading of the information used to govern interactions both within the organization itself and with its environment. Compared to architecture knowledge, which is organized according to information contents, knowledge architecture is organized according to functional concerns and information lifespan, and its objective is to keep internal and external information in synch:
Planning of business objectives and requirements (internal) relative to markets evolutions and opportunities (external).
Assessment of organizational units and procedures (internal) in line with regulatory and contractual environments (external).
Monitoring of operations and projects (internal) together with sales and supply chains (external).
That puts meanings (that would be knowledge) in the hands of decision makers, respectively for corporate strategy, organization, and operations. Moreover, enterprises being living entities, lifespan and functional sustainability are meant to coalesce into consistent and homogenous layers:
Enterprise (aka business, aka strategic) time-scales are defined by environments, objectives, and investment decisions.
Organization (aka functional) time-scales are set by the availability, versatility, and adaptability of resources.
Operational time-scales are determined by process features and constraints.
Such a congruence of time-scales, architectures and purposes into Shearing Layers is arguably a key success factor of Knowledge management.
Search and Stretch
As already noted, knowledge is driven by purposes, and purposes, not being confined to domains or preserves, are bound to stretch knowledge across business contexts and organizational boundaries. That can be achieved through search, logic, and classification.
Searches collect the information relevant to users’ concerns (1). That may satisfy all the knowledge needs, or provide a backbone for further extension.
Searches can be combined with ontologies (aka classifications) that put the same information under new lights (1b).
Truth-preserving operations using mathematics or formal languages can be applied to produce derived information (2).
Finally, new information with reduced confidence levels can be produced through statistical processing (3,4).
For instance, observed traffic at toll roads (1) is used for accounting purposes (2), to forecast traffic evolution (3), to analyze seasonal trends (1b) and simulate seasonal and variable tolls (4).
Those operations entail clear consequences for knowledge management:
As far as computational distances don’t affect confidence levels, truth-preserving operations are neutral with regard to KM.
Classifications are symbolic tools designed on purpose; as a consequence, all knowledge associated with a classification should remain under the responsibility of its designer.
Challenges arise when confidence levels are affected, either directly or through obsolescence. And since decision-making is essentially about risk management, dealing with partial or unreliable information cannot be avoided. Hence the importance of managing knowledge along shearing layers, each with its own information life-cycle, confidence requirements, and decision-making rules.
From Knowledge Architecture to Architecture Capability
Knowledge architecture is the corporate central nervous system, and as such it plays a primary role in the support of operational and managerial processes. That point is partially addressed by frameworks like Zachman’s, whose matrix organizes Information System Architecture (ISA) along capabilities and design levels. Yet, as illustrated by the design levels, the focus remains on information technology without explicitly addressing the distinction between enterprise, systems, and platforms.
That distinction is pivotal because it governs the distinction between the corresponding processes, namely business processes, systems engineering, and services management. And once the distinction is properly established, knowledge architecture can be aligned with processes assessment.
Yet that will not be enough now that digital environments are invading enterprise systems, blurring the distinction between managed information assets and the continuous flows of big data.
That puts the focus on two structural flaws of enterprise architectures:
The confusion between data, information, and knowledge.
The intrinsic discrepancy between systems and knowledge architectures.
Both can be overcome by merging system and knowledge architectures applying the Pagoda blueprint:
The alignment of platforms, systems functionalities, and enterprise organization respectively with data (environments), information (symbolic representations), and knowledge (business intelligence) would greatly enhance the traceability of transformations induced by the immersion of enterprises in digital environments.
Knowledge Representation & Profiled Ontologies
Faced with digital business environments, enterprises must sort relevant and accurate information out of continuous and massive inflows of data. As modeling methods cannot cope with the open range of contexts, concerns, semantics, and formats, looser schemes are needed; that’s precisely what ontologies are meant to provide:
Thesaurus: ontologies covering terms and concepts.
Documents: ontologies covering documents with regard to topics.
Business: ontologies of relevant enterprise organization and business objects and activities.
Engineering: ontologies of the symbolic representations of organization and business objects and activities.
Ontologies: Purposes & Targets
Profiled ontologies can then be designed by combining that taxonomy of concerns with contexts, e.g:
Institutional: Regulatory authority, steady, changes subject to established procedures.
Professional: Agreed upon between parties, steady, changes subject to accords.
Corporate: Defined by enterprises, changes subject to internal decision-making.
Social: Defined by usage, volatile, continuous and informal changes.
Personal: Customary, defined by named individuals (e.g research paper).
Last but not least, external (regulatory, businesses, …) and internal (i.e enterprise architecture) ontologies could be integrated, for instance with the Zachman framework:
Ontologies, capabilities (Who,What,How, Where, When), and architectures (enterprise, systems, platforms).
Using profiled ontologies to manage enterprise architecture and corporate knowledge will help to align knowledge management with EA governance by setting apart ontologies defined externally (e.g regulations) from the ones set through decision-making, strategic (e.g platform) or tactical (e.g partnerships).
Set between the all-inclusive onslaught of data on one side and pervasive smart bots on the other, information systems could lose their identity and purpose. And there is a good reason for that, namely the confusion between data, information, and knowledge.
As it happened aeons ago, ontologies have been explicitly thought up to deal with that issue.
Reuse of development artifacts can come by design or as an afterthought. While in the latter case artifacts may have been originally devised for specific contexts and purposes, in the former case they would have been originated by shared concerns and designed according to architectural constraints and mechanisms.
Architectures for their part are about stable and sound assets and mechanisms meant to support activities which, by nature, must be adaptable to changing concerns. That is precisely what reusable assets should be looking for, and that may clarify the rationale supporting models and languages:
Why models: to describe shared (i.e reused) artifacts along development processes.
Why non specific languages: to support the sharing of models across business domains and organizational units.
Why model layers: to manage reusable development assets according to architectural concerns.
Reuse Perspective: Business Domains vs Development Artifacts
Domain models describe business objects and processes independently of the way they are supported by systems.
Development models describe how to design and implement system components.
As illustrated by agile methods and domain specific languages, that distinction can be ignored when applications are self-contained and projects ownership is shared. In that case reusable assets are managed along business domains, functional architectures are masked, and technical ones are managed by development tools.
Otherwise, reusable assets would be meaningless, even counterproductive, without being associated with clearly defined objectives:
Domain models and business processes are meant to deal with business objectives, for instance, how to assess insurance premiums or compute missile trajectory.
System functionalities lend a hand in solving business problems. Use cases are widely used to describe how systems are to support business processes, and system functionalities are combined to realize use cases.
Platform components provide technical solutions as they achieve targeted functionalities for different users, within distributed locations, under economic constraints on performances and resources.
Whatever the basis, design or afterthought, reusing an artifact comes as a solution to a specific problem: how to support business requirements, how to specify system functionalities, how to implement system components. Describing problems and solutions along architecture layers should therefore be the backbone of reusable assets management.
Computation independent models (CIMs) describe business objects and processes independently of the way they are supported by system functionalities. Contents are business specific and can be reused when functional architectures are modified (a). Business specific contents (e.g business rules) can also be reused when changes do not affect functional architectures and may therefore be directly applied to platform specific models (c).
Platform independent models (PIMs) describe system functionalities independently of supporting platforms. They are reused to design new supporting platforms (b).
Platform specific models (PSMs) describe software components. They are used to implement software components to be deployed on platforms (d).
Not by chance, invariants within model layers can also be associated with corresponding architectures:
Enterprise architecture (as described by CIMs) deals with objectives, assets and organization associated with the continuity of corporate identity and business capabilities within a regulatory and market environment.
Functional architecture (as described by PIMs) deals with the continuity of systems functionalities and mechanisms supporting business processes.
Technical architecture (as described by PSMs) deals with the feasibility, interoperability, efficiency and economics of systems operations.
That makes architecture invariants the candidates of choice for reusable assets.
Enterprise Architecture Assets
Systems context and purposes are set by enterprise architecture. From an engineering perspective reusable assets (aka knowledge) must include domains, business objects, activities, and processes.
Domains are used to describe the format and semantics of elementary features independently of objects and activities.
Business objects’ identity and consistency must be maintained over time, independently of supporting systems. That’s not the case for features and rules, which can be modified or extended.
Activities (and associated roles) describe how business objects are to be processed. Semantics and records have to be maintained over time, but details of operations can change.
Business processes and events describe how activities are performed.
As far as enterprise architecture is concerned, structure and semantics of reusable assets should be described independently of system modeling methods.
Structures can be unambiguously described with standard connectors for composition, aggregation and reference, and variants by subsets and power-types, both for static and dynamic partitions.
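A sketch of the standard connectors, with hotel-flavored names as illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Address:                 # defined in a semantic domain, reused by
    city: str                  # reference from independent objects

@dataclass
class RoomFeature:             # component with no identity of its own
    label: str

@dataclass
class Room:
    number: int
    # composition: features are created and deleted with the room
    features: list[RoomFeature] = field(default_factory=list)

@dataclass
class Hotel:
    name: str
    address: Address           # reference: the address exists independently
    # aggregation: rooms keep their own identity within the whole
    rooms: list[Room] = field(default_factory=list)
```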
With regard to business activities, semantics are set by targets:
Processing of physical objects.
Processing of notional objects.
Processing of events.
Control of processes execution.
Regarding business objects, semantics are set by what is represented:
State of physical objects.
State of notional object.
History of roles.
Enterprise assets are managed according to identification, structure, and semantics, as defined along a business perspective. When reused as development artifacts, the same attributes will have to be mapped to an engineering perspective.
Use Cases: A bridge between Enterprise and System Architectures
Systems are supposed to support the continuity and consistency of business processes independently of platforms technologies. For that purpose two conditions must be fulfilled:
Identification continuity of business domains: objects identities are kept in sync with their system representations all along their life-cycle, independently of changes in business processes.
Semantic continuity of functional architectures: the history of system representations can be traced back to associated business operations.
Hence, it is first necessary to anchor requirements objects and activities to persistency and functional execution units.
Once identities and semantics are properly secured, requirements can be analyzed along standard architecture levels: boundaries (transient objects, local execution), controls (transient objects, shared execution), entities (persistent objects, shared execution).
The main objective at this stage is to identify shared functionalities whose specification should be factored out as candidates for reuse. Three criteria are to be considered:
System boundaries: no reusable assets can stand across systems boundaries. For instance, were billing outsourced, the corresponding activity would have to be hidden behind a role.
Architecture level: no reusable assets can stand across architecture levels. For instance, the shared operations for staff interface will have to be regrouped at boundary level.
Coupling: no reusable asset can support different synchronization constraints. For instance, checking in and out are bound to external events while room updates and billing are not.
It’s worth noting that the objectives of requirements analysis do not depend on the specifics of projects or methods:
Requirements are to be anchored to objects identities and activities semantics either through use cases or directly.
Functionalities are to be consolidated either within new requirements and/or with existing applications.
The Cases for Reuse
As noted above, models and non specific languages are pivotal when new requirements are to be fully or partially supported by existing system functionalities. That may be done by simple reuse of current assets or may call for the consolidation of existing and new artifacts. In any case, reusable assets must be managed along system boundaries, architecture levels, and execution coupling.
For instance, a Clean Room use case goes as follows: the cleaning staff manages a list of rooms to clean, checks details for status, cleans the room (not supported), and updates lists and room status.
Its realization entails different kinds of reuse:
Existing persistency functionality, new business feature: provided a cleaning status is added to the Room entity, Check details can be reused directly (a).
Consolidated control functionality and delegation: a generic list manager could be applied to customers and rooms and used by cleaning and reservation use cases (b).
Specialized boundary functionality: staff interfaces can be composed of a mandatory header with optional panels respectively for check I/O and cleaning (c).
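For the consolidated control functionality (b), a minimal sketch of a generic list manager reused by both cleaning and reservation use cases (names are illustrative assumptions):

```python
class ListManager:
    """Generic control functionality shared by cleaning and reservation."""
    def __init__(self, items: list[str]):
        self._pending = list(items)
    def next_item(self) -> str | None:
        return self._pending[0] if self._pending else None
    def mark_done(self, item: str) -> None:
        self._pending.remove(item)

# The cleaning use case delegates list handling instead of re-implementing it.
rooms_to_clean = ListManager(["101", "102"])
room = rooms_to_clean.next_item()       # check details, clean the room
if room is not None:
    rooms_to_clean.mark_done(room)      # update lists and room status
```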
Reuse and Functional Architecture
Once business requirements are taken into account, the problem is how to reuse existing system functionalities to support new functional requirements. Beyond the various approaches and terminologies, there is a broad consensus about the three basic functional levels, usually labelled as model, view, controller (aka MVC):
Model: shared, with a life-cycle independent of business processes. The continuity and consistency of business objects’ representations must be guaranteed independently of the applications using them.
Control: shared with a life-cycle set by a business process. The continuity and consistency of representations is managed independently of the persistency of business objects and interactions with external agents or devices.
View: what is not shared, with a life-cycle set by a user session. The continuity and consistency of representations is managed locally (interactions with external agents or devices), independently of targeted applications.
Service: what is shared with no life-cycle.
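A sketch of the four levels, using a reservation flavor as an illustrative assumption:

```python
class RoomModel:                      # model: shared, life-cycle independent
    def __init__(self):               # of business processes
        self.rooms = {101: "clean"}

class ReservationControl:             # control: shared, life-cycle set by
    def __init__(self, model: RoomModel):          # a business process
        self.model, self.holds = model, []
    def hold(self, number: int) -> None:
        if self.model.rooms.get(number) == "clean":
            self.holds.append(number)

class ReservationView:                # view: local, life-cycle set by
    def __init__(self, control: ReservationControl):   # a user session
        self.control = control
    def click_hold(self, number: int) -> None:
        self.control.hold(number)

def price_of(number: int) -> float:   # service: shared, no life-cycle
    return 100.0                      # (stateless)
```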
Assuming that functional assets are managed along those levels, reuse can be achieved by domains, delegation, specialization, or generalization:
Semantic domains: shared features (addresses, prices, etc) should reuse descriptions set at business level.
Delegation: part of a new functionality (+) can be supported by an existing one (=).
Specialization: a new functionality is introduced as an extension (+) of an existing one (=).
Generalization: a new functionality is introduced (+) and consolidated with existing ones (~) by factoring out shared features (/).
It must be noted that while reuse by delegation operates at instance level and may directly affect coupling constraints on functional architectures, that’s not the case for specialization and generalization which are set at type level and whose impact can be dealt with by technical architectures.
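A sketch contrasting instance-level and type-level reuse (names are illustrative assumptions; (+) marks the new functionality and (=) the existing one, as in the list above):

```python
class PaymentService:                      # existing functionality (=)
    def charge(self, amount: float) -> None:
        print("charged", amount)

class Billing:                             # delegation: the new functionality
    def __init__(self, payments: PaymentService):   # (+) holds an instance-
        self.payments = payments                    # level link, a runtime
    def close_invoice(self, total: float) -> None:  # coupling the functional
        self.payments.charge(total)                 # architecture must support

class Reservation:                         # existing functionality (=)
    def book(self) -> None:
        print("booked")

class GroupReservation(Reservation):       # specialization: an extension (+)
    def book(self) -> None:                # set at type level, resolved by
        super().book()                     # the technical architecture
        print("rooms blocked for group")
```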
Reuse opportunities and pitfalls can be assessed against established object-oriented design principles:
Single-Responsibility Principle (SRP): software artifacts should have only one reason to change.
Open-Closed Principle (OCP): software artifacts should be open for extension, but closed for modification.
Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types. In other words a given set of instances must be equally mapped to types whatever the level of abstraction.
Dependency-Inversion principle (DIP): high level functionalities should not depend on low level ones. Both should depend on abstract interfaces.
Interface-Segregation Principle (ISP): client software artifacts should not be forced to depend on methods that they do not use.
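As an example, a sketch of the dependency-inversion principle: the high-level reservation logic depends on an abstract interface rather than on a concrete storage component (names are illustrative assumptions):

```python
from abc import ABC, abstractmethod

class ReservationStore(ABC):               # abstract interface
    @abstractmethod
    def save(self, reservation_id: str) -> None: ...

class SqlReservationStore(ReservationStore):        # low-level detail
    def save(self, reservation_id: str) -> None:
        print("INSERT reservation", reservation_id)

class ReservationProcess:                  # high-level functionality depends
    def __init__(self, store: ReservationStore):    # on the abstraction only
        self.store = store
    def confirm(self, reservation_id: str) -> None:
        self.store.save(reservation_id)
```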
Reuse by Delegation
Delegation should be considered when different responsibilities are mixed that could be set apart. That will clearly foster more cohesive responsibilities and may also bring about abstract (i.e functional) descriptions of low level (i.e technical) operations.
Reuse may be actual (the targeted asset is already defined) or forthcoming (the targeted asset has to be created). Service Oriented Architectures are the archetypal realization of reuse by delegation.
Since it operates at instance level, reuse by delegation may overlap functional layers and therefore introduce coupling constraints on data or control flows that could not be supported by targeted architectures.
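A minimal Java sketch of reuse by delegation, assuming the generic list manager mentioned earlier; the control classes are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Existing, generic asset (=): list management implemented once.
class ListManager<T> {
    private final List<T> items = new ArrayList<>();
    void add(T item) { items.add(item); }
    List<T> all() { return List.copyOf(items); }
}

// New functionality (+): delegates list management instead of reimplementing it.
class CleaningControl {
    private final ListManager<String> rooms = new ListManager<>();
    void schedule(String roomNumber) { rooms.add(roomNumber); }
}

// Another client of the same shared asset.
class BookingControl {
    private final ListManager<String> customers = new ListManager<>();
    void register(String customerId) { customers.add(customerId); }
}
```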
Reuse by Specialization
Specialization is to be considered when a subset of objects has some additional features. Assuming base functionalities are not affected, specialization fulfills the open-closed principle. And being introduced for a subset of the base population, it will also satisfy the Liskov substitution principle.
Reuse may be actual (a base type already exists) or forthcoming (base and subtype are created simultaneously).
Since it operates at type level, reuse by specialization is supposed to be dealt with by technical architectures. As a corollary, it should not overlap functional layers.
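A minimal Java sketch, with hypothetical names: the base functionality stays closed for modification while the subtype extends it for a subset of instances, preserving both principles:

```java
// Existing base (=): closed for modification.
class RoomCheck {
    String details(String roomNumber) { return "Room " + roomNumber; }
}

// Extension (+) for a subset of instances: base semantics are preserved (LSP),
// the base artifact is untouched (OCP).
class AuditedRoomCheck extends RoomCheck {
    @Override
    String details(String roomNumber) {
        return super.details(roomNumber) + " [audited]";
    }
}
```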
Reuse by Generalization
Generalization should be considered when different sets of objects share a subset of features. Contrary to delegation and specialization, it does affect existing functionalities and may therefore introduce adverse outcomes. While pitfalls may be avoided (or their consequences curbed) for boundary artifacts, whose execution is self-contained, that's more difficult for control and persistency ones, which are meant to support multiple executions within shared address spaces.
When artifacts are used to create transient objects run in self-contained contexts, generalization is straightforward, and the factoring out of shared features (a) will clearly further the reuse of artifacts.
Yet, through its side-effects, generalization may also undermine the design of the whole, for instance:
The open-closed principle may be at risk: when part of a given functionality is factored out, its original semantics may have to be modified in order to be reused by siblings. That would be the case if authorize() was to be modified for initial screen subtypes as a consequence of reusing the base screen for a new manager screen (b).
Reuse by generalization may also conflict with the single-responsibility and interface-segregation principles when a specialized functionality is made to reuse a base one designed for its new siblings. For instance, if the standard reservation screen is adjusted to make room for the manager screen, it may have to take into account methods specific to managers (c).
Those problems are compounded when reuse is applied to control and persistency artifacts. When a generic facility handler and the corresponding record are specialized for a new reservation targeting cars, both reuse instantiation mechanisms and methods supporting multiple executions within shared address spaces; that is not the case for generalization, as new roots for facility handler and reservation cannot be introduced without modifying the existing handler and recording of room reservations.
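The pitfall of case (b) can be sketched in Java; names and the authorize() logic are illustrative:

```java
// Factoring out a base screen for a new ManagerScreen forces changes on the
// existing siblings, putting the open-closed principle at risk.
abstract class BaseScreen {
    // Originally written for staff screens only; reusing it for managers
    // means modifying its semantics, which the existing subtypes (~) inherit.
    boolean authorize(String role) {
        return role.equals("staff") || role.equals("manager"); // modified (!)
    }
}

class StaffScreen extends BaseScreen { }    // existing sibling, now affected (~)
class ManagerScreen extends BaseScreen { }  // new functionality (+)
```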
Since reuse through abstraction is based on inheritance mechanisms, that’s where the cases for reuse are to be examined.
Reuse by Inheritance
As noted above, reuse by generalization may undermine the design of boundary, control, and persistency artifacts. While risks for boundaries are by nature local and limited to static descriptions, at the control and persistency layers they affect instantiation mechanisms and shared execution at system level. Those pitfalls can be circumscribed by a distinction between objects and aspects.
Object types describe sets of identified instances. In that case reuse by generalization means that the objects targeted by the new artifact must be identified and structured according to the base descriptions whose reuse is under consideration. From a programming perspective, object types will eventually be implemented as concrete classes.
Aspect types describe behaviors or functionalities independently of the objects supporting them. Reuse of aspects can be understood as inheritance or composition. From a programming perspective, they will eventually be implemented as interfaces or abstract classes.
Unfettered by programming language constraints, generalization can be given consistent and unambiguous semantics. As a consequence, reuse by generalization can be applied selectively to structures and aspects, with single inheritance for the former and multiple inheritance for the latter.
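That selective semantics maps directly onto Java, where object types become concrete classes (single inheritance) and aspect types become interfaces (multiple inheritance); the sketch below uses hypothetical names:

```java
interface Reservable {                      // aspect: behavior, no identity of its own
    void reserve(String byWhom);
}

interface Cleanable {                       // another aspect
    void clean();
}

class Facility {                            // object type: identified instances
    final String id;
    Facility(String id) { this.id = id; }
}

// Single inheritance for structure, multiple inheritance for aspects.
class GuestRoom extends Facility implements Reservable, Cleanable {
    GuestRoom(String id) { super(id); }
    public void reserve(String byWhom) { /* ... */ }
    public void clean() { /* ... */ }
}
```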
Not by chance, that distinction can be directly mapped to the taxonomy of design patterns proposed by the Gang of Four:
Creational designs deal with the instantiation of objects.
Structural designs deal with the building of structures.
Behavioral designs deal with the functionalities supported by objects.
Applied to boundary artifacts, the distinction broadly coincides with the one between main windows (e.g. Java Frames), which identify user sessions, and other graphical user interface components. For example, screens will be composed of a common header and specialized with components for managers and staff. Support for reservation or cleaning activities will be achieved by inheriting the corresponding aspects.
Freed from single-inheritance constraints, the granularity of functionalities can be set independently of structures. Combined with selective inheritance, that will directly benefit the open-closed, single-responsibility, and interface-segregation principles.
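A compact Java sketch of the boundary case, with assumed names: the main window identifies the user session (structure), while support for reservation or cleaning activities is inherited as aspects:

```java
interface ReservationPanel { void showReservations(); } // aspect
interface CleaningPanel { void showCleaningList(); }    // aspect

class SessionWindow {                       // common header, identifies the session
    final String sessionId;
    SessionWindow(String sessionId) { this.sessionId = sessionId; }
}

class ManagerWindow extends SessionWindow implements ReservationPanel, CleaningPanel {
    ManagerWindow(String sessionId) { super(sessionId); }
    public void showReservations() { /* ... */ }
    public void showCleaningList() { /* ... */ }
}

class StaffWindow extends SessionWindow implements CleaningPanel {
    StaffWindow(String sessionId) { super(sessionId); }
    public void showCleaningList() { /* ... */ }
}
```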
The distinction between identifying structures on one hand and aspects on the other hand is even more critical for artifacts supporting control functionalities, as they must guarantee multiple executions within shared address spaces. In other words, reuse of control artifacts should first and foremost be about managing identities and conflicting behaviors. That can best be achieved when instantiation, structures, and aspects are designed independently:
Whatever the targeted facility, a session must be created for, and identified by, each user request (#). Yet, since reservations cannot be processed independently, they must be managed under a single control (aka authority) within a single address space.
That’s not the case for the consultation of details which can therefore be supported by artifacts whose identification is not bound to sessions.
Extensions, e.g. for flights, will reuse creation and identification mechanisms along strong (binding) inheritance links; generalization will be safer as it focuses on clearly defined operations. Reuse of aspects will be managed separately, along weak (non-binding) inheritance links.
Reuse of control artifacts through selective inheritance may be especially useful with regard to the dependency-inversion principle, as it will facilitate the distinction between policy, mechanism, and utility layers.
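A hedged Java sketch of the control case, with hypothetical names: creation and identification are inherited along the class hierarchy (strong links), aspects along interfaces (weak links):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

interface DetailsConsultation {             // aspect: not bound to sessions
    String details(String facilityId);
}

abstract class ReservationSession {         // structure: identified per user request (#)
    final String sessionId = UUID.randomUUID().toString();
    // Single authority within a single address space.
    private static final ConcurrentHashMap<String, ReservationSession> registry =
            new ConcurrentHashMap<>();
    protected ReservationSession() { registry.put(sessionId, this); }
}

// Extension, e.g. for flights: reuses creation and identification as-is (strong
// link), and picks up the consultation aspect separately (weak link).
class FlightReservationSession extends ReservationSession implements DetailsConsultation {
    public String details(String facilityId) { return "flight " + facilityId; }
}
```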
Regarding artifacts supporting persistency, the main challenge is domain consistency, best addressed by the Liskov substitution principle. According to that principle, a given set of instances should be represented equivalently whatever the level of abstraction. For example, the same instances of facilities should be represented identically whether as such or according to their types. Clearly that will not be possible with overlapping subsets, as the number of instances will differ depending on the level of abstraction.
But taxonomies being business-driven, they usually overlap when the same objects are targeted by different business domains, as could be the case if reservations targeted transport and lodging services while facility providers managed actual resources with overlapping services. With selective inheritance it is possible to reuse aspects without contradicting the substitution principle.
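A minimal Java sketch of the overlapping case, with assumed names: a facility offering both lodging and transport services is represented by a single persisted instance, the overlapping taxonomies being reused as aspects rather than subtypes:

```java
interface Lodging { int beds(); }           // aspect from the lodging domain
interface Transport { int seats(); }        // aspect from the transport domain

class PersistedFacility {                   // one record per actual instance
    final String id;
    PersistedFacility(String id) { this.id = id; }
}

// Overlaps both domains without duplicating the instance, so the count of
// instances stays the same at every level of abstraction (LSP holds).
class SleeperTrain extends PersistedFacility implements Lodging, Transport {
    SleeperTrain(String id) { super(id); }
    public int beds() { return 40; }
    public int seats() { return 200; }
}
```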
Reuse across Functional Architecture Layers
Contrary to reuse by delegation, which relates to instances, reuse by abstraction relates to types and should not be applied across functional architecture layers, lest it break the separation of concerns. Hence the importance of the distinction between reuse of structures, which may impact identification, and reuse of aspects, which doesn't.
Given that reuse of development artifacts is to be governed along architecture levels (enterprise, system functionalities, platform technologies) on one hand, and functional layers (boundaries, controls, persistency) on the other hand, some principles must be set regarding eligible mechanisms.
Two mechanisms are available for type reuse across architecture levels:
Semantic domains are defined by enterprise architecture and can be directly reused by functionalities (see the sketch after this list).
Design patterns enable the transformation of functional assets into technical ones.
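The first mechanism can be sketched in Java with a hypothetical Price domain: defined once at enterprise level, the value type is reused as-is by any functionality:

```java
import java.math.BigDecimal;
import java.util.Currency;

// Semantic domain defined at enterprise level; functionalities reuse it directly
// instead of redefining prices locally.
record Price(BigDecimal amount, Currency currency) {
    Price {
        if (amount.signum() < 0) throw new IllegalArgumentException("negative price");
    }
}

// Usage: Price rate = new Price(new BigDecimal("100.00"), Currency.getInstance("EUR"));
```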
Otherwise reuse policies must follow functional layers:
Base entities are first anchored to business objects (1), with possible subsequent specialization (1b). Generalization must distinguish between structures and aspects, lest it break the continuity and consistency of representations.
Base controls are anchored to business activities and may reuse entities (2). They may be specialized (2b). Generalization must distinguish between structures and aspects, lest it break the continuity and consistency of business processes.
Base boundaries are anchored to roles and may reuse controls (3). They may be specialized (3b). Generalization must distinguish between structures and aspects, lest it break the continuity and consistency of sessions.
Requirements are not manna from heaven; they do not come to the world as models. So, what is the starting point, the primary input? According to John, "In the beginning was the Word…", but Gabriel García Márquez counters that at the beginning "The world was so recent that many things lacked names, and in order to indicate them it was necessary to point."
Requirements capture is the first step along project paths, when neither words nor things can be taken for granted: names may not be adequately fixed to denoted objects or phenomena, and those ones being pointed at may still be anonymous, waiting to be named.
Confronted with lumps of words, assertions, and rules, requirements capture may proceed with one of two basic options: organize requirements around already known structuring objects or processes, or listen to user stories and organize requirements alongside them. In both cases the objective is to spin words into identified threads (objects, processes, or stories) and weave them into a fabric with clear and unambiguous motifs.
From Stories to Models
Requirements capture epitomizes the transition from spoken to written languages, as its objective is to write down user expectations using modeling languages. Just like languages in general, such transitions can be achieved through either alphabetic or logographic writing systems, the former mapping sounds (phonemes) to signs (glyphs), the latter setting out from words and mapping them to symbols associated with archetypal meanings; and that is precisely what models are supposed to do.
As demonstrated by Kanji, logographic writing systems can support different spoken languages provided they share some cultural background. That is more or less what is at stake with requirements capture: tapping requirements from various specific domains and transforming them into functional requirements describing how systems are expected to support business processes. System functionalities being a well-circumscribed and homogeneous background, a modeling framework supporting requirements capture shouldn't be out of reach.
Getting the right stories
If requirements are meant to express actual business concerns grounded in the here and now of operations, trying to apprehend them directly as "conceptual" models would negate the rationale supporting requirements capture. User stories and use cases help to prevent such missteps by rooting requirements in concrete business backgrounds of shared references and meanings.
Yet, since the aim of requirements is to define how system functionalities will support business processes, it would help to get the stories and cases right upfront, in other words to organize them according to patterns of functionalities. Taking a cue from the Gang of Four, three basic categories should be considered:
Creational cases or stories deal with the structure and semantics of business objects whose integrity and consistency have to be persistently maintained independently of the activities using them. They will govern objects' life-cycles (create and delete operations) and identification mechanisms (fetch operations).
Structural cases or stories deal with the structure and semantics of transient objects whose integrity and consistency have to be maintained while in use by activities. They will govern features (read and update operations) and target aspects and activities rooted in (aka identified through) primary objects or processes.
Behavioral cases or stories deal with the ways objects are processed.
Not by chance, those categories are consistent with the objects/aspects perspective, which distinguishes between identities and objects' life-cycles on one hand, features and facets on the other hand. They are also congruent with the persistent (non-transactional)/transient (transactional) distinction, and may be mapped to CRUD matrices.
Since cases and stories will often combine two or three basic categories, they should be structured accordingly and reorganized so as to coincide with the responsibilities on domains and projects defined by stakeholders.
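A minimal Java sketch of one possible mapping from the three categories to the operations they govern, consistent with a CRUD reading; names and groupings are assumptions:

```java
import java.util.EnumSet;

enum Operation { CREATE, READ, UPDATE, DELETE, FETCH }

enum StoryCategory {
    CREATIONAL(EnumSet.of(Operation.CREATE, Operation.DELETE, Operation.FETCH)),
    STRUCTURAL(EnumSet.of(Operation.READ, Operation.UPDATE)),
    BEHAVIORAL(EnumSet.noneOf(Operation.class)); // processing, not data access

    final EnumSet<Operation> governs;
    StoryCategory(EnumSet<Operation> governs) { this.governs = governs; }
}
```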
Other than requirements templates, user stories and use cases are two of the preferred methods for capturing requirements. Both put the focus on user experience and informal descriptions, with use cases focusing at once on interactions between agents and systems, and user stories introducing them along the course of refinements. That makes them complementary:
Use cases should be the method of choice when new functionalities are to be added to existing systems.
User stories would be more suited to standalone applications, but may also be helpful to single out use cases' success scenarios.
Depending on circumstances it may be easier to begin requirements capture with a success story (green lines) and its variants or with use cases (red characters) with some activities already defined.
Combining user stories and use cases for requirements capture may also put the focus on the system's footprint, setting apart the activities to be supported by the system under consideration. From a broader perspective, that may help to position requirements along architecture layers: user stories arise from business processes set within enterprise architecture, use cases are supported by functional architecture.
Spinning the Stories
Given that the aim of requirements is to define how systems will support processes execution and objects persistency, a sound policy is to characterize the anchors meant to be targeted by requirements' nouns and verbs. That may be achieved with basic parsing procedures (see the sketch after this list):
Nouns and verbs are set apart and associated with candidate archetypes: physical or symbolic objects, physical or symbolic activities, corresponding containers, events, or roles.
Among them, business concerns should point to managed individuals, i.e. those anchors whose instances must be consistently identified by business processes.
Finally, business rules will be used to define features whose values are to be managed at instance level.
Parsing nondescript requirements for anchors will set apart a distinctive backbone of clear and straight threads on one hand, a remainder of rough and tousled features and rules on the other hand.
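A minimal Java sketch of the procedure, assuming the candidate mapping is supplied by hand (a real implementation would rely on part-of-speech tagging); all names are illustrative:

```java
import java.util.List;
import java.util.Map;

enum Archetype { PHYSICAL_OBJECT, SYMBOLIC_OBJECT, PHYSICAL_ACTIVITY,
                 SYMBOLIC_ACTIVITY, CONTAINER, EVENT, ROLE }

record Anchor(String word, Archetype archetype, boolean managed) { }

class RequirementsParser {
    // Nouns and verbs mapped to candidate archetypes, then flagged as managed
    // individuals when business processes must identify their instances.
    static List<Anchor> anchors(Map<String, Archetype> candidates,
                                java.util.Set<String> managedIndividuals) {
        return candidates.entrySet().stream()
                .map(e -> new Anchor(e.getKey(), e.getValue(),
                                     managedIndividuals.contains(e.getKey())))
                .toList();
    }
}
```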
Fleshing the Stories out
Archetypes are like clichés: they may support a story but cannot make it. So it goes with requirements, whose meaning is to be found in the intricacy of features and business rules.
However tangled and poorly formulated, rules provide the substance of requirements as they express the primary constraints, needs and purposes. That jumble can usually be reshaped in different ways depending on perspective (business or functional requirements), timing constraints (synchronous or asynchronous) or architectural contexts; as a corollary, the way rules are expressed will have a significant impact on the functional architecture of the system under consideration.
If transparency and traceability of functional arbitrages are to be supported, the configuration of rules has to be rationalized from requirements inception. Just like figures of speech help oral storytelling, rule archetypes may help to sort out syntax from semantics, the former tied to rules themselves, the latter derived from their targets. For instance, constraints on occurrences (#), collections (*), or partitions (2) should be expressed uniformly whatever their target: objects, activities, roles, or events.
As a consequence, and to all intents and purposes, rules analysis should not only govern requirements capture; it should also shadow iterations of requirements analysis, each cycle circumscribed by the consolidation of anchors (a descriptor sketch follows the list):
Single responsibility for rule implementation: project, architecture or services, users.
Category: whether a rule is about life-cycle, structure, or behavior.
Scope: whether enforcement is transient or persistent.
Coupling: rules triggered by, or bringing change to, contexts must be set apart.
Control: whether enforcement has to be monitored in real-time.
Power-types and extension points: all variants should be explicitly associated with a classification or a branching rule.
Subsidiarity: rules ought to be handled at the lowest level possible: system, domain, collection, component, feature.
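Those dimensions can be gathered into a rule descriptor; the Java sketch below is an assumption-laden illustration, not a normative schema:

```java
// One possible descriptor covering the dimensions listed above; all names
// and enum values are assumptions for illustration.
record RuleDescriptor(
        String name,
        String responsibility,   // project, architecture or services, users
        Category category,       // life-cycle, structure, or behavior
        Scope scope,             // transient or persistent enforcement
        boolean contextCoupled,  // triggered by, or bringing change to, contexts
        boolean realTime,        // enforcement monitored in real-time
        Level subsidiarity       // lowest possible handling level
) {
    enum Category { LIFE_CYCLE, STRUCTURE, BEHAVIOR }
    enum Scope { TRANSIENT, PERSISTENT }
    enum Level { SYSTEM, DOMAIN, COLLECTION, COMPONENT, FEATURE }
}
```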
Pricing the Stories
One of the primary objectives of requirements is to put a price on the system under consideration and to assess its return on investment (ROI). If that is straightforward for hardware and off-the-shelf components, things are not so easy for software developments, whose metrics are often either pragmatic but specific, or inclusive but unreliable.
Putting aside approaches based on program size, both irrelevant for requirements assessment and discredited as development metrics, requirements can be assessed using story or function points:
Story points conduct pragmatic assessments of self-contained stories. They are mostly used locally by project teams to estimate their tasks and effort.
Functional metrics are more inclusive, as they are based on a principled assessment of archetypal system functionalities. Yet they are mostly confined to large organizations, and their effectiveness and reliability highly depend on expertise.
Whereas both approaches start with user expectations regarding system support, their rationale is different: function points (FPs) apply to use cases and take into account the functionalities supported by the system; story points (SPs) apply to user stories and their scope is by definition circumscribed. That difference may be critical when categories are considered: points for behavioral, structural and creational stories should be weighted differently.
Yet, when requirements capture is supported by both stories and use cases, story and function points can be combined to obtain functional size measurements:
Story points are used to assess business contents (aka application domain) based on master data (aka persistent) entities, activities, and their respective power-types.
Use case points target the part played by the system, based on roles and coupling constraints defined by active objects, events, and controlling processes.
Unadjusted function points can then be computed by weighting use case function points with the application domain function points corresponding to each use case's footprint.
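By way of illustration, one possible reading of that combination in Java; the weights and the footprint mapping are hypothetical, not a standard:

```java
import java.util.Map;

class FunctionalSize {
    // Unadjusted FPs: each use case's points weighted by the application
    // domain points its footprint covers (one possible interpretation).
    static double unadjustedFP(Map<String, Double> useCasePoints,
                               Map<String, Double> footprintDomainPoints) {
        double total = 0;
        for (var e : useCasePoints.entrySet()) {
            double domainFP = footprintDomainPoints.getOrDefault(e.getKey(), 1.0);
            total += e.getValue() * domainFP;
        }
        return total;
    }
}
```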
“The only reason for time is so that everything doesn’t happen at once.”
Time and Events
Time is what happens between events. Without events there would be no time, and each event introduces its own time-scale.
Time is what persists of events (S.Dali)
From the (limited) perspective of system requirements, time is the scale used to position business processes, i.e. the execution of business activities. And since systems are meant to support business processes, time must be set by business events before being measured by shared devices.
As a counterintuitive corollary, time only runs with business: if nothing relevant is meant to happen between two events, there is no need for time and both events can be coalesced into a single one. Conversely, if nothing is allowed to happen between two events, the system will have to deal with them in real-time.
Like events, time is necessarily local, since physical devices must sit on location, possibly mobile ones; once set by some triggering event, time can only be measured by a single clock, and therefore cannot be shared across distributed systems.
Time is local but one can take it along
Moreover, along with its measuring device, time is also an artifact built to meet specific concerns. That can be very useful if activities are to be managed in isolation. Like different objects managed within different address spaces, different activities could be time-stamped and controlled by different time scales.
Time scales can be designed on purpose
Being set by independent clocks, each built to answer its own concerns, the symbolic representations of activities can then be fenced off so that their execution appears impervious to external events. Yet, if and when architectural constraints are to be properly identified, execution units will have to be timed along some shared time scale. More generally, distributed activities, being executed along different time scales, will have to be synchronized through a logical clock using its own time scale. Another solution is to synchronize physical clocks using a standard protocol like the Network Time Protocol (NTP).
Distributed activities have no time of their own
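The logical-clock option mentioned above can be illustrated with a minimal Lamport clock in Java, a standard mechanism for ordering events across independent time scales:

```java
import java.util.concurrent.atomic.AtomicLong;

class LamportClock {
    private final AtomicLong time = new AtomicLong();

    long tick() {                            // local event: advance the local scale
        return time.incrementAndGet();
    }

    long onReceive(long remoteTime) {        // message from another time scale:
        // merge the scales by taking the maximum, then advance.
        return time.updateAndGet(local -> Math.max(local, remoteTime) + 1);
    }
}
```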
Along that reasoning, real-time activities can only be supported locally: if events must occur simultaneously within and without the system, i.e. nothing is supposed to happen between changes in actual objects and their symbolic representation, that can only be achieved under the control of a single system.