Book Pick: Information & Entropy

Entropy is the part of relevant data not mapped by the representation.
Excerpt from Enterprise Architecture Fundamentals:


Entropy in systems can be understood in terms of disorder and randomness; for enterprise systems, it means the misalignment of observed data with the expected values, given the categories managed by the enterprise (e.g., are VIP customers meeting sales managers’ expectations?). That understanding of entropy can be neatly aligned with the symbolic architecture of the Pagoda blueprint (cf. chapter 9): data for disorder (environment), information for categories (systems), and knowledge for their managed alignment (enterprise).

It is thus possible to more precisely assess enterprise architecture capacity in terms of entropy as the quantum of data that can be accounted for by enterprises’ symbolic representations (figure 15-4):

Figure 15-4. Information Systems & Entropy

The vertical axis represents the quantum of information supposedly contained in a given set of targeted items (e.g., demographic data from a population sample). The horizontal axis represents the size and complexity of the categories supporting the representations (e.g., descriptive models), from an all-inclusive classification made up of a single category (x0) to a bijective mapping of each observation (or fact) to a specific category (x2). Given an overall quantum of hypothetical information (grey), the function y represents the part of that quantum (blue) that can be accounted for, depending on the complexity of the categories used by the representations. This complexity ranges between the equally pointless options of full abstraction (everything in one concept) and full detail (a concept for everything).

Entropy is thus implicitly defined as the difference between the part accounted for and the overall quantum of information, such that entropy is at the hypothetical minimum when the part accounted for is at its maximum (x1, y1).
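The excerpt gives no formula for the curve in figure 15-4, so the relationship can only be sketched under an assumption. The snippet below assumes a simple concave shape for y (zero at both pointless extremes, full abstraction and full detail) purely to illustrate how entropy, as the unaccounted-for remainder, reaches its minimum at the complexity x1 where y peaks; the quadratic form is a placeholder, not the book's model.

```python
import numpy as np

# Illustrative sketch of figure 15-4 (assumed shape, not from the book).
# x = category complexity, from x0 = 0 (one all-inclusive category)
# to x2 = 1 (a bijective mapping of each fact to its own category).
total = 1.0                      # overall quantum of information (grey area)
x = np.linspace(0.0, 1.0, 101)
y = total * 4 * x * (1 - x)      # assumed concave curve: 0 at both extremes

entropy = total - y              # part of the quantum not accounted for
i = np.argmax(y)                 # complexity where y peaks, entropy minimal
x1, y1 = x[i], y[i]
print(f"x1 = {x1}, y1 = {y1}, min entropy = {entropy[i]}")
```

Under this assumed symmetric curve the optimum falls at the midpoint; with any other concave shape the peak would shift, but the relation entropy = total − y, minimized at (x1, y1), holds by construction.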


(From Chapter 15)