Repeated announcements of a looming software apocalypse may take some edge off vigilance, but repeated systems failures should be taken seriously, if only because they appear to be rooted in a wide array of causes, from wrongly valued single parameters (e.g., the 911 threshold or Apple’s free pass for “root” users) to architecture obsolescence (e.g., reservation systems).
Yet, if alarms are not to be ignored, diagnoses should go beyond symptoms and remedies beyond sticking plasters: contrary to what is suggested by The Atlantic’s article, systems are much more than piles of code, and programming is probably where quality has been taken the most seriously.
Programs vs Systems
Whatever programmers’ creativity and expertise, they cannot tackle complexity across space, time, and languages: today’s systems are made of distributed interacting components, specified and coded in different languages, and deployed and modified across overlapping time-frames. Even setting aside continuous improvements in code quality, apocalypse looms not in the particulars of code but in the ways code makes its way into the world.
Solutions should therefore be looked for at system level, a conclusion only bolstered by the ubiquity of digitized business flows.
Systems are the New Babel
As illustrated by the windfalls benefiting COBOL old-timers, language is arguably a critical factor, for the maintenance of legacy programs as well as for communication between stakeholders, users, and engineers.
So if problems can be traced back to languages, that is where solutions are to be found: from programming languages (for code) to natural ones (for systems requirements), everything can be specified as symbolic representations, i.e., models.
Model in the Loop
Models are generally understood as abstractions, and often avoided for that very reason. That shortsighted mind-set is belied by concrete uses of abstractions, as illustrated by the automotive industry and the way it embeds models in engineering processes.
In summary, the automotive industry’s Model-in-the-Loop (MiL) approach can be explained through three basic ideas:
- Systems are to be understood as the combination of physical and software artifacts.
- Insofar as both can be digitized, they can be uniformly described as models.
- As a consequence, analysis, design, and engineering can be carried out by iteratively building, simulating, testing, and adjusting various combinations of hardware and software.
By bringing together physical components and code into a seamless digitized whole, MiL narrows the conceptual gap between actual elements and their symbolic representations, aka models. And that leap could be generalized to a much wider range of systems.
Models are the New Code
Programming habits and the constraints imposed by the maintenance of legacy systems have perpetuated the traditional understanding of systems as a building-up of programs; hence the focus put on the quality of code. But when large, distributed, and perennial systems are concerned, that bottom-up mind-set falls short and brings about:
- An exponential increase of complexity at system level.
- Opacity and broken traceability at application level between current use and legacy code.
Both flaws could be corrected by combining top-down modeling with bottom-up engineering, through iterative processes carried out from both directions.
Model in the Loop meets Enterprise Architecture
From a formal perspective, models are of two sorts: extensional ones operate bottom-up and associate sets of individuals with categories; intensional ones operate top-down and specify the features meant to be shared by all instances of a type. Based on that understanding, the former can be used to simulate the behavior of targeted individuals depending on categories, and the latter to prescribe how to create instances of types meant to implement categories.
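The distinction can be sketched in code (a minimal illustration with hypothetical names, not drawn from any particular tool): an extensional model defines a category by enumerating its members, while an intensional model defines a type by the features every instance must exhibit.

```python
from dataclasses import dataclass

# Extensional model: a category defined bottom-up, by enumerating
# the individuals that belong to it (hypothetical example).
premium_customers = {"alice", "bob"}

def is_premium(customer_id: str) -> bool:
    """Membership is decided by lookup, not by shared features."""
    return customer_id in premium_customers

# Intensional model: a type defined top-down, by the features
# shared by all of its instances (hypothetical example).
@dataclass
class PremiumCustomer:
    customer_id: str
    yearly_spend: float

    def __post_init__(self):
        # The defining feature is prescribed, not enumerated:
        # no instance can exist without satisfying it.
        if self.yearly_spend < 1000:
            raise ValueError("premium customers must spend at least 1000/year")
```

The extensional set can only simulate or classify the individuals it already lists; the intensional type prescribes how any future instance is to be created.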
As it happens, Model-in-the-Loop combines the two schemes at component level:
- Any combination of manual and automated solutions can be used as a starting point for analysis and simulation (a).
- Given the outcomes of simulation and tests, the architecture is revisited (b) and corresponding artifacts (software and hardware) are designed (c).
- The new combination of artifacts is developed and integrated, ready for further analysis and simulation (d).
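The steps (a)–(d) above amount to an iterative loop. The sketch below only illustrates that control flow; the function names are hypothetical stand-ins for the actual engineering activities, and the defect counter is a toy proxy for simulation outcomes.

```python
# Hypothetical stand-ins for MiL engineering activities;
# only the loop structure mirrors steps (a)-(d) above.

def simulate(solution):
    # (a) analyze and simulate the current mix of manual/automated artifacts
    return {"defects": solution["defects"]}

def revise_architecture(solution, outcomes):
    # (b) revisit the architecture given simulation and test outcomes
    return {"defects": max(0, outcomes["defects"] - 1)}

def design_artifacts(architecture):
    # (c) design the corresponding software and hardware artifacts
    return dict(architecture)

def integrate(artifacts):
    # (d) develop and integrate, ready for the next simulation round
    return dict(artifacts)

def mil_loop(solution, max_iterations=10):
    """Iterate (a)-(d) until simulation reports no defects."""
    for _ in range(max_iterations):
        outcomes = simulate(solution)                           # (a)
        if outcomes["defects"] == 0:
            break
        architecture = revise_architecture(solution, outcomes)  # (b)
        artifacts = design_artifacts(architecture)              # (c)
        solution = integrate(artifacts)                         # (d)
    return solution

# Starting point: any combination of manual and automated solutions.
final = mil_loop({"defects": 3})
```

The point of the sketch is that each pass closes the loop between symbolic representations (the architecture) and actual artifacts (the integrated solution), which is what makes the approach iterative rather than linear.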
Assuming that MiL’s bottom-up approach could be aligned with top-down systems engineering processes, it would enable a seamless and continuous integration of changes in software components and systems architectures.
Further Readings
- Digital Hybrids
- The Book of Fallacies
- Models as Parachutes
- Views, Models, & Architectures
- EA: Work Units & Workflows
- Legacy Refactoring
- Modernization & The Archaeology of Software
External References
- “The Coming Software Apocalypse”, James Somers, The Atlantic, Sept. 2017
- “Model-based Testing of Automotive Systems”, Eckard Bringmann, Andreas Krämer