Self-driving Cars & Turing’s Imitation Game

Self-driving vehicles should behave like human drivers; here is how to teach them to do so.


Preamble

The eventuality of sharing roads with self-driven vehicles raises critical technical, social, and ethical issues. Yet a dual perspective (us against them) may overlook the question of drivers’ communication (and therefore behavior) because:

  • Contrary to smart cars, human drivers don’t use algorithms.
  • Contrary to humans, smart cars have no ethics of their own.

If roads are to become safer when shared between human and self-driven vehicles, enhancing their collaboration should be a primary concern.

Driving Is A Social Behavior

The safety of roads has more to do with social behaviors than with human driving skills, as it depends on drivers’ ability (a) to comply with clearly defined rules and (b) to communicate if and when rules fall short of urgent and exceptional circumstances. Given that self-driving vehicles will have no difficulty complying with rules, the challenge is their ability to communicate with other drivers, especially human ones.

What Humans Expect From Other Drivers

Social behavior of human drivers is basically governed by clarity of intent and self-preservation:

  1. Clarity of intent: every driver expects from all protagonists a basic knowledge of the rules, and the intent to follow the relevant ones depending on circumstances.
  2. Self-preservation: every driver implicitly assumes that all protagonists will try to preserve their physical integrity.

As it happens, these assumptions and expectations may be questioned by self-driving cars:

  1. Human drivers wouldn’t expect other drivers to be too smart with their interpretation of the rules.
  2. Machines have no particular concern for their own physical integrity.

Mixing human and self-driven cars may consequently induce misunderstandings that could affect the reliability of communications, and so the safety of the roads.

Why Self-driving Cars Have To Behave Like Human Drivers

As mentioned above, driving is a social behavior whose safety depends on communication. But contrary to symbolic and explicit driving regulations, communication between drivers is implicit by necessity; when it is needed at all, it is needed urgently, because rules are falling short of circumstances: communication has to be instant.

So, since there is no time for interpretation or reasoning about rules, or for the assessment of protagonists’ abilities, communication between drivers must be implicit and immediate. That can only be achieved if all drivers behave like human ones.

Turing’s Imitation Game Revisited

Alan Turing designed his Imitation Game as a way to distinguish between human and artificial intelligence. For that purpose a judge was to interact via computer screen and keyboard with two anonymous “agents”, one human and one artificial, and to decide which was what.

Extending the principle to drivers’ behaviors, cars would be put on the roads of a controlled environment, some driven by humans, others self-driven. Behaviors in routine and exceptional circumstances would be recorded and analyzed for drivers and protagonists.

Control environments should also be run, one for human-only drivers, and one with drivers unaware of the presence of self-driving vehicles.

Drivers’ behaviors would then be assessed according to the nature of protagonists:


  • H / H: Should be the reference model for all driving behaviors.
  • H / M: Human drivers should make no difference when encountering self-driving vehicles.
  • M / H: Self-driving vehicles encountering human drivers should behave like good human drivers.
  • Ma / Mx: Self-driving vehicles encountering self-driving protagonists from another family and recognizing them as such could change their driving behavior, provided no human protagonists are involved.
  • Ma / Ma: Self-driving vehicles encountering self-driving protagonists and recognizing them as family-related could activate collaboration mechanisms, provided no other protagonists are involved.

Such a scheme could provide the basis of a driving licence equivalent for self-driving vehicles.
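
By way of illustration only, the taxonomy could be rendered as a behavior-dispatch rule for a self-driving vehicle; the minimal Python sketch below uses hypothetical type and mode names, assuming the vehicle can tell whether its protagonist is human, an unrelated machine, or family-related:

```python
from enum import Enum

class Protagonist(Enum):
    HUMAN = "H"           # human driver
    MACHINE_OTHER = "Mx"  # self-driving vehicle from another family
    MACHINE_KIN = "Ma"    # self-driving vehicle from the same family

def driving_mode(protagonist: Protagonist, humans_nearby: bool) -> str:
    """Behavior of a self-driving vehicle (Ma) according to the protagonist.

    Human-like behavior is the reference; machine-specific modes are only
    allowed when no human protagonist is involved.
    """
    if protagonist is Protagonist.HUMAN or humans_nearby:
        return "imitate a good human driver"          # M/H and mixed traffic
    if protagonist is Protagonist.MACHINE_KIN:
        return "family collaboration mechanisms"      # Ma/Ma
    return "machine-to-machine adjusted behavior"     # Ma/Mx

print(driving_mode(Protagonist.MACHINE_KIN, humans_nearby=False))
```

The H/H and H/M cases need no code of their own: human drivers keep the reference behavior regardless of whom they encounter.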

Self-driving Vehicles & Self-improving Safety

If self-driving vehicles have to behave like humans and emulate their immediate reactions, they may prove exceptionally good at it because imitation is what machines do best.

When fed with data about human drivers’ behaviors, deep-learning algorithms can extract implicit knowledge and use it to mimic human behaviors; and with massive enough data inputs, such algorithms can be honed to statistically perfect similitude.
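
A minimal behavioral-cloning sketch of that idea, assuming recorded driving logs; the file names and feature layout below are purely illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical driving logs: each row describes a traffic situation
# (speed, gap to the next vehicle, signalled intents, ...), each target
# the human driver's recorded reaction (steering, acceleration).
X = np.load("situations.npy")   # shape: (n_samples, n_features) -- assumed file
y = np.load("reactions.npy")    # shape: (n_samples, 2)          -- assumed file

# A small neural network learns the implicit mapping from situations to
# reactions; with enough data, the imitation gets statistically close.
imitator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
imitator.fit(X, y)

def react(situation: np.ndarray) -> np.ndarray:
    """Mimic the recorded human reaction for a new situation."""
    return imitator.predict(situation.reshape(1, -1))[0]
```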

That could set the basis of a feedback loop:

  1. A limited number of self-driving vehicles (properly fed with data) are set to learn from communicating with human drivers.
  2. As self-driving vehicles become better at the imitation game their number can be progressively increased.
  3. Human behaviors improve under the influence of the growing number of self-driving vehicles, which adjust their own behavior in return.

That would create a virtuous feedback loop for road safety.

PS: Contrary Evidence

A trial in Texas demonstrates a contrario the potency of the argument by adopting the alternative policy, making clear that self-driving vehicles are indeed machines. Apart from being introduced as public transport within a defined area and with designated stops, two schemes are used to inhibit the imitation game:

  • Appearance: design and color enable immediate recognition.
  • Communication: external screens are used for textual (as opposed to visual) notification to pedestrians and other vehicles.

The future will tell whether that policy is just a tactical step or a more significant shift towards a functional distinction between artificial brains and human ones.

A Brief Ontology Of Time

“Clocks slay time… time is dead as long as it is being clicked off by little wheels; only when the clock stops does time come to life.”

William Faulkner

Preamble

The melting of digital fences between enterprises and their business environments is shedding new light on the way time has to be taken into account.

Time is what happens between events (Josef Koudelka)

The shift can be illustrated by the EU GDPR: by introducing legal constraints on the notifications of changes in personal data, regulators put systems’ internal events on the same standing as external ones and make all time-scales equal whatever their nature.

Ontological Limit of the W3C Time Recommendation

The W3C recommendation for OWL time description is built on the well accepted understanding of temporal entity, duration, and position:


While there isn’t much to argue with in what is suggested, the puzzle comes from what is missing, namely the modalities of time: the recommendation makes use of calendars and time-stamps but ignores what lies behind them, i.e. time’s ontological dimensions.

Out of the Box

As already expounded (Ontologies & Enterprise Architecture) ontologies are at their best when a distinction can be maintained between representation and semantics. That point can be illustrated here by adding an ontological dimension to the W3C description of time:

  1. Ontological modalities are introduced by identifying (#) temporal positions with regard to a time-frame.
  2. Time-frames are open-ended temporal entities identified (#) by events.
How to add ontological modalities to time

It must be noted that initial truth-preserving properties still apply across ontological modalities.
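
As a minimal sketch of that distinction, the snippet below uses rdflib with the W3C OWL-Time vocabulary; the ex: namespace and the identifiedBy / withinFrame properties are purely illustrative stand-ins for the ontological identification (#) of time-frames by events:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

TIME = Namespace("http://www.w3.org/2006/time#")   # W3C OWL-Time vocabulary
EX = Namespace("http://example.org/onto#")         # illustrative namespace

g = Graph()
g.bind("time", TIME)
g.bind("ex", EX)

# An open-ended time-frame identified (#) by the event that opens it,
# rather than by a calendar position.
g.add((EX.orderReceived, RDF.type, EX.Event))
g.add((EX.orderTimeFrame, RDF.type, TIME.Interval))
g.add((EX.orderTimeFrame, EX.identifiedBy, EX.orderReceived))   # ontological modality

# Temporal positions are then identified with regard to that time-frame,
# time-stamps remaining available as plain representations.
g.add((EX.shipment, RDF.type, TIME.Instant))
g.add((EX.shipment, EX.withinFrame, EX.orderTimeFrame))
g.add((EX.shipment, TIME.inXSDDateTime,
       Literal("2018-05-25T10:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```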

Conclusion: OWL Descriptions Should Not Be Confused With Ontologies

Languages are meant to combine two primary purposes, communication and symbolic representation, some (e.g. natural, programming) being focused on the former, others (e.g. formal, domain-specific) on the latter.

The distinction is somewhat blurred with languages like OWL (Web Ontology Language) due to the versatility and plasticity of semantic networks.

Ontologies and profiles are meant to target domains; profiles and domains are modeled with languages, OWL included.

That apparent proficiency may induce some confusion between languages and ontologies, the former dealing with the encoding of time representations, the latter with time modalities.

Things Speaking in Tongues

Preamble

Speaking in tongues (aka glossolalia) is the fluid vocalizing of speech-like syllables without any recognizable association with a known language. Such an experience is best (or not ?) understood as the actual speaking of a gutted language, with grammatical ghosts inhabited by meaningless signals.

Silent Sounds (Herbert List)

Usually set in religious contexts or circumstances, speaking in tongues looks like souls having their own private conversations. Yet, contrary to extraterrestrial languages, the phenomenon is not fictional and could therefore point to offbeat clues for natural language technology.

Computers & Language Technology

From its inception, computer technology has been a matter of language, from machine code to domain-specific languages. As a corollary, the need to be on speaking terms with machines (dumb or smart) has put a new light on interpreters (parsers in computer parlance) and opened new perspectives for linguistic studies. In return, computers have greatly improved the means to experiment with and implement new approaches.

In recent years, advances in artificial intelligence (AI) have brought language technologies to a critical juncture between speech recognition and meaningful conversation, the former leaping ahead with deep learning and signal processing, the latter limping along with the semantics of domain-specific languages.

Interestingly, that juncture neatly coincides with the one between the two intrinsic functions of natural languages: communication and representation.

Rules Engines & Neural Networks

As exemplified by language technologies, one of the main developments of deep learning has been to bring rules engines and neural networks under a common functional roof, turning the former into smart conceptual tutors for the latter’s otherwise unfathomable schemes.

In contrast to their long and successful track record with computer languages, rule-based approaches have fallen short in human conversations. And while these failings have hindered progress in the semantic dimension of natural language technologies, speech recognition has forged ahead on the back of neural networks fed with massive data from social networks and fueled by increasing computing power. But the rift between processing and understanding natural languages is now being bridged by deep learning technologies. And with the leverage of rules engines harnessing neural networks, processing and understanding can be carried out within a single feedback loop.

From Communication to Cognition

From a functional point of view, natural languages can be likened to money: first as a medium of exchange, then as a unit of account, finally as a store of value. Along that understanding, natural languages would be used respectively for communication, information processing, and knowledge representation. And like the economics of money, these capabilities are to be associated with phased cognitive developments:

  • Communication: languages are used to trade transient signals; their processing depends on the temporal persistence of the perceived context and phenomena; associated behaviors are immediate (here-and-now).
  • Information: languages are also used to map context and phenomena to some mental representations; they can therefore be applied to scripted behaviors and even policies.
  • Knowledge: languages are used to map contexts, phenomena, and policies to categories and concepts to be stored as symbolic representations fully detached from the original circumstances; these surrogates can then be used, assessed, and improved on their own.

As it happens, advances in technologies seem to follow these cognitive distinctions, with the internet of things (IoT) for data communication, neural networks for data mining and information processing, and the addition of rules engines for knowledge representation. Yet paces differ significantly: with regard to language processing (communication and information), deep learning is bringing the achievements of natural language technologies beyond 90% accuracy; but when language understanding has to take knowledge into account, performances still lag a third below: for computers’ knowledge to be properly scaled, it has to be confined within the semantics of specific domains.

Sound vs Speech

Humans listening to the Universe are confronted with a question that can be unfolded in two ways:

  • Is there someone speaking, and if so, what is the language ?
  • Is that speech, and if so, who is speaking ?

In both cases intentionality is at the nexus, but whereas the first approach has to tackle some existential questioning upfront, the second can put philosophy on the back-burner and focus on technological issues. Nonetheless, even the language-first approach has been challenging, as illustrated by the difference in achievements between processing and understanding language technologies.

Recognizing a language has long been the job of parsers looking for the corresponding syntactic structures, the hitch being that a parser has to know beforehand what it is looking for. Parsers of parsers using meta-languages have been effective with programming languages but are quite useless with natural ones, lacking universal grammar rules to sort out Babel’s conversations. But the “burden of proof” can now be reversed: compared to rules engines, neural networks with deep learning capabilities don’t have to start with any knowledge. As illustrated by Google’s Multilingual Neural Machine Translation System, such systems can now build multilingual proficiency from sufficiently large samples of conversations without prior grammatical knowledge.
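
By way of illustration (and not Google’s actual system), the sketch below shows how language regularities can be learned from raw samples without any prior grammar, here with character n-grams on a toy corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus of (utterance, language) pairs; a real system
# would be fed with massive conversation samples.
samples = [
    ("where is the railway station", "en"),
    ("the weather is nice today", "en"),
    ("où se trouve la gare", "fr"),
    ("il fait beau aujourd'hui", "fr"),
    ("wo ist der bahnhof", "de"),
    ("das wetter ist heute schön", "de"),
]
texts, labels = zip(*samples)

# Character n-grams capture regularities of a language without any
# explicit grammatical knowledge being provided beforehand.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["wo ist die gare"]))  # best guess from learned patterns
```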

To conclude, “Translation System” may even be selling such systems short, as it implies language-to-language mappings, when in principle they could be fed with raw sounds and sort the wheat of meaning from the chaff of noise. And, who knows, eventually be able to decrypt the speaking of tongues.

Things Behavior & Social Responsibility

Contrary to security breaches and information theft, which can be kept from public eyes, crashes of business applications or internet access are painfully plain for whoever is concerned, which means everybody. And as illustrated by the latest episode of massive distributed denial of service (DDoS), they often come as confirmation of hazards long calling for attention.

Device & Social Identity (Wayne Miller)

Things Don’t Think

To be clear, orchestrated attacks through hijacked (if unwitting) computers have been a primary concern for internet security firms for quite some time, bringing about comprehensive and continuous reinforcement of software shields, consolidated by systematic updates.

But while the right governing hand was struggling to make a safer net, the other hand thoughtlessly brought connected objects into a supposedly new brand of internet. As if adding things with software brains cut to the bone could have made networks smarter.

And that’s the catch, because the internet of things (IoT) is all about making room for dumb ancillary objects; unfortunately, idiots may have their uses for literal puppeteers with canny agendas.

Think Again, or Not …

For old-timers with some memory of fingering through library card catalogs, googling topics may have looked like a dream come true: knowledge at one’s fingertips, immediately and comprehensively. But that vision has never been more than a fleeting glimpse in a symbolic world; in actuality, even at its semantic best, the web was to remain a trove of information to be sifted by knowledge workers safely seated in their gated symbolic world. Crooks of course could sneak in as knowledge workers, armed with fountain pens, but without guns covered by the second amendment.

So, from its inception, the IoT has been a paradoxical endeavor: trying to merge actual and symbolic realms in a way that would bypass thinking processes and obliterate any distinction between them. For sure, that conundrum was supposed to be dealt with by artificial intelligence (AI), with neural networks and deep learning weaving semantic threads between human minds and network brains.

Not surprisingly, brainy hackers have caught sight of that new wealth of chinks in internet armour and swiftly added brute force to their paraphernalia.

But beyond the technical aspect of internet security, the recent Dyn DDoS attack puts the spotlight on its social perspective.

Things Behavior & Social Responsibility

As far as it remained intrinsically symbolic, the internet has been able to carry on with its utopian principles despite bumpy business environments. But things have drastically changed the situation, with tectonic frictions between symbolic and real plates wreaking havoc with any kind of smooth transition to internet.X, whatever x may be.

Yet, as the diagnosis is clear, so should be the remedy.

To begin with, the internet was never meant to become the central nervous system of human societies. That it has happened in half a generation has defied imagination and, as a corollary, sapped the validity of traditional paradigms.

As it happens, the epicenter of the collision of paradigms can be clearly identified: whereas the internet is built from systems, architecture taxonomies are purely technical and ignore what should be the primary factor, namely the kind of social role a system could fulfil. That may have been irrelevant for communication networks, but it is obviously critical for social ones.

Brands, Bots, & Storytelling

As illustrated by the recent Mashable “pivot”, meaningful (i.e. unbranded) content appears to be the main casualty of new communication technologies. Hopefully (sic), bots may point to a more positive perspective, at least if their want for no-nonsense gist is to be trusted.

Could bots repair gibberish ? (Latifa Echakhch)

The Mashable Pivot to “branded” Stories

Announcing Mashable’s recent pivot, Pete Cashmore (Mashable’s founder and CEO) was very candid about the motives:

“What our advertisers value most about Mashable is the same thing that our audience values: Our content. The world’s biggest brands come to us to tell stories of digital culture, innovation and technology in an optimistic and entertaining voice. As a result, branded content has become our fastest growing revenue stream over the past year. Content is now at the core of our ad offering and we plan to double down there.”

Also revealing was the semantic shift in a single paragraph: from “stories”, to “stories told with an optimistic and entertaining voice”, and finally to “branded stories”; as if there was some continuity between Homer’s Iliad and Outbrain’s gibberish.

Spinning Yarns

From Lacan to Seinfeld, it has often been said that stories are what prop up our world. But that was before Twitter, Facebook, YouTube and others ruled over the waves and screens. Nowadays, under the combined assaults of smart dummies and instant messaging, stories have been forced to spin advertising schemes, with scripts replaced by subliminal cues entangled in webs of commercial hyperlinks. And yet, somewhat paradoxically, fictions may retrieve some traction (if not spirit) of their own, reprieved not so much by human cultural thirst as by smartphones’ hunger for fresh technological contraptions.

Apps: What You Show is What You Get

As far as users are concerned, apps often make phones too smart by half: with more than 100 billion apps already downloaded, users face an embarrassment of riches compounded by the inherent limitations of packed visual interfaces. Enticed by constantly renewed flows of tokens with perfunctory guidelines, human handlers can hardly separate the wheat from the chaff and have to let their choices be driven by the hypothetical wisdom of the crowd. Whatever the outcomes (crowds may be right but are often volatile), the selection process is both wasteful (choices are ephemeral, many apps are abandoned after a single use, and most are sparely used) and hazardous (too many redundant dead-ends open doors to a wide array of fraudsters). That trend is rapidly hitting the physical as well as business limits of a zero-sum playground: smarter phones appear to make for dumber users. One way out of the corner would be to encourage intelligent behaviors from both parties, humans as well as devices. And that’s something that bots could help to bring about.

Bots: What You Text Is What You Get

As software agents designed to help people find their way online, bots can be differentiated from apps on two main aspects:

  • They reside in the cloud, not on personal devices, which means that updates don’t have to be downloaded on smartphones but can be deployed uniformly and consistently. As a consequence, and contrary to apps, the evolution of bots can be managed independently of users’ whims, fostering the development of stable and reliable communication grammars.
  • They rely on text messaging to communicate with users instead of graphical interfaces and visual symbols. Compared to icons, text puts writing hands on driving wheels, leaving much less room for creative readings; given that bots are not to put up with mumbo jumbo, they will prompt users to mind their words as clearly and efficiently as possible.

Each aspect reinforces the other, making room for a non-zero playground: while the focus on well-formed expressions and unambiguous semantics is bots’ key characteristic, it could not be achieved without the benefits of stable and homogeneous distribution schemes. When both are combined they may reinstate written languages as the backbone of communication frameworks, even if it’s for the benefit of pidgin languages serving prosaic business needs.
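
As a minimal sketch of such a communication grammar (the intents and wording below are purely hypothetical), a bot could map well-formed text messages to intents and slots, and ask users to rephrase anything else:

```python
import re

# A deliberately small, hypothetical command grammar: bots can afford to
# prompt users toward well-formed expressions instead of guessing icons.
INTENTS = {
    "book":   re.compile(r"^book (?P<item>\w+) for (?P<day>\w+)$", re.I),
    "cancel": re.compile(r"^cancel (?P<item>\w+)$", re.I),
    "status": re.compile(r"^status of (?P<item>\w+)$", re.I),
}

def parse(message: str):
    """Return (intent, slots) for a well-formed message, None otherwise."""
    for intent, pattern in INTENTS.items():
        match = pattern.match(message.strip())
        if match:
            return intent, match.groupdict()
    return None  # the bot would then ask the user to rephrase

print(parse("book table for friday"))   # ('book', {'item': 'table', 'day': 'friday'})
print(parse("gibberish goes nowhere"))  # None
```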

A Literary Soup of Business Plots & Customers Narratives

Given their need for concise and unambiguous textual messages, the use of bots could bring back some literary considerations to a latent online wasteland. To be sure, those considerations are to be hard-headed, with scripts cut to the bone, plots driven by business happy ends, and narratives fitted to customers’ phantasms.

Nevertheless, good storytelling will always bring some selective edge to businesses competing for top tiers. So, whatever the dearth of fictional depth, the spreading of bot scripts could make up some kind of primeval soup and stir the emergence of a literature untainted by its fouled nourishing earth.

Selfies & Augmented Identities

As smart devices and dumb things respectively drive and feed internet advances, selfies may be seen as a minor by-product playing out on the stage between reasoning capabilities and the reality of things. But that incidental understanding may have to be upgraded to a more meaningful one as digital hybrids are incorporated into virtual reality.

Actual and Virtual Representations (N. Rockwell)

Portraits, Selfies, & Social Identities

Selfies are a good starting point given that their meteoric and wide-ranging success makes for the social continuity of portraits, from timeless paintings to new-age digital images. Comparing the respective practicalities and contents of traditional and digital pictures may be especially revealing.

With regard to practicalities, selfies bring democratization: contrary to paintings, reserved for social elites, selfies let everybody have as many portraits as they wish, free to be shown at will, to family, close friends, or total strangers.

With regard to contents, selfies bring immediacy: instead of portraits conveying status and characters through calculated personal attires and contrived settings, selfies picture social identities as snapshots that capture supposedly unaffected but revealing moments, postures, entourages, or surroundings.

Those selfies’ idiosyncrasies are intriguing because they seem to largely ignore the wide range of possibilities offered by new media technologies which could (and do sometimes) readily make selfies into elaborate still lives or scenic videos.

Something similar can be said of the fading-out of photography as a vector of social representation after the heights achieved in the second half of the 19th century: not until the internet era did photographs start again to emulate paintings as vehicles of social identity.

Those changing trends may be cleared up if mirrors are introduced in the picture.

Selfies, Mirrors, & Physical Identities

Natural or man-made, mirrors have from the origin played a critical part in self-consciousness, and more precisely in self-awareness of physical identity. Whereas portraits are social, asynchronous, and symbolic representations, mirrors are personal, synchronous, and physical ones; hence their different roles, portraits abetting social identities, and mirrors reflecting physical ones. And selfies may come as the missing link between them.

With smartphones now customarily installed as bodily extensions, selfies may morph into recurring personal reflections, transforming themselves into a crossbreed between portraits, focused on social identification, and mirrors, intent on personal identity. That understanding would put selfies on an elusive swing swaying between social representation and communication on one side, authenticity and introspection on the other side.

On that account advances in technologies, from photographs to 3D movies, would have had a limited impact on the traction from either the social or physical side. But virtual reality (VR) is another matter altogether because it doesn’t only affect the medium between social and physical aspects, but also the “very” reality of the physical side itself.

Virtual Reality: Sense & Sensibility

The raison d’être of virtual reality (VR) is to erase the perception fence between individuals and their physical environment. From that perspective VR contraptions can be seen as deluding mirrors doing for physical identity what selfies do for social ones: teleporting individual personas between environments independently of their respective actuality. The question is: could it be carried out as a whole, teleporting both physical and social identities in a single package ?

Physical identities are built from the perception of actual changes, directly originating in context or prompted by our own behavior: I move of my own volition, therefore I am. Somewhat equivalently, social identities are built on representations cultivated inwardly, or supposedly conveyed by others. Considering that physical identities are continuous and sensible, and social ones discrete and symbolic, it should be possible to combine them into virtual personas that could be teleported around packet-switched networks.

But the apparent symmetry could be deceitful: although teleporting doesn’t change meanings, it breaks the continuity of physical perceptions, which means that it goes with some delete/replace operation. On that account, effective advances of VR can be seen as converging on alternative teleporting pitfalls:

  • Virtual worlds like Second Life rely on symbolic representations whatever the physical proximity.
  • Virtual apparatuses like Oculus depend solely on the merge with physical proximity and ignore symbolic representations.

That conundrum could be solved if sense and sensibility could be reunified, giving some credibility to fused physical and social personas. That could be achieved by augmented reality whose aim is to blend actual perceptions with symbolic representations.

From Augmented Identities to Extended Beliefs

Virtual apparatuses rely on a primal instinct that makes us believe in the reality of our perceptions. Concomitantly, human brains use built-in higher-level representations of the body’s physical capabilities in order to support the veracity of the whole experience. Nonetheless, if and when experiences appear to be far-fetched, brains are bound to flag the stories as delusional.

Or maybe not. Even without artificial adjuncts to brain chemistry, some kind of cognitive morphing may let the mind bypass its body limits by introducing a deceitful continuity between mental representations of physical capabilities on one hand, and purely symbolic representations on the other hand. Technological advances may offer schemes from each side that could trick human beliefs.

Broadly speaking, virtual reality schemes can be characterized as top-down: they start by setting the mind into some imaginary world, and beguile it into the body envelope portraying some favorite avatar. Then, taking advantage of its earned confidence, the mind is to be tricked on a flyover that would move it seamlessly from fictional social representations into equally fictional physical ones: from believing to have superpowers into trusting the actual reach and strength of its hand performing the corresponding deeds. At least that’s the theory, because if such a “suspension of disbelief” is the essence of fiction and art, the practicality of its mundane actualization remains to be confirmed.

Augmented reality goes the other way and can be seen as bottom-up, relying on actual physical experiences before moving up to fictional extensions. Such schemes are meant to be fed with trusted actual perceptions adorned with additional inputs, visual or otherwise, designed on purpose. Instead of straight leaps into fiction, beliefs can be kept anchored to real situations from where they can be seamlessly led astray to unfolding wonder worlds, or alternatively ushered to further knowledge.

By introducing both continuity and new dimensions to the design of physical and social identities, augmented reality could leverage selfies into major social constructs. Combined with artificial intelligence, they could even become friendly or helpful avatars, e.g. as personal coaches or medical surrogates.

IoT & Real Time Activities

The world is the totality of facts, not of things.

Ludwig Wittgenstein

As the so-called internet of things (IoT) seems to bring together people, systems and devices, the meaning of real-time activities may have to be reconsidered.

Fact and Broadcast (Lucy Nicholson)

Things, Facts, Events

To begin with, as illustrated by marketed solutions like SIGFOX, the IoT can be described as a fast and stripped-down communication layer carrying not so much things as facts and associated raw (i.e. non-symbolic) events. That seems to cut across traditional understandings because the IoT is dedicated to non-symbolic devices yet may include symbolic systems, and fast communication may or may not mean real-time. So, when applications’ network requirements are to be considered, the focus should be on the way events are meant to be registered and processed.

Business Environments Cannot be Frozen

Given that time-frames are set according to primary events, real-time activities can be defined as exclusive ongoing events: their start initiates a proprietary time-frame perceived from the outside as being without duration, i.e. as if nothing could happen until their completion, with activities targeting the same domain supposed to be frozen.

Contrary to operational timing constraints (left), real-time ones (right) are set against the specific (i.e. event-driven) time-frames of the targeted domain.

That principle can be understood as a generalization of the ACID (Atomicity, Consistency, Isolation, Durability) scheme used to guarantee that database transactions are processed reliably. Along that understanding, a real-time business transaction would require that, whatever its actual duration, no change from other transactions be accepted to its domain representation until the business transaction is completed and its associated outcomes duly committed. Yet the hitch is that, contrary to system transactions, there is no way to freeze actual business ones, which will continue to be carried out notwithstanding suspended registrations.

Accesses can be fully synchronized within DB systems (single clock), suspended within functional architectures, consolidated within environment.

In that case the problem is not so much one of locks on the DB as one of dynamic alignment of managed representations with the changing state of affairs in their actual counterparts.
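
A minimal sketch of that alignment problem (illustrative names, no actual DB involved): while a real-time transaction owns the domain representation, environment events are not rejected but suspended, and consolidated once the transaction commits:

```python
from collections import deque

class DomainRepresentation:
    """Registrations are suspended while a real-time transaction owns the
    representation, and consolidated with the environment afterwards."""

    def __init__(self):
        self.state = {}
        self._pending = deque()   # environment events keep coming regardless
        self._locked = False

    def register(self, event: dict):
        if self._locked:
            self._pending.append(event)   # suspended, not lost
        else:
            self.state.update(event)

    def begin_transaction(self):
        self._locked = True               # exclusive time-frame starts

    def commit(self, outcome: dict):
        self.state.update(outcome)        # transaction outcome first
        self._locked = False
        while self._pending:              # then consolidate with environment
            self.state.update(self._pending.popleft())
```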

Yoking Systems & Environments

As Einstein reputedly said, “the only reason for time is so that everything doesn’t happen at once”. Along that reasoning, coupling constraints for systems can be analyzed with regard to the way events are notified and registered:

  • Input flows: what happens between changes in environment (aka facts) and their recording by applications (a).
  • Processing: could the application be executed fully based on locally available information, or be contingent on some information managed by systems at domain level (b).
  • Output flows: what happens between actions triggered by applications and the corresponding changes in the environment (c).
How to analyze the coupling between environment and system.

It’s important to keep in mind that real-time activities are not defined in absolute time units: they can be measured in microseconds as well as in aeons, and carried out by light sensors or by snails.

A Simple Decision Routine

Deciding on real-time requirements can therefore follow a straightforward routine:

  • Should changes in relevant external objects, processes, or expectations, be instantly detected at system’s boundaries ? (a)
  • Could the interpretation and processing of associated events be carried out locally, or be contingent on information shared at domain level ? (b)
  • Should subsequent actions on relevant external objects, processes, or expectations be carried out instantly ? (c)
Coupling with the environment must be synchronous and footprint local or locked.

Positive answers to the three questions entail real-time requirements, as will also be the case if access to shared information is necessary.
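
Rendered as code, here is one possible reading of the routine (the parameter names are illustrative):

```python
def real_time_required(instant_detection: bool,    # (a) changes instantly detected ?
                       shared_information: bool,   # (b) contingent on domain-level info ?
                       instant_action: bool) -> bool:  # (c) actions carried out instantly ?
    """Sketch of the decision routine: real-time requirements follow from
    positive answers to (a) and (c), and also whenever processing depends
    on information shared at domain level (b)."""
    return (instant_detection and instant_action) or shared_information

# Example: instant detection and action with purely local processing.
print(real_time_required(True, False, True))   # True
```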

What about the IoT ?

Strictly speaking, the internet of things is characterized by networked connections between non-symbolic things. As it entails asynchronous communication and some symbolic mediation in between, one may assume that the IoT cannot support real-time activities. That assumption can be checked against business cases given as examples.
