Self-driving vehicles should behave like humans; here is how to teach them to do so.
The eventuality of sharing roads with self-driven vehicles raises critical technical, social, and ethical issues. Yet a dualistic perspective (us versus them) may overlook the question of drivers’ communication (and therefore behavior), because:
- Contrary to smart cars, human drivers don’t use algorithms.
- Contrary to humans, smart cars are by nature amoral.
If roads are to become safer when shared by human and self-driven vehicles, enhancing their collaboration should be a primary concern.
Driving Is A Social Behavior
The safety of roads has more to do with social behaviors than with human driving skills, as it depends on human ability (a) to comply with clearly defined rules and (b) to communicate if and when rules fail to deal with urgent and exceptional circumstances. Given that self-driving vehicles will have no difficulty with rule compliance, the challenge is their ability to communicate with other drivers, especially human ones.
What Humans Expect From Other Drivers
The social behavior of human drivers is basically governed by clarity of intent and self-preservation:
- Clarity of intent: every driver expects from all protagonists a basic knowledge of the rules, and the intent to follow the relevant ones depending on circumstances.
- Self-preservation: every driver implicitly assumes that all protagonists will try to preserve their physical integrity.
As it happens, self-driving cars may call these assumptions and expectations into question:
- Human drivers don’t expect other drivers to be too clever in their interpretation of the rules.
- Machines have no particular concern for their own physical integrity.
Mixing human and self-driven cars may consequently induce misunderstandings that could affect the reliability of communication, and thus the safety of the roads.
Why Self-driving Cars Have To Behave Like Human Drivers
As mentioned above, driving is a social behavior whose safety depends on communication. But contrary to symbolic and explicit driving regulations, communication between drivers is implicit by necessity; when it is needed, it is needed urgently, precisely because rules are falling short of circumstances: communication has to be instant.
So, since there is no time for interpretation or reasoning about rules, or for the assessment of protagonists’ abilities, communication between drivers must be implicit and immediate. That can only be achieved if all drivers behave like human ones.
Turing’s Imitation Game Revisited
Alan Turing designed his Imitation Game as a way to distinguish between human and artificial intelligence. For that purpose a judge was to interact via a computer screen and keyboard with two anonymous “agents”, one human and one artificial, and to decide which was which.
Extending the principle to drivers’ behaviors, cars would be put on the roads of a controlled environment, some driven by humans, others self-driven. Behaviors in routine and exceptional circumstances would be recorded and analyzed for drivers and protagonists.
Control environments should also be run: one with human drivers only, and one with drivers unaware of the presence of self-driving vehicles.
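The protocol above can be sketched as a crude indistinguishability check on a recorded behavioral measure; the measure (braking reaction times), the data, and the threshold are all hypothetical placeholders for a proper statistical test:

```python
import statistics

def indistinguishable(human_samples, machine_samples, threshold=0.1):
    """Crude imitation-game test: behaviors pass if mean reaction
    times differ by less than `threshold` seconds (a stand-in for
    a real two-sample statistical test)."""
    gap = abs(statistics.mean(human_samples) - statistics.mean(machine_samples))
    return gap < threshold

# Hypothetical braking reaction times (seconds) from the two populations.
humans   = [0.9, 1.1, 1.0, 1.2, 0.8]
machines = [1.0, 1.05, 0.95, 1.1, 0.9]

print(indistinguishable(humans, machines))  # True: the judge cannot tell them apart
```

A real trial would of course compare full trajectories and many measures, but the principle is the same: the machine passes when its recorded behavior cannot be told apart from the human reference.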
Drivers’ behaviors would then be assessed according to the nature of protagonists:
- H / H: Should be the reference model for all driving behaviors.
- H / M: Human drivers should make no difference when encountering self-driving vehicles.
- M / H: Self-driving vehicles encountering human drivers should behave like good human drivers.
- Ma / Mx: Self-driving vehicles encountering self-driving protagonists and recognizing them as such could change their driving behavior, provided no human protagonists are involved.
- Ma / Ma: Self-driving vehicles encountering self-driving protagonists and recognizing them as family related could activate collaboration mechanisms, provided no other protagonists are involved.
Such a scheme could provide the basis of a driving licence equivalent for self-driving vehicles.
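The pairing table above can be encoded as a simple behavior policy; the function name, labels, and return strings are illustrative assumptions, not part of any real system:

```python
def expected_behavior(driver: str, protagonist: str, humans_nearby: bool = True) -> str:
    """Return the behavior expected of `driver` ("H", "Ma", or "Mx")
    when encountering `protagonist`, per the pairing scheme above."""
    if driver == "H":
        # H / H and H / M: humans should make no difference.
        return "reference human behavior"
    # driver is a self-driving vehicle ("Ma" or "Mx")
    if protagonist == "H" or humans_nearby:
        # M / H: with humans involved, imitate good human drivers.
        return "imitate good human driver"
    if driver == protagonist:
        # Ma / Ma: same family, no other protagonists: collaborate.
        return "activate collaboration mechanisms"
    # Ma / Mx: different families, no humans involved.
    return "machine-to-machine driving behavior"

print(expected_behavior("H", "Ma"))                        # human reference
print(expected_behavior("Ma", "H"))                        # imitate humans
print(expected_behavior("Ma", "Ma", humans_nearby=False))  # collaborate
```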
Self-driving Vehicles & Self-improving Safety
If self-driving vehicles have to behave like humans and emulate their immediate reactions, they may prove exceptionally good at it because imitation is what machines do best.
When fed with data about human drivers’ behaviors, deep-learning algorithms can extract implicit knowledge and use it to mimic human behaviors; and with sufficiently massive data inputs, such algorithms can be honed to statistically near-perfect similitude.
That could set the basis of a feedback loop:
- A limited number of self-driving vehicles (properly fed with data) are set to learn from communicating with human drivers.
- As self-driving vehicles become better at the imitation game their number can be progressively increased.
- Human behaviors improve under the influence of the growing number of self-driving vehicles, which adjust their behavior in return.
That would create a virtuous feedback loop for road safety.
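The imitation step of that loop can be sketched in miniature as behavioral cloning: a machine policy reproduces logged human (observation, action) pairs, here via a toy nearest-neighbor lookup. The log, observations, and actions are all hypothetical:

```python
# Hypothetical log of human decisions: (gap to lead car in meters, action).
human_log = [
    (5.0,  "brake"),
    (15.0, "coast"),
    (40.0, "accelerate"),
]

def imitate(gap_m: float) -> str:
    """Pick the action taken in the most similar logged human situation."""
    closest = min(human_log, key=lambda rec: abs(rec[0] - gap_m))
    return closest[1]

print(imitate(6.0))   # "brake": close gap, imitate human caution
print(imitate(35.0))  # "accelerate"

# Feedback loop in miniature: as more human records arrive,
# the imitation sharpens; fleet size could grow accordingly.
human_log.append((10.0, "coast"))
print(imitate(9.0))   # "coast", learned from the new record
```

A deep-learning system would generalize over far richer observations instead of matching raw records, but the loop is the same: more human data, closer imitation, and (in principle) more machines on the road.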
PS: Contrary Evidence
A trial in Texas demonstrates a contrario the potency of the argument by adopting the opposite policy: making clear that self-driving vehicles are indeed machines. Apart from being introduced as public transport within a defined area with designated stops, two schemes are used to inhibit the imitation game:
- Appearance: design and color enable immediate recognition.
- Communication: external screens are used for textual (as opposed to visual) notification to pedestrians and other vehicles.
The future will tell whether that policy is just a tactical step or a more significant shift towards a functional distinction between artificial brains and human ones.