AI & Embedded Insanity

Summary

Bill Gates recently expressed his concerns about AI’s threats, but shouldn’t we fear insanity, artificial or otherwise?

Human vs Artificial Insanity: chicken or egg? (Peter Sellers as Dr. Strangelove)

Some clues to an answer may be found in the relationships between the purposes, designs, and behaviors of intelligent devices.

Intelligent Devices

Intelligence is generally understood as the ability to figure out situations and solve problems, with its artificial avatar turning up when such ability is exercised by devices.

Devices being human artifacts, it’s safe to assume that their design can be fully accounted for, and their purposes wholly exhibited and assessed. As a corollary, debates about AI’s threats should distinguish between harmful purposes (a moral issue) on the one hand, and faulty designs, flawed outcomes, and devious behaviors (all engineering issues) on the other. Whereas concerns for the former can arguably be left to philosophers, engineers should clearly take full responsibility for the latter.

Human, Artificial, & Insane Behaviors

Interestingly, the “human” adjective takes different meanings depending on whether it qualifies agents (human as opposed to artificial) or behaviors (human as opposed to barbaric). Hence the question: assuming that intelligent devices are supposed to mimic human behaviors, what would characterize devices’ “inhuman” behaviors?

How to characterize devices’ inhuman behaviors?

From an engineering perspective, i.e., with moral issues set aside, a tentative answer would point to some flawed reasoning, commonly described as insanity.

Purposes, Reason, Outcomes & Behaviors

As intelligence is usually associated with reason, flaws in the design of reasoning capabilities are where to look for the primary cause of hazardous device behaviors.

To begin with, the designs of intelligent devices neatly mirror human cognitive activity by combining both symbolic (processing of symbolic representations) and non-symbolic (processing of neuronal connections) capabilities. How those capabilities are put to use is therefore what characterizes the mapping of purposes to behaviors:

  • Designs relying on symbolic representations allow for explicit information processing: data is “interpreted” into information, which is then put to use as the knowledge governing behaviors.
  • Designs based on neuronal networks are characterized by implicit information processing: data is “compiled” into neuronal connections whose weights (representing knowledge) are tuned iteratively based on behavioral feedback. Both styles are sketched after the figure below.
Symbolic (north) vs non-symbolic (south) intelligence
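To make the contrast concrete, here is a minimal Python sketch of the two design styles. Everything in it is an illustrative assumption made for this post, not a reference to any actual library: the rule table, the one-neuron “network”, and all function names are invented.

```python
# Symbolic design: data is explicitly "interpreted" into information,
# and behavior follows from inspectable rules (explicit knowledge).
RULES = {"temperature_high": lambda reading: reading > 30.0}

def symbolic_behavior(reading: float) -> str:
    if RULES["temperature_high"](reading):   # interpretation: data -> information
        return "open_vent"                   # behavior governed by explicit knowledge
    return "idle"

# Non-symbolic design: data is "compiled" into connection weights,
# tuned iteratively from behavioral feedback (delta rule, one neuron).
weight, bias, lr = 0.0, 0.0, 0.1

def neural_score(reading: float) -> float:
    return weight * reading + bias           # knowledge lives in the weights

def feedback_update(reading: float, target: float) -> None:
    global weight, bias
    error = target - neural_score(reading)   # behavioral feedback
    weight += lr * error * reading           # iterative tuning; the "why"
    bias += lr * error                       # stays implicit in the numbers

print(symbolic_behavior(32.5))               # -> open_vent, traceable to a rule
for r, t in [(35.0, 1.0), (10.0, 0.0)]:      # hot -> act, cool -> idle
    feedback_update(r, t)
print(neural_score(35.0))                    # a bare score, no stated rationale
```

The point is not the toy logic but the asymmetry: the first branch can be read off, the second can only be measured.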

That distinction is to guide the analysis of potential threats:

  • Designs based on symbolic representations can support both the transparency of ends and the traceability of means. Moreover, such designs allow for the definition of broader purposes, actual or social.
  • Neuronal networks make the relationships between means and ends more opaque, because their learning kernels operate directly on data, with the supporting knowledge implicitly embodied as weighted connections. They make for more concrete and focused purposes, for which symbolic transparency and traceability have less of a bearing (see the sketch below).
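The transparency and traceability claim can be illustrated with another hedged sketch: a symbolic pipeline can log each inference step (the means) leading to its conclusion (the end), while a trained network exposes nothing but weighted connections. The rules and weights below are invented for the example.

```python
def symbolic_decision(facts: dict) -> tuple[str, list[str]]:
    """Return a decision plus the trace of rules that produced it."""
    trace = []
    if facts.get("smoke"):
        trace.append("rule1: smoke -> possible_fire")
        if facts.get("heat"):
            trace.append("rule2: possible_fire & heat -> alarm")
            return "alarm", trace
    trace.append("default: no hazard inferred")
    return "stand_by", trace

decision, why = symbolic_decision({"smoke": True, "heat": True})
print(decision, why)            # both the end and the means are auditable

# The non-symbolic counterpart offers an auditor only this:
learned_weights = [0.73, -1.12, 0.05]   # knowledge embodied as connections
```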

Risks, Knowledge, & Decision Making

As noted above, an engineering perspective should focus on the risks of flawed designs begetting discrepancies between purposes and outcomes. Schematically, two types of outcome are to be considered: symbolic ones are meant to feed human decision-making, while behavioral ones directly govern devices’ behaviors. For AI systems combining both symbolic and non-symbolic capabilities, risks can arise from:

  • Unsupervised decision making: device behaviors directly governed by implicit knowledge (a).
  • Embedded decision making: flawed decisions based on symbolic knowledge built from implicit knowledge (b).
  • Distributed decision making: muddled decisions based on symbolic knowledge built by combining different domains of discourse (c). All three configurations are sketched after the figure below.
Unsupervised (a), embedded (b), and distributed (c) decision making.
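The three configurations can be telescoped into one short illustrative sketch; all functions, weights, and thresholds are assumptions made for the example, not a definitive implementation.

```python
def implicit_score(data: list[float], weights: list[float]) -> float:
    # Implicit knowledge: an opaque learned score.
    return sum(d * w for d, w in zip(data, weights))

def unsupervised(data, weights):
    # (a) behavior directly governed by implicit knowledge
    return "act" if implicit_score(data, weights) > 0 else "wait"

def embedded(data, weights):
    # (b) an explicit symbolic rule whose premise inherits the
    # network's opacity: a flawed score begets a flawed decision
    risk = "high" if implicit_score(data, weights) > 0 else "low"
    return {"high": "shut_down", "low": "proceed"}[risk]

def distributed(plant_a: dict, plant_b: dict) -> bool:
    # (c) symbolic knowledge combined across domains of discourse;
    # "pressure" may not mean the same thing in both domains
    return plant_a["pressure"] > 1.0 and plant_b["pressure"] > 1.0

weights = [0.7, -1.2, 0.4]                 # stand-in for a trained network
sample = [1.0, 0.2, 0.5]
print(unsupervised(sample, weights))        # (a) no symbolic check in the loop
print(embedded(sample, weights))            # (b) explicit rule, opaque premise
print(distributed({"pressure": 1.5},        # (c) bar in one domain...
                  {"pressure": 101.3}))     # ...kilopascals in the other
```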

Whereas risks bred by unsupervised decision making can be handled with conventional engineering solutions (see the sketch below), that’s not the case for embedded or distributed decision making supported by intelligent devices. And both risks may be increased, respectively, by the so-called internet of things and semantic web.
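For case (a), a “conventional” safeguard amounts to wrapping the implicit policy in an explicit supervisory envelope. The sketch below is one plausible shape of such a wrapper, with invented action names and limits; no comparable wrapper exists for cases (b) and (c), because there the flaw sits inside the symbolic knowledge itself.

```python
SAFE_ACTIONS = {"wait", "slow_down", "act"}   # explicitly vetted behaviors

def supervised(policy, observation: float) -> str:
    # Conventional engineering: the learned policy proposes,
    # an explicit envelope disposes.
    action = policy(observation)
    return action if action in SAFE_ACTIONS else "wait"  # safe default

# Usage: a learned policy that might emit an unvetted action.
policy = lambda obs: "self_destruct" if obs > 0.9 else "act"
print(supervised(policy, 0.95))   # -> wait: the envelope catches it
```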

Internet of Things, Semantic Web, & Embedded Insanity

On the one hand, the so-called “second internet revolution” can be summarized as the end of privileged netizenship: while the classic internet limited its residency to computer systems duly identified by regulatory bodies, the new one makes room for every kind of device. As a consequence, many intelligent devices (e.g., cell phones) have come into their own as fully fledged systems.

On the other hand, the so-called “semantic web” can be seen as the symbolic counterpart of the internet of things, meant to provide comprehensive and consistent meanings for targeted factual realities. Yet, given that the symbolic world is not flat but built on piled mazes of meanings, charting it is bound to rely on projections with dissonant semantics. Moreover, as meanings are not supposed to be set in stone, semantic configurations have to be continuously adjusted.
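An illustrative sketch of such dissonant semantics, with domain contents invented for the example: two domains chart the same term onto different meanings, so a naive merge would silently muddle the resulting knowledge.

```python
medical = {"virus": "infectious biological agent"}
computing = {"virus": "self-replicating malicious program"}

def merge(*domains: dict) -> dict:
    merged: dict = {}
    for domain in domains:
        for term, meaning in domain.items():
            if term in merged and merged[term] != meaning:
                # projection conflict: flag instead of silently overwriting
                merged[term] = ("AMBIGUOUS", merged[term], meaning)
            else:
                merged[term] = meaning
    return merged

print(merge(medical, computing)["virus"])
# ('AMBIGUOUS', 'infectious biological agent',
#  'self-replicating malicious program')
```

And since meanings drift, any such merged chart would need continuous re-adjustment rather than a one-off reconciliation.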

That double trend clearly increases the risks of flawed outcomes and erratic behaviors:

  • Holed-up sources of implicit knowledge are bound to increase the hazards of unsupervised behaviors or to propagate unreliable information.
  • Misaligned semantic domains may blur the purposes of intelligent devices and introduce biases into knowledge processing.

But such threats are less intrinsic to AI than caused by the way it is used: insanity is more likely to spring from the ill-designed integration of intelligent and reasonable systems into the social fabric than from intrinsically insane systems.

