The New Cabalists

Pure Generative schemes use number crunching to mine meanings through neural networks built from tokenised sentences, sounds, and images. As such they can be seen as the brainchild of new age cabalists.

Cabalists' Neural Network


The sudden irruption of Generative technologies on the business scene is characterized by two symmetrical aspects: on the business side, a flurry of announcements for alleged use cases; on the technical side, the scarcity and opacity of specifics about what happens under the hood of contraptions akin to self-driving vehicles without GPS or black boxes. Nevertheless, like self-driving technologies, generative ones don't have to replace humans to bring wide-ranging benefits as intelligent helpers.

Mining & Meanings

As is often the case with suddenly emerging technologies, marketing pitches range widely and randomly, especially when technical details are concealed. Yet, from the outside, developments appear to be focused on two kinds of communication: conversational (typically editing tools) and mediated (typically search engines). The difference is critical because while conversational communication can get meanings from immediate context, that is not possible for mediated communication, which must therefore rely on shared symbolic representations.

Conversational & Mediated Communication

Online search engines being an advertising cash cow, they have instantly become the locus of fierce competition between big techs eager to defend (Google) or extend (Microsoft) the pastures under their guardianship. Given the potential disruption of barriers to entry, incumbents try to keep a lid on their technology, swapping transparency for precautionary advice meant to prevent liabilities.

Compared to search engines, driven by business opportunism in open-ended contexts, editing tools are driven by clearly defined purposes set in bounded contexts. That transparency enables a wide range of knowledge-based supporting functions (tutoring, training, teaching, discovery, etc.), with significant momentum already observed for publishing and programming tools.

That parallel split with regard to applications (search vs editing) and transparency (or lack thereof) reflects the paradox of generative AI technologies, which are meant to deal with emerging (generative) as well as reasoned (intelligent) contents.

The Matter of Communication

Up-to-date generative tools like GPT-4 are multi-modal, i.e. they can process sounds, images, and texts; beyond that, little is known about their modus operandi. Nevertheless, some assumptions can be made about the way they do their job.

Language models can be summarily defined by the way they use grammars to associate facts with meanings:

  • Layered models deal separately with words (lexicon), structures (syntax), sentences (semantics), and contexts (pragmatics)
  • Semantic grammars combine syntax (roles) and semantics (references)
  • Large language models (LLM) build meanings by applying machine learning algorithms to massive samples of actual texts
Generative vs Grammar-based Language Models

Compared to layered and semantic grammars, which extract meanings through explicit grammatical categories (dotted arrow), generative languages (solid arrow) bypass grammars and rely instead on massive samples to assign meanings to tokenised images, texts, and sounds.

By contrast, empiric (or scientific) languages use domain-specific syntax and semantics to map facts into categories and models; and formal (or logic) ones use generic and truth-preserving syntax and semantics to align concepts and presumptive models with categories.

That comparison points to the main caveat of generative approaches: their reliance on conversational shortcuts between context and meanings (i.e. no mediation through explicit categories) induces an intrinsic lack of transparency.
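The contrast between the two routes can be sketched in miniature, assuming toy-sized stand-ins for both: an explicit lexicon of grammatical categories on one side, and "meanings" mined as co-occurrence statistics from raw samples on the other (all words, categories, and corpora below are illustrative):

```python
from collections import defaultdict
from math import sqrt

# Grammar-based route (dotted arrow): explicit categories mediate between
# words and meanings; every interpretation can cite its category.
LEXICON = {"cat": "NOUN", "dog": "NOUN", "mouse": "NOUN",
           "chases": "VERB", "the": "DET"}

def parse(sentence):
    """Assign explicit grammatical categories before any meaning is derived."""
    return [(word, LEXICON.get(word, "UNK")) for word in sentence.split()]

# Generative route (solid arrow): no categories, only co-occurrence
# statistics mined from a (here toy-sized) sample of texts.
def cooccurrence_vectors(corpus, window=2):
    vectors = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    vectors[token][tokens[j]] += 1
    return vectors

def similarity(vectors, a, b):
    """Cosine similarity: tokens with similar contexts get similar 'meanings'."""
    va, vb = vectors[a], vectors[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = cooccurrence_vectors(["the cat chases the mouse",
                                "the dog chases the cat"])
print(parse("the cat chases the dog"))
print(similarity(vectors, "cat", "dog"), similarity(vectors, "cat", "chases"))
```

The grammar-based route can always point to the category that justified an interpretation; the generative route can only report that two tokens occur in similar contexts, which is precisely the shortcut the caveat above refers to.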

The Matter of Transparency

Given the ubiquity of disinformation and fake news, chatbots' transparency is arguably a primary concern; hence the need for traceability and accountability, the former with regard to sources and reasoning, the latter with regard to judgment. That can be achieved with explicit paths between problem and solution spaces:

  • Problem space: neural networks representing tokenised documents
  • Extensional solution space: knowledge graphs representing contexts and meanings
  • Intensional (intermediate) solution space: grammars used as backbones of semantic networks
Meanings & Transparency

Grammar-based approaches (dashed arrow) combine syntax and semantic threads to weave the fabric of meanings:

  • Given syntactic categories (backbones), thesauruses and taxonomies are used to define semantic networks
  • Semantic networks are aligned with Descriptive/Prescriptive grammar frameworks
  • Ontologies and thesauruses are used to refine meanings with regard to contexts and purposes

That would enable some degree of traceability for the categories deemed to be relevant (taxonomies) and the reliability of sources and causal chains (ontologies); dialog could do the same for accountability when judgements are called for by uncertainties and/or conflicts.
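That traceability can be sketched with a toy taxonomy and fact base (all terms below are illustrative assumptions): an answer is returned together with the taxonomy path and the fact that justifies it, so both the relevant categories and the causal chain remain inspectable.

```python
# Illustrative taxonomy (child -> parent) and fact base (triples).
TAXONOMY = {"poodle": "dog", "dog": "mammal", "mammal": "animal"}
FACTS = {("mammal", "has", "fur"), ("animal", "can", "move")}

def ancestors(term):
    """Walk the taxonomy upward, recording the path (the traceable backbone)."""
    path = [term]
    while path[-1] in TAXONOMY:
        path.append(TAXONOMY[path[-1]])
    return path

def explain(term, predicate, value):
    """Return the chain justifying a fact, or None if it is unsupported."""
    path = ancestors(term)
    for i, node in enumerate(path):
        if (node, predicate, value) in FACTS:
            return path[: i + 1] + [f"{predicate} {value}"]
    return None

print(explain("poodle", "has", "fur"))
# -> ['poodle', 'dog', 'mammal', 'has fur']: every step is inspectable
```

An unsupported claim simply returns None instead of a plausible-sounding answer, which is the behaviour the accountability requirement calls for.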

Generative approaches are built on the assumption that syntactic and semantic categories (aka semantic grammars) are implicit in discourses and that meanings can thus be directly mined from massive samples. To that effect, assuming a preparatory parsing for language-specific core lexical and syntactic constructs, machine learning is applied through dialog to amend and refine the meanings:

  • Inputs are tokenised and neural networks built using language-specific core syntactic and lexical constructs
  • Thesauruses and deep learning are used to morph neural networks into semantic ones
  • Dialog and reinforcement learning are used to fine-tune the weights of semantic connectors

Transparency thus literally lies in the eye of the beholder, first as he sets the course through his questioning, and then as he puts his (blind) trust in the answers.
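The three-step pipeline above can be sketched in miniature, with a weighted adjacency network standing in for neural and semantic networks, and a single reward signal standing in for dialog-based reinforcement (all names and numbers are illustrative assumptions):

```python
def tokenise(text):
    """Step 1: inputs are tokenised (a bare-bones stand-in)."""
    return text.lower().split()

def build_network(corpus):
    """Step 2: build a network whose edge weights stand in for the
    connectors learned from massive samples."""
    weights = {}
    for text in corpus:
        tokens = tokenise(text)
        for a, b in zip(tokens, tokens[1:]):
            weights[(a, b)] = weights.get((a, b), 0.0) + 1.0
    return weights

def reinforce(weights, edge, reward, rate=0.1):
    """Step 3: dialog feedback fine-tunes a connector's weight
    (reinforcement learning in miniature)."""
    weights[edge] = weights.get(edge, 0.0) + rate * reward

net = build_network(["models mine meanings", "models mine samples"])
reinforce(net, ("mine", "meanings"), reward=+1.0)   # user approved the answer
reinforce(net, ("mine", "samples"), reward=-1.0)    # user rejected it
```

Note that nothing in the fine-tuned weights records *why* an edge was rewarded or penalised: the user's questioning steers the weights, but the rationale evaporates, which is the transparency gap described above.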

Learners & Machines

As it happens, a reasoned assessment of GAI capabilities is already replacing the misguided initial hype, and the diagnosis is clear: while impressive and wide-ranging benefits can be expected from large language model technologies, most depend on the transparency of machine learning capabilities.

Broadly speaking machine learning can rely on three basic schemes:

  • Symbolic learning: programming languages (e.g. Lisp and Prolog) can be used to define, apply, assess, and improve rules according to outcomes.
  • Supervised Deep learning (DL) uses neural networks to reproduce cognitive processes, and profiled data samples to train systems through examples.
  • Reinforcement learning (RL) replaces profiled samples and training with extensive raw inputs and statistical inference.
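The three schemes can be contrasted in miniature: a lookup rule for symbolic learning, a one-feature perceptron standing in for supervised deep learning, and an incremental value average standing in for reinforcement learning (all rules and data below are illustrative):

```python
# 1. Symbolic learning: rules are explicit and can be inspected or amended.
rules = {"if_fur": "mammal"}

def classify_symbolic(features):
    return rules["if_fur"] if "fur" in features else "unknown"

# 2. Supervised learning: fit weights from labelled examples
# (a one-feature perceptron as a stand-in for deep learning).
def train_supervised(samples, epochs=10, rate=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w * x + b > 0 else 0
            w += rate * (label - pred) * x
            b += rate * (label - pred)
    return w, b

# 3. Reinforcement learning: no labels, only rewards, averaged over trials.
def update_value(value, reward, count):
    return value + (reward - value) / count

print(classify_symbolic({"fur"}))
w, b = train_supervised([(1.0, 1), (-1.0, 0)])
value = update_value(0.0, 1.0, count=1)
```

Only the first scheme leaves an inspectable rule behind; the other two leave numbers, which is why their combination with explicit grammars matters for transparency.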

In theory, language models could combine all three schemes: reinforcement learning for thesauruses, symbolic learning for descriptive or prescriptive grammars, supervised learning for conversations. In practice, generative languages are supposed to bypass explicit grammars and rely fully on machine learning capabilities for both grammars and conversations. So, while conversations with users can enable two-pronged learning for both humans and machines, that does nothing for the transparency of the knowledge supporting mediated communication.

Tech firms are already trying to improve transparency through the integration of generative technologies with more established tools, directly or through dialog-based reinforcement learning, typically for statistics and programming.

Where to Find Significant Others

Nonetheless, to ensure the full benefits of generative technologies, current breakthroughs in language models must be balanced by corresponding advances in knowledge management. 

