LLMs & the Matter of Regulations

Pondering what GPT would say (Zhigang-tang)

Motives & Motifs

The recent surge of generative technologies has projected AI from known unknowns to unknown unknowns, triggering mirrored defensive and offensive attitudes: the former calling for new regulations or even a pause, the latter for a renewed assessment of risks and opportunities. Both perspectives point to a common ground.

On the defensive side, the intrinsic opacity of generative AI technologies leaves little room for informed governance, which shifts the focus to boundaries and, consequently, to integration with existing regulatory frameworks.

On the offensive side, attention is settling on transparency issues and the potential benefits of hybrid models that would add symbolic representations to neural networks, thereby improving their transparency.

Defensive and offensive arguments thus converge on hybrid LLM models framed by references to:

  • Facts, for data and percepts
  • Categories, for information and reason
  • Concepts, for knowledge and judgment

A frame for hybrid models

Interoperability could be achieved through:

  • Thesauruses, for the mapping of conversational sequences to meanings according to contexts and purposes
  • Taxonomies, for the mapping of features to categories
  • Logic, for the transformation of phrases and the translation of sentences

That would enable a pragmatic integration of generative AI regulations with respect to privacy, intellectual property, and decision-making.

LLMs & Regulatory Contexts

Assuming some dedicated symbolic references to facts, categories, and concepts, LLM regulations should first be aligned with those pertaining to privacy and intellectual property.

LLM regulatory contexts

With regard to privacy, LLM regulations should ensure that any linking of features and taxonomies (encoded through parameters) to identified individuals can only occur through legitimate business applications, typically by preventing the direct or inferred association of features (data) with identified individuals (information).
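The privacy requirement above can be pictured as a guard at the linking step. The sketch below is an assumption-laden illustration (registry names and application identifiers are invented): associating features with an identified individual is refused unless the request comes from a registered legitimate application.

```python
# Illustrative sketch (all names are assumptions): a guard permitting the
# association of features with an identified individual only when the
# request comes from a registered, legitimate business application.

LEGITIMATE_APPS = {"claims-processing", "care-coordination"}

def link_features_to_individual(app_id: str, features: dict, individual_id: str) -> dict:
    """Refuse direct or inferred association outside legitimate applications."""
    if app_id not in LEGITIMATE_APPS:
        raise PermissionError(f"{app_id} may not associate features with individuals")
    return {"individual": individual_id, **features}
```

In practice such a guard would sit between the parametric (features) and the referential (individuals) sides of the system, making the separation auditable.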

With regard to intellectual property, regulatory systems distinguish between copyrights and patents.

Copyrights deal with the ownership of creative works, which can be summarily defined as singular and nominal:

  • Singular: creative works are identified as a whole, not in reference to types or categories
  • Nominal: creative works are identified at conceptual level independently of structures or roles

For LLM regulations, the objective would be to use thesauruses and built-in ontological modalities to manage the association between copyrights (concepts) and the corresponding actual footprints (documents).

Once limited to physical apparatuses, patents nowadays pertain to the design and modus operandi of the whole range of artifacts: physical or symbolic, actual or virtual. With regard to LLMs, it follows that:

  • Compared to copyrights and privacy rules, which pertain to actual resources, patents solely pertain to structural or functional categories
  • Compared to copyrights, patents do not protect their target but exclude or restrain their use

As suggested by the plethora of forestalling tactics and patent trolls, harnessing LLM regulations to patents would entail charting the legitimate lay of the land, namely facts (e.g. pre-training), concepts (e.g. new purposes), and categories (e.g. new designs).

That could be achieved by using generative capabilities and modal logic to sort out categories not subsumed within patent footprints.
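The sorting step above can be sketched as a subsumption test. In the toy model below both patent footprints and candidate categories are represented as feature sets, which is an assumption made for the illustration (real patent claims are far richer); a candidate is subsumed by a footprint when it exhibits every claimed feature.

```python
# Hedged sketch: patent footprints and candidate categories are both
# modeled as feature sets (an assumption for illustration only).
# A candidate is subsumed by a footprint when it carries all claimed features.

PATENT_FOOTPRINTS = {
    "patent-A": {"attention", "pre-training"},
    "patent-B": {"retrieval", "ranking"},
}

def free_categories(candidates: dict[str, set[str]]) -> list[str]:
    """Return the candidate categories not subsumed within any patent footprint."""
    return [
        name for name, feats in candidates.items()
        if not any(claim <= feats for claim in PATENT_FOOTPRINTS.values())
    ]
```

A modal-logic treatment would refine this set-inclusion test with possibility and necessity operators, but the basic sorting of "free" versus "subsumed" categories is already visible here.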

LLMs & Decision-making

LLM regulations should also be considered from a broader AI perspective, notably when generative technologies are used to support decision-making. On that account, the European Union's proposed approach defines four levels of risk (minimal, limited, high, unacceptable) depending on the nature of decisions; that approach is doubly flawed:

  • Risks are defined according to administrative distinctions made redundant by the digital integration of organizations and processes and by the ubiquity of AI technologies.
  • Obligations are defined in equivocal terms like “adequate”, “high” quality or level, “detailed”, “necessary”, “clear”, or “appropriate”.

Instead, risk profiles should be framed by the kind of cognitive capacity involved (observation, reasoning, judgment, experience) and the reliability of corresponding resources (data, information, knowledge):

Risk profiles
  • Routine: experience and observation (facts/data) support reasoning (categories/information) and judgment (concepts/knowledge) (a)
  • Operational: more experience and observation are needed to support judgment (b)
  • Critical: more experience and observation are needed to support reasoning and judgment (c)
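The risk-profile frame above can be stated as a simple mapping. The sketch below is illustrative (the profile names follow the list above; the set-based encoding of "capacities needing extra support" is an assumption):

```python
# Illustrative sketch of the risk-profile frame: the profile depends on
# which cognitive capacities must be reinforced by additional experience
# and observation. The set-based encoding is an assumption for the example.

def risk_profile(needs_support: set[str]) -> str:
    """Map the capacities needing extra support to a risk profile."""
    if not needs_support:
        return "routine"        # (a) facts/data already support reasoning and judgment
    if needs_support == {"judgment"}:
        return "operational"    # (b) judgment needs more experience and observation
    if needs_support == {"reasoning", "judgment"}:
        return "critical"       # (c) both reasoning and judgment need support
    return "unclassified"       # outside the three profiles listed above
```

Unlike the EU's administrative tiers, such a mapping is anchored in the cognitive capacities actually involved, so it can be assessed against the reliability of the underlying data, information, and knowledge.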

LLM and, more generally, AI regulations could thus put the focus on transparency obligations that would provide for (1) informed risk assessment by business analysts and knowledge managers, and (2) sound designs by architects and engineers.




