Video Transcript:
This second supplementary video, a companion to the fifth of the 10-part whiteboard video series, features one of the four macro technological paradigms allowed by the Acontextual Model of Cognition. This paradigm has its roots in the inclusion of the programmer element in modeling synthetic intelligence. Each of these paradigms will disrupt not only our approach to AI but is also likely to have implications well beyond it.
The intelligence dynamics were rendered from the viewpoint of a human observer who is integral to the conception of the model. Let’s say the observer is also the programmer of the system. Though the programmer was kept out of the system, the systemic functioning depends on three extracts from the programmer: the object understanding (condensed as the Semantic), the intent towards the object (abstracted as the Subject-Object contract), and the capability to maneuver the object (mimicked as the Heuristics). This triad operates in seamless synchronization within the human mind, providing an explanation not only for every civilizational achievement but also for the most mundane human behavioral quirks. The Acontextual model lays out the contexts for a synthetic simulation of such synchronization in two broad steps. The first involves internalizing the triad into the intelligence system by delinking it from the programmer. Adjectivization, as it is called, lets the notion of intelligence within the system be seen as an adjective of the agent programs. Notwithstanding the hype around advances in the field of Artificial Intelligence, human-synthesized intelligence has always been a noun. That is to say, intelligence has certainly been produced through the medium of programming, but the programs themselves could never be rendered intelligent the way humans are. Evidently, this limits our ability to endow synthetic intelligences with any autonomy or originality.
With the source of sustenance cut off, the intelligence system requires an alternate impetus to keep going. Moreover, the impetus needs to come from within the system, setting the internalized triad in motion. Such self-sustaining systemic dynamism is perpetually fueled by the Acontextualization hypothesis, channeled through the component subject agencies. The system-level sustenance engine that makes this possible is termed Acontextualization, the second step in the simulation of the human intellect. This could be seen as relating to the consistency aspect of the intelligence system.
To provide a perspective on what the technological paradigm entails, we relate it to the successive waves of AI at a broad level. The criterion for appraising the evolution is drawn from the proposed model, with the separation of the Semantic, the Contract, and the Heuristic signifying the Adjectivization aspect. The Acontextualization characteristics indicate the systemic support for the notions so internalized and the various possible intersections thereof. The definitive inspiration for conceiving the Acontextual model, the human intellect, of course scores perfectly on all counts, including the ultimate aim of an ‘Intentful comprehension-based maneuvering’ of a given object by the intelligence.
The reasoning-intensive programs of the first wave of AI depend on knowledge handcrafted by subject-matter experts (SMEs) in the form of explicit rules and instructions. The programs take shape entirely outside the system and are then infused into the notional intelligence system. The intellect denoted by the medium of programs forms the entirety of the possible systemic outcome, with no intent to generate additional intelligence. In effect, there has never been any aspiration whatsoever toward Adjectivization or Acontextualization.
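As a concrete illustration of this first-wave pattern, consider a minimal forward-chaining rule engine, sketched below. The rules and fact names are invented for the sketch and are not from the source; the point is that every piece of "knowledge" is authored by a human expert outside the system, so the program can never exceed what was encoded.

```python
# Illustrative sketch (hypothetical rules): a first-wave, rule-based system.
# All knowledge lives in rules handcrafted outside the "intelligence system".

RULES = [
    # (premises, conclusion) pairs authored entirely by a human SME
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts):
    """Apply the handcrafted rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_fever", "has_cough", "short_of_breath"})))
# → ['has_cough', 'has_fever', 'refer_to_doctor', 'short_of_breath', 'suspect_flu']
```

Note how the derived facts are a fixed closure of the authored rules: the system infers, but only within the intellect infused from outside.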
The second wave of AI is characterized by the abilities to perceive and learn through statistical learning. The mainstay technology of this wave, neural networks, relies on the probabilistic character of the object to extract intelligence in the form of statistical data patterns. The intelligence so realized, applied within a given domain, appears to effect an Adjectivization and Acontextualization of the Semantic: a separation of the program’s understanding of the object from that of the programmer. However, this capability comes at the enormous cost of sacrificing indefinite inward extensibility and the domain neutrality of the Semantic. While the heuristic and intent aspects are left practically untouched, the approach is further constrained by the need for good training data. Neural nets may at best be seen as an emulative workaround in lieu of any conceivable method for Adjectivization and Acontextualization of the Semantic.
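To contrast this with the first wave, the sketch below trains a single perceptron (the simplest ancestor of the neural networks named above) on a tiny dataset. No rule is ever written down: the decision boundary is extracted purely from data patterns, so the program's "understanding" lives in learned weights rather than in the programmer's explicit instructions. The data and hyperparameters are invented for the illustration.

```python
# Illustrative sketch: second-wave statistical learning with a perceptron.
# The "knowledge" is extracted from data, not handcrafted by an SME.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labeled samples via the perceptron update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data for logical AND -- the only input the system receives.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])
# → [0, 0, 0, 1]
```

The cost noted above is also visible in miniature: the learned weights are tied to this one domain and depend entirely on having good training data.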
The incumbent third wave of AI focuses on the contextual adaptation of programs by abstracting knowledge. In its early stages, the wave looks to be defined by impressive strides in Generative AI. The competencies on display in the innovations of this wave, which include ChatGPT, DALL-E, Autopilot, etc., clearly step into the heuristic capability aspects of an Acontextual intelligence system. These applications are taught to mimic the patterns, structures, and characteristics of a given data set and then use that understanding to create seemingly new and unique outputs. However, the limitations of the previous wave are carried over, alongside its central technology of neural networks. Essentially, the emulative illusion of Adjectivization and Acontextualization gets extended to the heuristics in this wave. The excitement over the recent advances may partly be attributed to what looks like the ‘comprehension-based maneuvering’ of the object, be it a textual prompt, a request for a program, or art on demand. Not surprisingly, the future fourth wave of AI is expected to be about Autonomy, the missing link in completing the picture.
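The "mimic patterns, then emit seemingly new outputs" loop of this wave can be shown at toy scale with a bigram text generator, sketched below. This is of course a vastly simplified stand-in for the large generative models named above (the corpus and names are invented for the sketch), but the mechanism is the same in kind: learn the statistical structure of a data set, then sample from it to produce output that was never literally in the data.

```python
# Illustrative sketch: a toy generative model (bigram sampler).
# It mimics the patterns of a corpus and emits seemingly new sequences.

import random
from collections import defaultdict

def learn_bigrams(corpus):
    """Record which word follows which -- the 'mimicked' structure."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=6, seed=0):
    """Sample a new sequence from the learned patterns."""
    random.seed(seed)  # deterministic for the sketch
    out = [start]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the system learns the patterns and the patterns shape the output"
model = learn_bigrams(corpus)
print(generate(model, "the"))
```

Every emitted transition already existed in the training corpus, which mirrors the point made above: the output is an emulative recombination of learned patterns, not comprehension.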
Although the propensity of Artificial Intelligence as a field of research to move along the direction set by Adjectivization and Acontextualization is to be expected, our approach remains grounded in an emulation of the output. As might be evident, the simulation of the human intellect is easier theorized than achieved, with any instance of plausible synthetic intelligence necessarily incorporating the deep-seated human element, either directly or indirectly. The unprecedented features of Project POC laid out in whiteboard video number 4 have their roots in a never-before-realized (though partial) separation between the programmer and the programs. In fact, it will be seen why the separation has not yet been fully accomplished and how it might never be possible unless drastic changes are made to the way we do AI. Furthermore, the envisaged change to the existing AI paradigm is not merely architectural or procedural but involves a revamp of the de facto foundations on which the human body of knowledge is erected. Such a fundamental shift in the prevailing contexts impacts every formal aspect of the human episteme, including our sciences, mathematics, language, logic, programming, etc., not to mention our own patterns of inquiry and skepticism. The theory of mind advanced in this research essentially portrays all human progress as having evolved within the contexts defined by ingrained notions of object realism, as a special case of an otherwise more holistic theory of knowledge. The human semantic neighborhoods uncovered thus promise to hold the key not only to a general human-like variant of AI but also to several persistent problems faced by humanity.
Consequently, each of the four macro technological paradigms derived from the Acontextual model abstracts more generic notions related to intelligence, along with respective theoretical and technological objectives. In the case of the current discussion, intelligence is not merely the subject agent’s competence in maneuvering the object, but rather a systemic construct that in turn sustains the system in its operation and progression by adhering to and consistently enhancing the agent’s contract with the object. Adjectivization and Acontextualization form the theoretical arms for evolving an operational framework for such an understanding. Newer tech stacks conceived to align with such generic notions sustain the altered ways of designing, developing, training, testing, etc. of AI systems.
Finally, a couple of questions to ponder, to better appreciate the scale of the potential disruption:
- Given that micro-algorithms represent the object understanding but not the associated conceptions resulting from the programmer’s intelligence, what might be the composition of these programs?
- Broadly speaking, Acontextualization is about providing choice as an axiom, along with internalizing the other components of axiomatic frameworks like Zermelo-Fraenkel set theory or first-order logic into the systemic synthetic agents. Given that any humanly conceivable mathematics is founded upon these frameworks, what sort of mathematical formulation could depict the process of Acontextualization?
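As background for the second question, the Axiom of Choice referenced alongside the Zermelo-Fraenkel framework can be stated in first-order logic as follows (a standard formulation, not from the source):

```latex
% Axiom of Choice: every family X of nonempty sets admits a choice function f.
\forall X \,\Bigl[\, \emptyset \notin X \;\rightarrow\;
  \exists f\colon X \to \textstyle\bigcup X \;\;
  \forall A \in X \,\bigl(f(A) \in A\bigr) \,\Bigr]
```

Any formulation of Acontextualization would presumably have to say where such a choice function comes from once no programmer is there to supply it.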