A3ilabs Project POC: ChatGPT vs Project POC (Whiteboard Series, Video #4/10)

Video Transcript:

At the outset: whereas ChatGPT is the most impressive emulation of human linguistic intelligence to date, Project POC relies on a phenomenological simulation of human intelligence. The difference between these intelligences is thus not merely one of ability or efficiency but of kind. For the given prompt, the notional conversation between software agents shown here helps lay out the characterization of the simulated variant of AI. The point to note is that every word used in the dialogue is actually meant by the programs in exactly the sense humans mean it, with particular emphasis on the words ‘agree’ and ‘understand’.

From the characterization perspective, the contrast is drawn in terms of the Originality, Autonomy, and Consistency of the intelligence output. While ChatGPT upholds a contract with its trained large language models, in Project POC the Omniject provides the source of truth for an understanding of the world. The resulting consistency enforces a common basis for cross-domain acts of intelligence, such as straightening an ill-formed sentence or, say, casually smoothing a creased tablecloth. These seemingly disparate behavioral patterns may in principle be enabled by shared sets of micro-algorithms, as sketched below. The autonomy of the synthetic agents shows itself in intelligent actions carried out without explicit instruction. Such autonomous behavior leverages the system-native vantage that the micro-algorithms afford to software agents. For example, this lets a synthetic bot communicate nonrandom inventive ideas with an intrinsic purpose and motivation rather than merely respond to prompts. The originality of the intelligence output owes to the fact that the response-determining algorithms are built from scratch on the basis of synthetic cognition; comprehension, common sense, sentience, and sapience can all be seen as contributing to such cognition in Project POC. It is worth noting that the algorithms denote an intermediary output of the intelligence system rather than an input to it.
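To make the idea of a shared micro-algorithm concrete, here is a minimal illustrative sketch in Python. Every name in it (straighten, deviation, adjust) is a hypothetical stand-in invented for this example; the transcript does not specify Project POC’s actual interfaces.

    from typing import Callable, TypeVar

    T = TypeVar("T")

    def straighten(state: T,
                   deviation: Callable[[T], float],
                   adjust: Callable[[T], T],
                   tolerance: float = 0.01) -> T:
        """One shared micro-algorithm: iteratively reduce deviation from a
        canonical form, whatever the domain of 'state' happens to be."""
        while deviation(state) > tolerance:
            state = adjust(state)
        return state

    # In principle, the same loop could drive two very different acts:
    #   straighten(sentence, grammar_deviation, reorder_words)
    #   straighten(tablecloth, crease_measure, smooth_worst_crease)

The point of the sketch is only that the consistency claim is architectural: one routine, many domains.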

From a consumption standpoint, the implied generality of the intelligence provides a centralized baseline for uncertainty handling and resolution, a capacity for qualitative reasoning, a general learning competency based on comprehension, unsupervised evolution of intelligent agents, a motivation for all levels of systemic communication, and, effectively, a trans-domain, system-level Occam’s razor. These translate into program capabilities such as planning and changing plans as deemed fit, decision making, generic problem solving that involves imagining and theorizing solutions, exploring the unfamiliar or unprogrammed, and more, all within a perpetual intelligence-generation setup much like the world we live in (see the sketch below). Essentially, Project POC is all about enabling artificial thought within synthetic agencies, with general intelligence being an inevitable outcome of such a design.
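The “perpetual” framing can be pictured as an agent loop that, by design, never has a terminal state. The following toy sketch assumes hypothetical comprehend/plan primitives of my own naming; it illustrates only the shape of such a setup, not Project POC’s implementation.

    import random

    class ToyAgent:
        """Hypothetical stand-in for a synthetic agent."""
        def __init__(self, goal: int) -> None:
            self.goal = goal
            self.belief = 0

        def comprehend(self, observation: int) -> None:
            self.belief = observation        # fold what was perceived into understanding

        def make_plan(self) -> list[int]:
            step = 1 if self.belief < self.goal else -1
            return [step] * abs(self.goal - self.belief)

    def run(agent: ToyAgent, world: int, steps: int = 20) -> int:
        plan: list[int] = []
        for _ in range(steps):               # perpetual in spirit; bounded here
            agent.comprehend(world)          # perceive the world as it now is
            if not plan or random.random() < 0.1:
                plan = agent.make_plan()     # plan, or change plans as deemed fit
            if plan:
                world += plan.pop(0)         # act on the current plan
        return world

    print(run(ToyAgent(goal=5), world=0))    # converges to 5, then holds

The loop has no success condition to halt on; acting, observing, and replanning simply continue, which is the sense in which the setup resembles the world we live in.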

Finally, one of the most instructive corollaries of the research concerns the innate checks and balances that regulate any attempt at a general variant of AI, or what is popularly called AGI. Broadly speaking, this principle enforces a non-negotiable trade-off: machine-like capabilities must be given up in the simulation of any human-like general intelligence, while the emulative counterpart of the day can be seen as governed by Moore’s law. The insurance offered by this complementarity constraint limits any possible conscious machine intelligence with inherent free will to, at best, human levels of intelligence. This naturally placed upper bound on attainable general intelligence is a consequence of the alignment ingrained in the approach. For instance, endowing programs with a qualitative feel for numbers takes away the quantitative edge that machines have traditionally enjoyed over humans, as the sketch below illustrates. Another example is the perfect fact recall possible for machines, which can itself be seen as a hindrance to harnessing other sources of knowledge, say, experience. That is to say, in the interest of general AI, human vulnerabilities are as much a part of synthetic-agent enablement as the strengths themselves.
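As a toy illustration of that first example (the category names and thresholds here are invented for the sketch, not taken from the project), a qualitative representation of quantity deliberately surrenders exact arithmetic:

    def qualitative(n: float) -> str:
        """Map an exact quantity onto a coarse, human-like magnitude category."""
        if n < 3:
            return "a couple"
        if n < 10:
            return "a few"
        if n < 100:
            return "many"
        return "a great many"

    # The exact values are no longer recoverable, so the machine's
    # traditional quantitative edge is gone by construction:
    assert qualitative(42) == qualitative(87) == "many"

Once numbers are felt rather than stored, 42 and 87 are both simply “many”, and the machine can no longer out-compute a human on that representation.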

This, in conjunction with the possibility of endowing human values as core axioms rather than as relatively fallible rules, renders the intelligence safe and characteristically benevolent (a sketch of the distinction follows). On the other hand, the qualitatively constrained and weaker variant of AI possible with current-day technologies could potentially cause harm: not, of course, with a machinic intent it could never acquire, but through the misaligned intent of the AI’s human handlers.
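The axioms-versus-rules distinction can be gestured at in code. In this hypothetical sketch (the action vocabulary and function names are mine, not the project’s), an axiom shapes the action space itself, whereas a rule merely filters whatever was generated and is only as good as its coverage:

    SAFE_ACTIONS = ["assist", "inform", "ask", "wait"]   # value built into the space

    def generate_axiomatic() -> list[str]:
        # Values as axioms: the action space is constructed from the values,
        # so a violating action is never proposed in the first place.
        return list(SAFE_ACTIONS)

    def generate_then_filter(candidates: list[str], forbidden: set[str]) -> list[str]:
        # Values as rules: anything may be proposed; a fallible rule list
        # vetoes only what it happens to cover.
        return [a for a in candidates if a not in forbidden]

    # A rule list that omits "deceive" lets it through; the axiomatic
    # generator never produces it at all.
    print(generate_then_filter(["assist", "deceive"], {"harm"}))  # ['assist', 'deceive']
    print(generate_axiomatic())                                   # ['assist', 'inform', 'ask', 'wait']

The design point is that a rule sits outside the generator and can fail by omission, while an axiom is constitutive of what the agent can conceive of doing at all.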

