From Narrow AI to AGI via Narrow AGI?

Ben Goertzel
SingularityNET
Jul 31, 2019


Over the 15 years since I launched the term “Artificial General Intelligence,” I have put quite a lot of effort into emphasizing the difference between Narrow AI and AGI — between AI that does one particular sort of thing intelligently, and AGI that can learn substantially new kinds of skills and abilities, and can learn to handle situations and contexts dramatically different from those it was programmed or trained for.

I’ve been heartened to see the concept of AGI take hold to an increasing degree, to the point where recently Microsoft invested $1B in OpenAI (which in fact is no longer especially openness-oriented, and should perhaps now be rebranded as ClosedAI, or at least ClopenAI?) specifically to foster and leverage its work toward Artificial General Intelligence.

I have also pointed out, from time to time, that neither the Artificial, the General nor the Intelligence in the “AGI” concept is necessarily a profoundly fundamental category for describing real-world computational agents. These are useful conceptual tools but we need to be flexible in how we interpret, combine and leverage them.

As I think about the path we have ahead of us in the next decade, as we progress from today’s Narrow AI revolution to tomorrow’s AGI revolution, it increasingly seems to me that we need to consider the possibility of certain sorts of “Narrow AGI” systems as key parts of the transitional path. A Narrow AGI system is one that displays powerful general intelligence that is, however, heavily biased in capability toward some particular domain, such as biomedical research or math theorem proving or urban planning.

Given the heavily commercial focus of the contemporary AI field, it seems likely that the path to full human-level AGI is going to pass through a variety of Narrow AGI systems of progressively increasing generality and capability. Crafting and deploying and teaching Narrow AGI systems is going to be an engineering challenge — and also a conceptual challenge, as these systems stretch our understanding of the foundations of intelligence, computation, life and mind.

AGI and SCADS

Artificiality is flawed because, as AGI systems are released into the natural and human worlds and adapt based on their experiences, they become more and more natural and less usefully conceived of as artificial.

Generality is flawed because no intelligent system operating with finite spatial, temporal and energetic resources can actually be totally general. Humans have more general intelligence than any existing AI system, but are not too effective at, say, running mazes in 750 dimensions. Algorithmic information theory places clear bounds on the generality of any finite-resources system’s intelligence.

Intelligence is flawed because of its nebulosity. IQ tests don’t hold up across cultures or animal species, let alone across different categories of intelligent systems. Conceptions of intelligence as goal-achievement or reward-optimization fail to capture key practical aspects of human intelligence. Weaver’s notion of “open-ended intelligence” hits closer to the mark, viewing intelligence as the ability of a system to self-modify, self-improve and expand its horizons and scope in interaction with its environment — but this remains a nebulous conception, albeit stimulatingly and delightfully so.
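
To make the reward-optimization conception concrete: Legg and Hutter’s well-known “universal intelligence” measure scores an agent by its expected reward across all computable environments, weighted toward simpler environments. Their published formula (not something specific to this essay’s frameworks) looks like:

```latex
% Legg & Hutter's universal intelligence of an agent \pi:
% V_\mu^\pi is \pi's expected total reward in environment \mu,
% E is the space of computable environments, and K(\mu) is the
% Kolmogorov complexity of \mu, so simpler environments count more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```

A measure like this is elegant, but it says nothing about a system redefining its own goals or re-sculpting its own boundaries, which is exactly what the open-ended view emphasizes.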

Considering questions like — Is the Internet an AGI system? What would it mean for a SingularityNET network to become an AGI system? — one runs up against the limitations of the A, the G and the I.

I prefer to think about complex AGI-type systems as SCADS — Self-organizing Complex Adaptive Systems. This is also a somewhat nebulous construction, but at least its nebulosity is clear to see! A SCADS is an open-ended intelligent system, recognizing patterns in itself and the world, growing and changing based on its interactions and constantly re-sculpting the boundaries that define it as a system. The goals it aims to achieve, and the share of its resources it spends on achieving goals versus defining new top-level goals or self-organizing less directedly, are adapted continually based on the system’s internal and interactional dynamics.

Narrow AGIs

What I’m going to tell you about here is a particular sort of SCADS that my father Ted Goertzel once labelled “Narrow AGI.” I liked the term when he coined it, but I have generally avoided using it out of fear it might confuse people. I haven’t thought of a better alternative, though, and now that AGI itself is gaining popularity and currency, it seems the right time to introduce the Narrow AGI concept more emphatically into the discourse.

Reflecting a little on what the A, G and I mean, one can see that the notion of a Narrow AGI is not actually the oxymoron it may at first seem to be. There is a path from today’s Narrow AIs to tomorrow’s AGIs that passes through intermediate systems that are best thought of as Narrow AGIs. Furthermore, future AGI systems with transhuman generality of intelligence may operate in part by coordinating the activity of multiple Narrow AIs and Narrow AGIs.

What Ted and I mean by a Narrow AGI is something like a human being that is very, very good at some particular domain area such as medical research, mechanical engineering or visual arts — operating with profound general intelligence within that domain area — but kind of a dipshit in other domains.

A Narrow AGI medical researcher would be able to analyze any kind of biological or medical dataset, read any biological or medical research paper, operate any kind of computer-controlled bio lab equipment, design new biological or medical experiments, make new scientific hypotheses, write papers on its conclusions — and generally do everything that a really well qualified human medical researcher can do, while at work.

But this Narrow AGI medical researcher might be terrible at playing board games, recognizing human facial expressions, proving math theorems or controlling vehicles.

This would be something very different from current Narrow AI systems. In the present scheme of commercial AI, what we have is one AI system that extracts relationships from biomedical research papers (say, “drug X cures disease Y with probability p in patients of type T”), another AI system that identifies patterns in genomic data, another that identifies visual patterns in radiological data, another that generates research reports from data files and metadata, another that guesses which experiments to run based on prior related experimental results, and so forth.

Each of these Narrow AI systems is hyper-specialized in scope. For example, an AI system that extracts relationships from biomedical research abstracts would have to be extensively retuned and redesigned to extract relationships from full research papers. An AI system that extracts meaning from research papers, if it’s going to deal with figures and tables, would need to invoke specific subordinate AI systems designed and trained especially for these purposes — one AI system for graphs, one for diagrams, one for tables, etc.
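
To make this hyper-specialization concrete, here is a minimal Python sketch of the glue code around such specialists. All the classes and names are invented placeholders for illustration, not any real product’s API; the point is that the routing intelligence lives in hand-written code, and an input type nobody anticipated simply fails:

```python
# Toy sketch of a hand-wired Narrow AI pipeline for reading papers.
# Every content type gets its own specially trained model, and the
# routing logic is written by human engineers, not learned.
# All classes here are illustrative placeholders, not real APIs.

class GraphModel:
    """Trained only on line and bar graphs; useless elsewhere."""
    def extract(self, content):
        return {"kind": "graph", "series": []}      # stub output

class DiagramModel:
    """Trained only on pathway diagrams."""
    def extract(self, content):
        return {"kind": "diagram", "entities": []}  # stub output

class TableModel:
    """Trained only on tables."""
    def extract(self, content):
        return {"kind": "table", "rows": []}        # stub output

FIGURE_HANDLERS = {
    "graph": GraphModel(),
    "diagram": DiagramModel(),
    "table": TableModel(),
}

def extract_figure(figure_type, content):
    # The intelligence about *which* specialist applies lives here,
    # in human-written glue code, not in any of the models themselves.
    handler = FIGURE_HANDLERS.get(figure_type)
    if handler is None:
        raise ValueError(f"no Narrow AI exists yet for {figure_type!r}")
    return handler.extract(content)
```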

Connecting all these Narrow AI systems together into an overall usable AI-based product requires substantial human expertise. In the SingularityNET ecosystem, we have launched the for-profit company Singularity Studio to create products aimed at enterprises in specific vertical markets, where each product weaves together a number of particular AI agents running in the SingularityNET decentralized AI platform.

Many of the AI agents leveraged by a Singularity Studio product that carries out, say, financial risk management or auto traffic analytics or clinical trial analysis will be Narrow AIs tuned for very specific functions — say, time-series prediction or the recognition of pedestrians and autos in videos, or the learning of classification rules from genomics data. AI agents carrying out more abstract functions like probabilistic logical reasoning or evolutionary hypothesis creation may also be invoked — but then these will be connected to highly specialized AI agents in application-specific ways conceived by human product designers.

A Narrow AGI for biomedical analytics might leverage a small army of Narrow AI tools carrying out specific intelligent functions — but it would figure out how to combine these on its own, and it would know how to create and train new micro-application-specific narrow AIs for its own purposes as needed. It would be like a Singularity Studio product that was able to continually redesign and improve itself, based on possessing a reasonable holistic understanding of its own goals and capabilities and how it fits into the overall scheme of the human and computational world.
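
For contrast with the hand-wired pipeline sketched earlier, here is a toy Python sketch of the kind of control loop a Narrow AGI implies: the system itself selects a tool per subtask and, when its toolkit has a gap, trains and registers a new specialist. Again, every name here is an invented illustration, not SingularityNET or OpenCog code, and real goal decomposition and model training would be vastly more involved:

```python
# Toy sketch of a self-extending orchestrator. Unlike the pipeline
# above, the mapping from tasks to tools is not frozen at design
# time: the system fills gaps in its own toolkit as it works.
# All names are invented for illustration.

class Tool:
    def __init__(self, skill):
        self.skill = skill

    def run(self, task):
        return f"{self.skill} applied to {task}"

class NarrowAGIController:
    def __init__(self):
        self.tools = {}    # skill name -> Tool; grows over time

    def train_specialist(self, skill):
        # Stand-in for fitting a new micro-application-specific
        # Narrow AI when no existing tool covers the skill.
        print(f"training new specialist for: {skill}")
        return Tool(skill)

    def solve(self, subtasks):
        # In a real Narrow AGI, decomposing a goal into subtasks
        # would itself be the system's job, not the caller's.
        results = []
        for skill, task in subtasks:
            tool = self.tools.get(skill)
            if tool is None:                 # gap in its own toolkit
                tool = self.train_specialist(skill)
                self.tools[skill] = tool     # remember the new skill
            results.append(tool.run(task))
        return results

controller = NarrowAGIController()
print(controller.solve([("genomic-pattern-mining", "cohort A"),
                        ("figure-parsing", "paper 7, figure 2")]))
```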

Comparing the generality of intelligence, or the degree of intelligence, of a system like this with that of a typical human is ultimately a pointless proposition. A Narrow AGI for math theorem proving, with the ability to invent new math theorems more creatively than a human mathematician and to prove new theorems that stump the human math community, would be more general than the human mind in meaningful ways — but clearly less general in other ways.

This is the same roadblock one hits when asking whether a chess champion who can’t navigate simple social situations should really be considered more intelligent than a master politician who is mediocre at chess. There are different flavors of intelligence. In some cases, one system will be plainly more intelligent than another one, in essentially every context and dimension. In many cases — like the comparison of a politician vs. a chess player, or a Narrow AGI versus a human — we have a “six of one versus half-dozen of the other” type comparison.

From Narrow AGI to Full Human-Level AGI

One may argue that it won’t be a very big step from Narrow AGIs like this to full-on Human-Level AGIs (HLAGIs) that exceed human general intelligence in all regards. In the big picture, this is almost certainly true. A Narrow AGI whose superpower is AI theory, design and engineering is going to be able to figure out how to architect new Narrow AGIs, and ultimately how to connect multiple Narrow AGIs together in a way that lets their diverse intelligences synergize into something greater.

When the medical-research Narrow AGI and the theorem-proving Narrow AGI and the car-driving Narrow AGI and the smart-city-management Narrow AGI and the home-service-robotics Narrow AGI and the tutoring-system Narrow AGI are connected together in such a way that they can share their internal knowledge and intuitions with each other — one will have a SCADS that will self-organize into something that is not so narrow anymore. This is one possible route from the Narrow AIs of today to the full-on HLAGIs (with beyond-human capabilities in various regards) and then the transhuman AGIs of tomorrow.

The cross-connection of different Narrow AGIs into an overall coordinated AGI system with a greater level of generality will be eased if the various Narrow AGIs are using interoperable software frameworks, and if they are using knowledge representations founded on domain-independent abstractions. The SingularityNET and OpenCog frameworks are, respectively, designed to foster precisely these sorts of interoperability and abstraction.
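
As one rough illustration of what “knowledge representations founded on domain-independent abstractions” can mean in practice, here is a toy Python sketch loosely echoing the flavor of OpenCog’s AtomSpace, which represents knowledge as typed nodes and links in a hypergraph annotated with truth values. This is a drastic simplification for illustration, not the actual OpenCog API:

```python
# Toy, domain-independent knowledge atoms, loosely in the spirit of
# OpenCog's AtomSpace (typed nodes and links with truth values).
# Not the real OpenCog API; just an illustration of the idea.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    type: str    # e.g. "ConceptNode", "PredicateNode"
    name: str

@dataclass(frozen=True)
class Link:
    type: str    # e.g. "EvaluationLink", "InheritanceLink"
    outgoing: tuple           # the atoms this link connects
    strength: float = 1.0     # toy truth value
    confidence: float = 1.0

# Because the abstractions carry no domain assumptions, a biomedical
# fact and a mathematical fact live in the same representation, which
# is what lets different Narrow AGIs exchange knowledge directly.
cures = Node("PredicateNode", "cures")
drug_x = Node("ConceptNode", "drug-X")
disease = Node("ConceptNode", "disease-Y")
fact = Link("EvaluationLink", (cures, drug_x, disease),
            strength=0.8, confidence=0.6)
print(fact)
```

The real AtomSpace layers indexing, pattern matching and attention allocation on top of such atoms, but the domain-independence of the representation is the point relevant here.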

However, the transition from Narrow AGIs to full-on HLAGI, to an integrated AGI system that is more generally intelligent than humans in essentially every way, may take a bit of time. I don’t think it will take decades. But I think it might well take years — years during which the Narrow AGIs themselves will be improving and maturing and becoming more and more AGI-ish and less and less narrow.

In a grand sense, these transitional years will be a blip along the historical path from early AI systems to powerful AGIs and superintelligences. But for those of us living through this stretch of history, the particulars of Narrow AGIs during the transition phase will be critical to understand and work with.

The Narrow AGIs I envision will become less and less artificial as they progress — they will be carrying out natural functions within the human economy and the biological and physical world, and shaping their intelligence as they learn to better fill their socioeconomic niches.

These Narrow AGIs will move gradually toward greater and greater generality, but along a different path than a typical human toddler’s — more like special types of human idiot savants, to stretch the analogy just a little.

There may also be research-oriented AGI systems that progress along paths similar to a human toddler’s. I have been agitating for two decades for an “AGI Preschool” research program, in which one takes an AGI-oriented architecture like OpenCog and embodies it in a robotic or virtual preschool-type environment, attempting to direct its learning in a manner similar to what happens with human babies and toddlers.

This seems a very important research direction, but the fact is that right now there is a lot more financial, computing and human energy going into commercial, application-oriented AI — which makes me think that this “AGI toddler” research is likely to be outpaced in some relevant ways by Narrow AGIs doing directly economically useful things. It may be that full-on human-level AGI comes from the interconnection of various commercially oriented Narrow AGIs with a research-oriented AGI toddler, and the emergence of new self-organized intelligent structures in the composite system thus formed.

Gardner’s theory of Multiple Intelligences divides human general intelligence into multiple categories: linguistic, logical-mathematical, musical, bodily-kinesthetic, spatial, interpersonal, intrapersonal, naturalist and existential. Each of the domain-focused Narrow AGIs I envision will exceed human level in one or two of these multiple intelligences. An AGI toddler will advance toward human level in all of these areas concurrently.

By building, interacting with and cross-connecting systems like these, we will stretch our conceptions of artificiality, generality and intelligence — and this is a good thing. We are building a self-organizing, complex adaptive hardware and software network involving numerous intelligent components with diverse types of intelligence, possessing different degrees and varieties of generality — and we are integrating ourselves with this network, transforming our own individual and collective human intelligence in the process.

If you would like to learn more about SingularityNET, we have a passionate and talented community which you can connect with by visiting our Community Forum. Feel free to say hello and introduce yourself here. We are proud of our developers and researchers, who are actively publishing their research for the benefit of the community; you can read the research here.

For any additional information, please refer to our roadmap. Do not forget to subscribe to our newsletter to stay informed about all of our developments.
