What Is Artificial General Intelligence?
When it was founded over 50 years ago, the AI field was directly aimed at the construction of "thinking machines"—that is, computer systems with human-like general intelligence. The whole package, complete with all the bells and whistles like self, will, attention, creativity, and so forth.
But this goal proved very difficult to achieve; and so, over the years, AI researchers have come to focus mainly on producing "narrow AI" systems: software displaying intelligence regarding specific tasks in relatively narrow domains.
This "narrow AI" work has often been exciting and successful. It has produced, for instance, chess-playing programs that can defeat any human; and programs that can diagnose diseases better than human doctors. It has produced programs that translate speech to text, analyze genomics data, drive automated vehicles, and predict stock prices. The list goes on and on. In fact, mainstream software like Google and Mathematica utilize AI algorithms (in the sense that their underlying algorithms resemble those taught in university courses on AI).
There is a sarcastic saying that once some goal has been achieved by a computer program, it is reclassified as 'not AI.' And, as with much sarcasm, there is some underlying truth to this remark. But the deeper lesson of these narrow-AI achievements is just how different the creation of specialized AI tools is from what's needed to create a thinking machine. Useful as they are, these achievements have not yet carried us very far toward that goal.
Some researchers believe that narrow AI eventually will lead us to general AI. This, for instance, is probably what Google founder Sergey Brin means when he calls Google an 'AI company.'1 His idea seems to be, roughly speaking, that Google's narrow-AI work on text search and related problems will gradually lead to smarter and smarter machines that will eventually achieve true human-level understanding and cognition.
On the other hand, some other researchers—including the author—believe that narrow AI and general AI are fundamentally different pursuits. From this perspective, if general intelligence is the objective, AI R&D needs to redirect itself toward the original goals of the field—transitioning away from the current focus on highly specialized narrow-AI problem-solving systems, and back to confronting the harder issues of human-level intelligence and, ultimately, intelligence beyond the human level. With this in mind, I and some other AI researchers have begun using the term Artificial General Intelligence, or AGI, to distinguish work on general thinking machines from work aimed at creating software that solves various 'narrow AI' problems.
Some of the work done so far on narrow AI can play an important role in general AI research—but from the AGI perspective, to be useful in this way, it must be viewed in a different light. My own view, which I'll elaborate here, is that the crux of intelligence mostly has to do with the emergent structures and dynamics that arise in a complex goal-achieving system, allowing the system to model and predict its own overall coordinated behavior patterns. These structures and dynamics include things we sloppily describe with words like "self," "will" and "attention."
In this view, thinking of a mind as a toolkit of specialized methods—like the ones developed by narrow-AI researchers—is misleading. A mind must contain a collection of specialized processes that synergize so as to give rise to the appropriate high-level emergent structures and dynamics. The individual components of an AGI system might in some cases resemble algorithms created by narrow-AI researchers, but focusing on the individual, isolated functionality of various system components is not terribly productive in an AGI context. The main point is how the components work together.
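To make the contrast concrete, here is a deliberately simple sketch in Python. It is not the Novamente design, nor any real AI algorithm; everything in it (the Workspace class and the perceive/recall/act processes) is invented purely for illustration. The point is only that when specialized processes read and write a shared context, the system's behavior comes from their interaction, not from any one component's isolated output.

```python
# A purely illustrative toy -- NOT the Novamente design -- contrasting
# components used in isolation with components that interact through a
# shared workspace, so each one's output becomes context for the others.

from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Shared state that every specialized process reads from and writes to."""
    items: dict = field(default_factory=dict)

def perceive(ws: Workspace, observation: str) -> None:
    # Hypothetical specialized process #1: record the raw input.
    ws.items["percept"] = observation

def recall(ws: Workspace, memory: dict) -> None:
    # Hypothetical specialized process #2: attach whatever past experience
    # is associated with the current percept.
    ws.items["association"] = memory.get(ws.items.get("percept"), "unknown")

def act(ws: Workspace) -> str:
    # Hypothetical specialized process #3: choose behavior using BOTH the
    # percept and the association -- information no single component produced.
    if ws.items.get("association") == "unknown":
        return "explore"          # novel situation: gather more information
    return f"exploit {ws.items['association']}"

memory = {"red berry": "food"}
ws = Workspace()
for obs in ["red berry", "blue berry"]:
    perceive(ws, obs)
    recall(ws, memory)
    print(obs, "->", act(ws))     # prints "exploit food", then "explore"
```

Replace the toy processes with serious perception, memory and action-selection algorithms and the architectural moral is the same: the interesting behavior lives in the interplay, not in any single tool.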
I strongly suspect the interplay between specialization and generality in the human brain is subtler than is commonly recognized. The brain certainly has some kick-ass specialized tools, such as its face recognition algorithms. But these are not the essence of its intelligence. Some of the brain's weaker tools, such as its very sloppy algorithms for reasoning under uncertainty, are actually more critical to its general intelligence, as they have subtler and more thoroughgoing synergies with other tools that help give rise to important emergent structures/dynamics.
Now, the word "general" in the phrase "general intelligence" should not be overinterpreted. Truly and totally general intelligence—the ability to solve all conceptual problems, no matter how complex—is not possible in the real world.2 Mathematicians have proved that it could hypothetically be achieved by theoretical, infinitely powerful computers. But the techniques usable by these infinitely powerful hypothetical machines don't have much to do with real machines or real brains.
But even though totally general intelligence isn't pragmatically achievable, it's clear that humans display a kind of general intelligence that goes beyond what we see in chess programs, data analysis programs, or speech-to-text software. We are able to go into new situations, figure them out, and create new patterns of behavior based on what we've learned. A human can deal with situations of a radically different nature from anything existing at the time of their birth—but a narrow-AI program typically starts behaving stupidly, or failing altogether, when confronted with situations different from those envisioned by its programmer. We humans, dominated as we often are by our simian ancestry, nevertheless have a leg up on Deep Blue, Mathematica or Google in the fluidity and generality department. We understand, to a degree, who and what we are, and how we are related to our environment—and this understanding allows us to deal with novel contexts creatively, adaptively and inventively. This, I posit, comes out of the emergent structures and dynamics that arise in the complex systems that are our brains, due to the interactions of various specialized components within a framework that evolved to support precisely this sort of emergence.
My own quest to create powerful AGI has centered on the design and engineering of a particular software system, the Novamente Cognition Engine (NCE), which is described in the companion essay "The Novamente Approach to AGI." I believe Novamente is a viable approach with the capability to take us all the way to the end goal. However, if for some reason the Novamente project doesn't get there soon enough, I believe someone else is going to get there via some conceptually related approach, differing in the details. There are sure to be many different workable approaches to AGI ... just as now, 150 years after the experts said human flight was impossible, we humans take to the air in a variety of ways, including helicopters, propeller planes, jet planes, rockets and so forth.
One of the reasons AGI became so unfashionable within the AI field was precisely the existence of claims such as the one I just made in the previous paragraph. In the early 1970s, when I was first discovering science fiction, there were already AI researchers touting their particular algorithmic approaches and claiming that "AI is just around the corner." But just as with cars or airplanes or printing presses or any other technology, eventually the time for AI will come—and, with full knowledge of the history of the field, I predict it will come soon, so long as a reasonable degree of funding (from government, business or wherever) is directed toward AGI.
One of the messages I always try to get across regarding AGI is that, due to the convergence of a variety of sciences and technologies, the end goal is closer than most people think. Scientists working in artificial intelligence and cognitive science have made some serious, substantive strides. They have generated a lot of very important insights, and what remains to be done to create AI is to put all the pieces together, in an appropriate integrative architecture that combines specialized components to give rise to the necessary emergent structures and dynamics of mind. At this point, it's not a matter of if; it's a matter of when we achieve the goal—and of which of the multiple viable pathways gets there first.