Some Scientific and Engineering Challenges for the Mid-term Future Of AI


By Edward Feigenbaum, Stanford University


When the terms "intelligence" or "intelligent" are used by scientists, they are referring to a large collection of human cognitive behaviors: people thinking. When life scientists speak of the intelligence of animals, they are asking us to call to mind a set of human behaviors that they are asserting the animals are (or are not) capable of. When computer scientists speak of artificial intelligence, machine intelligence, intelligent agents, or computational intelligence, we, too, are referring to that set of human behaviors.


When Turing proposed what we now call the "Turing Test" in 1950, he thought that a computer would pass his test for intelligence by 2000. But the set of behaviors called "intelligence" proved to be more multifaceted and complex than he or we imagined.


This talk proposes a set of grand challenges for AI based on modifications to the Turing Test. The challenges are aimed at scientific knowledge and reasoning (i.e., "Einstein in the box," as distinct from, for example, robotics). Successful performance on these challenges requires natural language reading and understanding abilities, and machine learning for knowledge acquisition. But the challenges proposed do not involve the full spectrum of commonsense reasoning abilities that the original Turing Test requires. And it may be possible to meet these challenges successfully in a mid-range future of 20-30 years, or even less if we focus and get busy.