Ben vs. the AI academics...

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Oct 23 2004 - 20:05:09 MDT


Hmmm...

I just had a somewhat funny experience with the "traditional AI research
community"....

Moshe Looks and I gave a talk Friday at the AAAI Symposium on "Achieving
Human-Level Intelligence Through Integrated Systems and Research." Our talk
was an overview of Novamente; if you're curious, our conference paper is at

http://www.realai.net/AAAI04.pdf

Anyway, I began my talk by noting that, in my opinion, "Seeking human-level
intelligence is not necessarily the best approach to AI. We humans aren't
all that smart anyway, in the grand scheme of things; and it may be that the
best approach to superintelligence doesn't even pass through humanlike
intelligence, since human wetware is pretty different from computer
hardware." Wow, did that piss off the audience!! (an audience which, as I
later found out, consisted largely of advocates of the SOAR and ACT-R
cognitive modeling systems, which seek to model human cognition in detail,
not by modeling human brain function but by tuning various logic and search
algorithms to have properties similar to those of human cognition.) Moshe and I went
on to give a talk on Novamente, which was hard to do because we (like many
others who were accepted for the symposium but not part of the AAAI inner
circle) were allocated only 12 minutes plus 3 minutes for questions.... (Of
course, it's not hard to summarize Novamente at a certain level of
abstraction in 12 minutes, but it's pretty much impossible to be at all
*convincing* to skeptical AI "experts" in that time-frame.) So far as I
could tell, no one really understood much of what we were talking about --
because they were so irritated at me for belittling humanity, and because
the Novamente architecture is too different from "the usual" for these guys
to really understand it from such a compressed presentation.

After our talk, one of the more esteemed members of the audience irritably
asked me how I knew human intelligence wasn't the maximal possible
intelligence -- had I actually experienced superior intelligences myself? I
was tempted to refer him to Terence McKenna and his superintelligent
9-dimensional machine-elves, but instead I just referred to computation
theory and the obvious limitations of the human brain. Then he asked
whether our system actually did anything, and I mentioned the Biomind and
language-processing applications, which seemed to surprise him even though
we had just talked about them in our presentation.

Most of the talks on Friday and Saturday were fairly unambitious, though
some of them were technically interesting -- the only other person
presenting a real approach to human-level intelligence, besides Moshe and
me, was Pei Wang. Nearly all of the work presented took a logic-based
approach to AI. Then there were some folks who posited that
logic is a bad approach and AI researchers should focus entirely on
perception and action, and let cognition emerge directly from these. Then
someone proposed that if you get the right knowledge representation,
human-level AI is solved and you can use just about any algorithms for
learning and reasoning, etc. In general I didn't think the discussion ever
dug into the really deep and hard issues of achieving human-level AI, though
it came close a couple of times. For instance, there was a talk describing
work using robot vision and arm motion to ground linguistic concepts -- but
it never got beyond the trivial level of using supervised categorization to
ground particular words in sets of pictures, or triggering preprogrammed
arm-control schemata from the output of a language parser.
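
(To be concrete about what I mean by "trivial": the grounding amounted to
roughly the following toy sketch -- my own illustration in Python, not the
presenters' code, with made-up 2-D image features -- in which a word is
"grounded" as the centroid of the feature vectors of the pictures it was
paired with during training:

  from collections import defaultdict

  def centroid(vectors):
      # componentwise mean of a list of equal-length feature vectors
      n = len(vectors)
      return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

  def train(examples):
      # examples: (word, feature_vector) pairs from labeled pictures
      by_word = defaultdict(list)
      for word, vec in examples:
          by_word[word].append(vec)
      return {word: centroid(vecs) for word, vecs in by_word.items()}

  def ground(model, vec):
      # label a new picture with the word whose centroid is nearest
      def dist2(a, b):
          return sum((x - y) ** 2 for x, y in zip(a, b))
      return min(model, key=lambda w: dist2(model[w], vec))

  # hypothetical 2-D features (say, redness and roundness of the image):
  model = train([("ball", [0.9, 0.8]), ("ball", [0.7, 0.9]),
                 ("box", [0.2, 0.1]), ("box", [0.3, 0.2])])
  print(ground(model, [0.8, 0.85]))  # -> ball

A mapping like that is fine as far as it goes, but it's a very long way
from grounded language understanding.)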

There was a lot of talk about how hard it is for academics to get funding
for research aimed at human-level AI, and tomorrow morning's
session (which I plan to skip -- better to stay home and work on Novamente!)
will include some brainstorming on how to improve this situation gradually
over the next N years. It seemed that the only substantial funding source
for the work presented in the symposium was DARPA.

Then, Sat. night, there was a session in which the people from our symposium
got together with the people from the 5 other AAAI symposia being held in
the same hotel. One member from each symposium was supposed to get up and
give a talk. I was surprised that the material described by some of the
other symposium leaders (e.g. agent-based computing, cognitive robotics)
actually was a little more relevant to human-level AI than most of the
material in our human-level AI symposium. For some reason, nearly all of
the human-level-AI folks came from a GOFAI-ish perspective, whereas the
other symposia had a lot more diversity, with people focusing on neural
nets, evolutionary programming, and so forth as well as logic.

The talk given by the person who summarized our symposium for the larger
group was particularly amusing. He began by quoting me (not by name) about how
humans aren't that smart and we should be aiming higher. He then showed
some video clips illustrating how smart humans are and how dumb robots are.
For instance: a human expertly navigating through a crowd to get to an
attractive woman, versus a robot awkwardly crashing into a wall. Quite
funny and all. Hey, I was pleased to have made an impression!! The summary
of our symposium, in his view, was that human-level intelligence is
inestimably far away and no one has any idea of how to come remotely close
to achieving it. But nevertheless, he posited, we should promote journals
and conferences on human-level AI, and the creation of test suites for the
comparison of wannabe-human-level AI systems, so as to encourage progress.

Welll....

It's great, of course, that a small segment of the mainstream AI community
is willing to admit that the field of AI has wandered far from its roots,
and needs to get back to its original ambitions. It's unfortunate that
nearly all of these folks see no hope of the field of AI achieving
human-level intelligence anytime soon, however.... They have so little hope
that they're not really willing to entertain concrete hypotheses as to how
to achieve human-level or superior AI in the near term....

Anyway, in addition to catching up with Pei and Bill Hibbard, I made a
couple of useful new contacts at the conference -- and interestingly, both
were industry scientists rather than academics. For some reason, at this
symposium at any rate, the industry researchers showed a broader AI vision
than the academics did.

-- Ben


