Re: AI debate at San Jose State U.

From: Chris Capel (pdf23ds@gmail.com)
Date: Mon Oct 17 2005 - 12:32:00 MDT


On 10/17/05, Richard Loosemore <rpwl@lightlink.com> wrote:
> Chris Capel wrote:
> > To be clear, these are your comments and not a quote? You want to
> > discuss this with the list?
> >
> > On 10/16/05, Woody Long <ironanchorpress@earthlink.net> wrote:
> >
> >>Some points --
> >>
> >>1. "Humanoid intelligence requires humanoid interactions with the world" --
> >>MIT Cog Project website
> >
> >
> > Granted, but SL4 isn't really interested in humanoid intelligence. The
> > position of the SIAI and many on this list, if I may speak for them,
> > is that strictly humanoid intelligence would not likely be
> > Friendly--it would be terribly dangerous under recursive
> > self-modification, and likely lead to an existential catastrophe.
> > Friendly AI is probably not going to end up being anything close to
> > "humanoid".
>
> You do not speak for the entire SL4 list, unless or until I (at least)
> unsubscribe from it.

As I said.

> As far as I am concerned, the widespread (is it really widespread?) SL4
> assumption that "strictly humanoid intelligence would not likely be
> Friendly ...[etc.]"

It's a position held by the SIAI, though perhaps not an assumption;
and insofar as the views and research of the SIAI are among the main
topics of this list, and this list is owned by a founding member of
that organization, the position can be said to be widespread within
the context of this list. But it's most certainly not widespread among
AI researchers as a whole, with you as a good example. I think Ben
Goertzel also holds that the position goes too far, and Phil Goetz
doubts the very idea of an AI-driven singularity, IIRC (and I may
not). Both of these people are major contributors to this list, and
real AI researchers. (I'm not an AI researcher; I program bank
software.) So there's certainly a diversity of opinions.

One of the goals of the SIAI is to increase awareness of, and respect
for, the dangers of recursive self-improvement undertaken without
prior planning. Of course, part of this is convincing people that it's
a danger to be taken seriously. If the past is any evidence (and if I
understand it well enough), this effort hasn't met with much success
yet.

> is based on a puerile understanding of, and contempt
> for, the mechanics of human intelligence.

We've had some threads on this, sure, but the SIAI position has by no
means been established. I wonder what kind of discussion would be
necessary to settle this one way or the other. I wasn't around for the
discussions with Goertzel about this issue.

Chris Capel

--
"What is it like to be a bat? What is it like to bat a bee? What is it
like to be a bee being batted? What is it like to be a batted bee?"
-- The Mind's I (Hofstadter, Dennett)
