CFAI criticism Re: Article: The coming superintelligence: who will be in control?

From: Brian Atkins (brian@posthuman.com)
Date: Thu Aug 02 2001 - 12:03:56 MDT


James Higgins wrote:
>
> When I first read "Staring Into the Singularity" I started thinking about
> how much more, well just more/different, an SI would be than ourselves. As
> it has been discussed in this room, most people believe that a human can't
> even talk with an SI through a binary (light on/off) connection without
> having them be controlled by the SI. Given such vast intellect,
> capabilities and the freedom to fully alter its own code I don't believe
> there is anything we can program into an AI that will ensure friendliness
> when it gets to SI status. We're just not anywhere near smart enough to do
> that. I really wish I didn't believe this (it would make me happier), but
> this is what extensive thought on the matter leads me to believe.
>
> Based on this belief, the best course may be to hold off on launching an AI
> that could progress to an SI until we have the ability to enhance our
> intelligence significantly. Humans with much greater intelligence *may* be
> able to alter/control an SI, but I believe that ultimately we cannot. But I
> suspect that we will have Real AI and most likely SI before that comes to
> pass, thus my belief that if SIs aren't inherently friendly we are probably
> doomed.
>

One thing SIAI is trying to do is make something of a science out of
Friendliness. It may turn out to be impossible, but we're trying. Here we
have a large difference of opinion between us and James on the optimum
path to take, due more or less to this one issue of Friendliness. But so
far James seems to be going mostly on a "gut feel" that Friendly AI is
not doable with any large degree of certainty. Do you have any specific
criticisms of FAI, James, that we could try to discuss? I can tell from
your other posts that your main concern is apparently a combination of
"will it work long term" and "can we be 100% certain", right? It seems
like your concern is addressed in the CFAI FAQ:

http://www.intelligence.org/CFAI/info/indexfaq.html#q_2.10

I have a hard time seeing how a human-level Gandhi-ish AI will suddenly run
amok as it gets smarter, except due to some technical glitch (which is a
separate issue we can talk about if you want).

Also, can you address this quote from Q3.3 in the FAQ, since it relates
to your suggestion that the ideal path would be to wait:

"Nothing in this world is perfectly safe. The question is how to minimize
 risk. As best as we can figure it, trying really hard to develop Friendly
 AI is safer than any alternate strategy, including not trying to develop
 Friendly AI, or waiting to develop Friendly AI, or trying to develop some
 other technology first. That's why the Singularity Institute exists."

-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/
