RE: Article: The coming superintelligence: who will be in control?

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Aug 02 2001 - 10:32:15 MDT


At 07:19 AM 8/2/2001 -0400, Ben Goertzel wrote:
>Personally, I don't share the confidence of some that the Singularity will
>necessarily be good for the human race. I think it has the potential to be
>great for us, and also the potential to exterminate us. I'm with Eli, in
>believing that we need to specifically work to make it good. I don't
>entirely agree with him on the specific AI-engineering mechanisms that will
>succeed in this regard, but this is a pretty minor quibble in the big
>picture (and perhaps he'll bring me around to his view once he's articulated
>it more clearly and fully).

I guess you could reduce my opinion (at least one of them) to this: I
very much agree that the Singularity could be very good, or very bad
(extinction of the human race). I also agree that we need to work to make
it good. Where I disagree, unfortunately, is in how much effect we will
actually have.

When I first read "Staring Into the Singularity" I started thinking about
how much more, well, just more/different, an SI would be than ourselves. As
has been discussed on this list, most people believe that a human can't
even talk with an SI through a binary (light on/off) connection without
being controlled by the SI. Given such vast intellect and
capabilities, and the freedom to fully alter its own code, I don't believe
there is anything we can program into an AI that will ensure friendliness
once it reaches SI status. We're just not anywhere near smart enough to do
that. I really wish I didn't believe this (it would make me happier), but
it is what extensive thought on the matter leads me to conclude.

Based on this belief, the best course may be to hold off on launching an AI
that could progress to an SI until we have the ability to enhance our own
intelligence significantly. Humans with much greater intelligence *may* be
able to alter/control an SI, but I believe that we, as we are now,
ultimately cannot. I suspect, however, that we will have Real AI and most
likely SI before that comes to pass; hence my belief that if SIs aren't
inherently friendly, we are probably doomed.

James Higgins


