Re: Article: The coming superintelligence: who will be in control?

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Aug 02 2001 - 12:45:16 MDT


At 02:11 PM 8/2/2001 -0400, you wrote:
>James Higgins wrote:
> >
> > When I first read "Staring Into the Singularity" I started thinking about
> > how much more, well, just more/different, an SI would be than ourselves. As
> > it has been discussed in this room, most people believe that a human can't
> > even talk with an SI through a binary (light on/off) connection without
> > being controlled by the SI. Given such vast intellect, capabilities, and
> > the freedom to fully alter its own code, I don't believe there is anything
> > we can program into an AI that will ensure friendliness when it gets to SI
> > status. We're just not anywhere near smart enough to do that. I really
> > wish I didn't believe this (it would make me happier), but this is what
> > extensive thought on the matter leads me to believe.
>
>But this argument generalizes well beyond AI. How then can there be
>anything that we could program into a human - much less something
>evolution accidentally programmed into humans - that would ensure
>Friendliness when a human gets to SI status? If an augmented human can
>solve (and want to solve) the second-order problems of altruistic
>superintelligence, then so can a transhuman first-order Friendly AI built
>along the CFAI architecture; that, at any rate, is the claim I make and
>the standard to which CFAI must be held.

I'm not suggesting that we take neurally enhanced humans up to the SI
stage. Quite the contrary, actually. My belief is that it requires
greater-than-human intelligence to ensure that a friendly AI would
successfully transition into a friendly SI. If you, Eli, were 100 times
more intelligent than you are now, you would be able to account for many
more possible problems than even 10,000 people working together probably
could.

This is not to say that we shouldn't try. We should; I just consider our
chances low at present. The closer the creators of the SI are to SI
status, the better they could understand the process involved and the
most desirable end result. Thus humans who are 10, 100, or even 1,000
times smarter than we are today would be very beneficial to the
Singularity cause. But, as stated previously, I don't think we'll get
such humans before we get an SI.

James Higgins


