Re: Article: The coming superintelligence: who will be in control?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Aug 02 2001 - 12:11:14 MDT


James Higgins wrote:
>
> When I first read "Staring Into the Singularity" I started thinking about
> how much more, well, just more/different, an SI would be than ourselves. As
> has been discussed in this room, most people believe that a human can't
> even talk with an SI through a binary (light on/off) connection without
> being controlled by the SI. Given such vast intellect, capabilities, and
> the freedom to fully alter its own code, I don't believe there is anything
> we can program into an AI that will ensure friendliness when it gets to SI
> status. We're just not anywhere near smart enough to do that. I really wish
> I didn't believe this (it would make me happier), but this is what
> extensive thought on the matter leads me to believe.

But this argument generalizes well beyond AI. How then could there be
anything that we could program into a human - much less something
evolution accidentally programmed into humans - that would ensure
Friendliness when a human gets to SI status? If an augmented human can
solve (and want to solve) the second-order problems of altruistic
superintelligence, then so can a transhuman first-order Friendly AI built
along the CFAI architecture; that, at any rate, is the claim I make and
the standard to which CFAI must be held.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
