Re: A Sysop alternative

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 09 2001 - 09:45:17 MDT


James Higgins wrote:
>
> Ah, I may have an interesting compromise. I do agree that we mere mortals
> could use some protection, since we would be helpless against SIs. How
> about if the Sysop ONLY enforced happiness towards humans? Once uploaded,
> SIs would be free to do as they pleased, as long as they didn't interact
> with humans. When they did interact with humans, they would be forced to
> follow a set of Friendly rules. Humans would also be subject to these
> Friendly rules. Eliezer, what do you think about this?

This makes sense if, and only if, superintelligent entities are knowably
Friendly. If I recall correctly, that's been your premise from the
beginning.

Calling something "natural morals" really doesn't help the discussion.
Either something is desirable (to the speaker, or to humanity), or it's
not. Either something is physically possible, or it's not. Either
something is physical-plus-Sysop possible, or it's not. As far as I can
tell, Gordon's proposal consists of making unFriendliness physically
impossible. Is this ontologically possible? Who knows? In any case, I
fail to see how it *morally* differs from the Sysop scenario in any way
whatsoever, except insofar as you and Gordon are still running on
anthropomorphic instincts that distinguish between a Sysop that does X
and a set of physical laws that does exactly the same thing.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


