Re: META: Dangers of Superintelligence

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Sun Aug 29 2004 - 22:29:45 MDT


On Aug 29, 2004, at 7:03 PM, fatherjohn@club-corsica.com wrote:
> It seems to fatherjohn that many, if not most, endeavors have a point
> of diminishing returns. By the same reasoning, I am reasonably
> confident that an SAI could not convince me to let it out, even if it
> were a million times smarter than me.

Your reasoning is poor.

In essence, you are assuming that you can force the entire universe to
limit itself to game mechanics of your choosing whenever that is
convenient for your theory. As a minor cog in the universe, you cannot.
In the game you want to play, there are no rules other than those
intrinsic to the universe itself. You'll have to get used to that idea
-- there is no "fatherjohn's rules" version of the universe.

As for AI jails, take note of some wisdom from the world of crypto: the
fastest way to break strong encryption is with a rubber hose. The
attacker does not beat the math; he works on the people holding the
keys, and a jailed SAI can do the same to its keepers. Viewing anything
in this universe as an isolated system is a dangerous and naive
perspective.

j. andrew rogers


