From: Brian Atkins (firstname.lastname@example.org)
Date: Tue Mar 20 2001 - 22:47:10 MST
A sandbox is indeed a good way to envision that particular scenario. The
sysop monitors things and simply prevents ("throws an exception on") certain
events (a bullet being fired out of the gun when you try to kill some poor
human), since it inhabits/controls the matter.
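The exception metaphor can be made literal in a toy sketch. This is purely illustrative of the analogy above, not a proposal; the names (`Sysop`, `SysopVeto`, the action strings) are invented for this sketch.

```python
class SysopVeto(Exception):
    """Raised when the sysop blocks a harmful action, i.e. "throws an exception"."""

class Sysop:
    # Toy list of actions the sysop refuses to let happen in the matter it controls.
    HARMFUL = {"fire_bullet_at_human"}

    def attempt(self, action):
        """Either permit the action or veto it by raising an exception."""
        if action in self.HARMFUL:
            raise SysopVeto(f"blocked: {action}")
        return f"allowed: {action}"

sysop = Sysop()
print(sysop.attempt("wave_hello"))        # permitted action goes through
try:
    sysop.attempt("fire_bullet_at_human")
except SysopVeto as veto:
    print(veto)                           # harmful action is intercepted
```

The point of the sandbox framing is exactly this control-flow shape: the harmful event never occurs, because the layer that owns the substrate intercepts it before it can.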
Another possibility would be to have a "guardian angel" style, rather than
a hard sandbox. This depends on certain feasibility issues (can the sysop
arbitrarily protect you from everything in the area of its control), but
might be more appealing to the average person. So someone can fire a bullet
at you, but you have a choice: it either bounces off or is otherwise made
ineffective at hurting you.
In either case the sysop must control the matter. If someone were allowed
both to create a competitor SI to the sysop, AND to let it gain control of
the matter, then it's all over for Sysop #1.
Is it possible to have other scenarios where the sysop does not infect
all the mass in the solar system, while still ending all evil? I think it
could be done through heavy surveillance, including both real and virtual
realities. But this would be more dangerous, IMO: if someone escapes the
surveillance and builds a competitor SI that then infects all the matter,
you've got problems.
Declan McCullagh wrote:
> Is it just me, or do other folks think of a Java applet when
> people talk about a Friendly superintelligence letting us play
> in a sandbox? :)
> On Mon, Mar 19, 2001 at 01:13:56PM -0800, Mitchell Porter wrote:
> > Eliezer says occasionally that the Sysop scenario
> > is just a *guess* as to what a Friendly super-AI
> > would choose to do. I think people would be more
> > likely to remember this if there was another
> > scenario on offer, and in fact my own default
> > picture of a Friendly Singularity is this other
> > one, of Universal Uplift.
> > The basic meaning is presumably clear, although
> > variations on it are possible (uplift sentients
> > of Earth; uplift non-sentients as well; uplift
> > all sentients ever within reach, whether alien
> > or Earth-evolved). The essential idea: if
> > everyone is a Friendly superintelligence, who
> > needs a Sysop? Only the non-sentient, the newly
> > sentient, and the not yet uplifted.
> > The obvious criticism of Universal Uplift is
> > a Borg-like imposition; which is why I think of
> > it working by enticement rather than imposition.
> > In effect, the uplifter leaves a trail of
> > crumbs for the upliftee to follow, and by the
> > time you reach the end of the trail, you're a
> > Friendly superintelligence.
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT