Re: An essay I just wrote on the Singularity.

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Dec 31 2003 - 13:29:53 MST


On Wed, Dec 31, 2003 at 12:12:35PM -0800, Tommy McCabe wrote:
> Ummm... A transhuman AI would almost certainly have a
> well-implemented shaper network (see CFAI: Shaper/anchor
> semantics), hopefully with one of the shapers being to cause as
> little involuntary suffering as possible. Therefore, after
> seeing that the aliens want to destroy all of us, it would prevent
> them from destroying us while causing as little harm to the aliens
> as possible. A Friendly transhuman probably would act Friendly
> even toward a being with a goal of destroying it, but that doesn't
> mean that the being would be permitted to destroy it. Saying that,
> out of the space of minds-in-general, only human-like minds should
> be treated in a Friendly manner is ungrounded anthropocentrism.

That's a really good point, actually. Perhaps a bit more rarefied
than I want to get into in my essay, but I'll ponder it. Thanks.

-Robin

-- 
Me: http://www.digitalkingdom.org/~rlpowell/  ***   I'm a *male* Robin.
"Constant neocortex override is the only thing that stops us all
from running out and eating all the cookies."  -- Eliezer Yudkowsky
http://www.lojban.org/             ***              .i cimo'o prali .ui
