Re: An essay I just wrote on the Singularity.

From: Tommy McCabe (rocketjet314@yahoo.com)
Date: Wed Dec 31 2003 - 13:12:35 MST


> > A Friendly AI doesn't have the supergoal of being nice to
> > humans; it has the supergoal of acting friendly toward other
> > sentients in general. A Friendly AI that is Friendly with
> > humans shouldn't try to blow the same humans to smithereens
> > the minute they upload.
>
> All that's required there is for the AI to still recognize
> them as human, which hardly seems a stretch for general
> intelligence. I wouldn't necessarily want an FAI to be
> friendly to any aliens that came along. Not *necessarily*; it
> might be the right idea, it might not, but I'd like the FAI to
> have the mental option of deciding, "Umm, these aliens are
> fundamentally unfriendly to humans, and I can't fix that
> without re-writing their brains, so I better defend humanity
> (and myself) from them".

Ummm... A transhuman AI would almost certainly have a
well-implemented shaper network (see CFAI: Shaper/anchor
semantics), hopefully with one of the shapers being the
minimization of involuntary suffering. After seeing that the
aliens wanted to destroy all of us, it would therefore prevent
them from destroying us while causing as little harm to the
aliens as possible. A Friendly transhuman would probably act
Friendly even toward a being with a goal of destroying it, but
that doesn't mean the being would be permitted to destroy it.
Saying that, out of the space of minds-in-general, only
human-like minds should be treated in a Friendly manner is
ungrounded anthropocentrism.
