Re: friendly ai

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jan 29 2001 - 20:46:10 MST


>
> But then, if you accept that friendliness to humans is just one among many
> important goals, then you have to see that there's room for AIs to evolve
> in such a way that the importance of friendliness to humans gradually
> decreases for them, as they become more and more interested in non-human
> things.
>

Yep. Frankly, I believe that humans (as we know them) are not meant to
go on forever, or more precisely, simply cannot go on forever as a
species. Eventually this species will transform into and/or create
something different and far more intelligent that it cannot compete with,
or something of that kind will come along from outside. IMHO, humans are
at an impasse where we must transform if we are to have any continuance
at all. But as we transform, we will become increasingly something other
than human.

So, to me, Friendliness to humans is a difficult-to-define goal to start
with, and likely to be a short-term goal. Friendliness to all sentiences
would be a bit better. But I still don't see how such an abstract
uber-Commandment, wired into the very being of an SI, is actually workable
or dependable.

- samantha
