RE: Revising a Friendly AI

From: Ben Goertzel (ben@intelligenesis.net)
Date: Tue Dec 12 2000 - 19:58:01 MST


> > Choice of new goals is not a rational thing: rationality is a tool for
> > achieving goals, and dividing goals into subgoals, but not for
> > replacing one's goals with supergoals....
>
> *This* is the underlying fallacy of cultural relativism. Even if
> rationality does not suffice to *completely* specify supergoals, this does
> not mean that rationality plays *no* role in choosing supergoals.

Sure, I'll buy that clarification.

I do tend to be a bit of a cultural relativist, though, I must admit.

> Do you really believe that you can alter someone's level of intelligence
> without altering the set of supergoals they tend to come up with?

Sometimes.... Surely, I know some very intelligent people whose supergoals
are the same as those of much less intelligent people (Beer, T&A, ... ;)

> And overriding evolution's supergoals with a verbally transferred
> supergoal (as your schema would seem to have it?) is an evolutionary
> advantage because?
>

Because culture can adapt faster than biological evolution, I suppose.

> Instead of the AI suddenly waking up one morning and realizing that it can
> modify itself instead of waiting around for you to do it, there can be a
> smooth transition - a continuum - so that when the AI does "wake up one
> morning", it has an experiential base that guides its very fast
> decisions.

This is surely correct.

ben
