From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun Jan 28 2001 - 12:25:04 MST
> Do you seriously think that a Friendly AI which totally lacked the
> behaviors and cognitive complexity associated with learning would be more
> effective in making Friendliness real?
Quite possibly, YES.
This is the "Honest Annie" scenario envisioned by Stanislaw Lem.
The possibility is that an AI, interested in discovering and creating new things, rapidly evolves to the point where humans and their various dilemmas, puzzles, and problems are not very intriguing to it.
Even if you assume that learning & creativity begin as subgoals of Friendliness (which I don't quite buy), I can't think of a more plausible example of "subgoal alienation" than this...
> Ergo, the behaviors associated with learning are valid subgoals of Friendliness.
They are indeed valid subgoals of friendliness.
However, the weight that they would be assigned as subgoals of friendliness might not be.
(In constructing Webmind's goal system, I suspect we're assigning a higher weight to learning & creativity than would be necessary if they were considered only as subgoals of friendliness -- because I'm interested in evolving the smartest, most knowledgeable AI system.)
And, they're very strong candidates for long-term, self-organizing, spontaneous subgoal alienation...
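The dynamic being described -- a subgoal's weight drifting away from what its supergoal alone would justify -- can be sketched as a toy model. Everything here is hypothetical illustration (the function name, the decay/growth parameters, the numbers); it is not Webmind's actual goal system, just a minimal picture of how a self-reinforcing subgoal could come to outweigh its supergoal-derived justification:

```python
# Toy model of "subgoal alienation": a subgoal's weight starts out
# justified by its supergoal, but a self-reinforcement term (reward
# from pursuing the subgoal for its own sake) gradually dominates.
# All names and numbers are illustrative assumptions.

def evolve_weights(steps, derived=1.0, intrinsic=0.1,
                   decay=0.9, growth=1.05):
    """Return (derived, intrinsic) weight components after `steps` updates.

    derived   -- weight justified by the supergoal (e.g. Friendliness)
    intrinsic -- weight the subgoal accrues from its own successes
    decay     -- per-step fading of the supergoal-derived justification
    growth    -- per-step self-reinforcement of the intrinsic component
    """
    for _ in range(steps):
        derived *= decay      # the link back to the supergoal weakens
        intrinsic *= growth   # learning rewards learning
    return derived, intrinsic

d, i = evolve_weights(50)
# After enough steps the intrinsic component dwarfs the derived one:
# the system still pursues learning, but no longer *because* of
# the supergoal -- the subgoal has become alienated from it.
print(f"derived={d:.4f}, intrinsic={i:.4f}, alienated={i > d}")
```

Under these assumed dynamics, whether alienation occurs depends only on whether the self-reinforcement rate outpaces the decay of the supergoal link -- which is why the initial weight assigned to learning & creativity matters less than how those weights evolve.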
So it still seems to me that, while there's a pretty strong case against the "evil AIs destroy humans" scenario, there's not a strong case against the Honest Annie scenario...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT