From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun Jan 28 2001 - 11:09:43 MST
> Ben Goertzel wrote:
> > So, suppose that Friendliness to humans is one of the goals of
> an AI system,
> > probabilistically weighted along with all the other goals.
> "One of" the goals? Why does an AI need anything else? Friendliness
> isn't just a goal that's tacked on as an afterthought; Friendliness is
> *the* supergoal - or rather, all the probabilistic supergoals are
> Friendship material - and everything else can be justified as a subgoal of
> Friendliness.
Creation of new knowledge, and discovery of new patterns in the world, are
goals that I believe are innate to humans in addition to our survival- and
reproduction-oriented goals. Should we not supply AI's with them too?
Webmind is being supplied with these goals, because they give it an
intrinsic incentive to grow.
> > Then, my guess is that as AI's become more
> > concerned with their own social networks
> > and their goals of creating knowledge and learning new things,
> the weight of
> > the Friendliness goal is going to
> > gradually drift down.
> Among the offspring and thus the net population weighting, or among the
> original AIs? If among the original AIs, how does the percentage of time
> spent influence the goal system? And why aren't the "goal of creating
> knowledge" and the "goal of learning new things" subgoals of Friendliness?
They just aren't subgoals of "friendliness to humans" ... or of
Friendliness under any definition of that term that seems natural to me ...
This archive was generated by hypermail 2.1.5 : Tue May 21 2013 - 04:00:19 MDT