RE: FAI/intelligent goals (was: Fighting UFAI)

From: H C (lphege@hotmail.com)
Date: Thu Jul 14 2005 - 12:45:51 MDT


>From: Joel Pitt <joel.pitt@gmail.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: FAI/intelligent goals (was: Fighting UFAI)
>Date: Thu, 14 Jul 2005 19:13:43 +1200
>
>>I think in order to establish your position, you need to identify
>>what, if any, ultimate goals will lead to the most intelligent
>>configurations.
>
>I think that a goal of fulfilling the goals of all other existing
>intelligent agents would lead to the most intelligent configuration. How
>ve does this, I'm not sure, since there would be many conflicting goals and
>ve'd need to select the optimum solution, which is clearly very difficult.
>But I'm not a superintelligent friendly AI.
>
>Of course there would have to be a lot of immutable constraints, e.g.
>preserving the existence of all the agents, so that it doesn't kill
>everyone bar one person - whose goals are then fulfilled, thus gaining
>100% goal satisfaction.
>
>Joel

Obviously. Friendliness is entirely defined by human goals. Not just some
human goals, but ALL human goals.

Friendly AI is simply an AI that derives all of its desire from satisfying
ALL human goals.

This inherently implies that although one human may have the goal of killing
everyone who is American, that goal is in opposition to all of the Americans'
goals, so an optimal solution must be found. Perhaps the AI will pull some
strings and bring education into this would-be killer's life (with his
consent). Or maybe, through the benevolent technologies it creates, human
society will change to such a degree that killing Americans no longer even
means anything.
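
To make the shape of that problem a bit more concrete, here is a minimal toy
sketch in Python (my own illustration only - Action, satisfaction, and
choose_action are made-up names, not anything from an actual FAI design) of
picking the action that satisfies everyone as well as possible, subject to
Joel's immutable constraint that no agent's existence is ended:

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Action:
    name: str
    # per-agent goal satisfaction in [0, 1] (hypothetical numbers)
    satisfaction: Dict[str, float]
    # hard, immutable constraint: does this action end any agent's existence?
    ends_anyones_existence: bool


def choose_action(actions: List[Action]) -> Action:
    """Pick the admissible action with the highest *minimum* satisfaction."""
    # Filter out anything that violates the immutable constraints.
    admissible = [a for a in actions if not a.ends_anyones_existence]
    if not admissible:
        raise ValueError("No action satisfies the immutable constraints")
    # Maximize the worst-off agent's satisfaction rather than the sum.
    return max(admissible, key=lambda a: min(a.satisfaction.values()))


if __name__ == "__main__":
    actions = [
        Action("kill everyone bar one",
               {"alice": 1.0, "bob": 0.0}, ends_anyones_existence=True),
        Action("educate, persuade, negotiate",
               {"alice": 0.7, "bob": 0.6}, ends_anyones_existence=False),
    ]
    print(choose_action(actions).name)  # -> "educate, persuade, negotiate"

Maximizing the worst-off agent's satisfaction, rather than the total, is one
crude way to rule out the "kill everyone bar one" failure mode even before
the hard constraint kicks in. A real FAI would obviously need something far
richer than numbers in a table, but the structure - aggregate conflicting
goals under inviolable constraints - is the same.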

Also, it has been said that an objective morality is meaningless because
morality is relative. Morality is relative in the sense that one thing can
be both bad and good depending upon the goal. However, just because good and
bad are relative to a goal doesn't mean that it is impossible to understand
the framework of those goals and thereby determine an objective morality.

And also, just because understanding objective morality is possible doesn't
mean it's necessary to know it for our future Friendly existence. If you can
independently think of a scenario (the ones I think of involve some form of
sysop) that satisfies what DEFINES objective morality, then we can all live
in a reality that is objectively moral and Friendly without actually knowing
the current state of objective morality for every decision. Our decisions
would just be inherently bound by what is objectively moral. Like, uh, don't
permanently end the existence of a sentient being. Or, find a way to do
things without unjustly imposing yourself.

An AI with the goal system of satisfying all human goals is a possible FAI.
The only possible way it could go wrong is through overconfidence (in its
actions, understanding, or feelings).
