agi motivations (was Re: AI debate at San Jose State U.)

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Sun Oct 23 2005 - 08:41:29 MDT


Richard Loosemore said:
>*if* we try to build a roughly humanoid AGI *but* we give it a mot/emot
>system of the right sort (basically, empathic towards other creatures), we
>will discover that its Friendliness will be far, far more guaranteeable
>than if we dismiss the humanoid design as bad and try to build some kind of
>"normative" AI system.

Do you mean formally provable (there is no such thing as *more* provable), or
only predictable with high confidence under conditions similar to those in
which it has been tested? I agree that a robot (humanoid or
not) with a human-like motivational system can be empirically demonstrated
to be friendly (and possibly a better, though still poor, approximation of
Friendliness than can be expected from an AGI designer) in a wide range of situations.
However, because no empirical data is even potentially available regarding
the retention of Friendliness in a post-singularity environment, formal
proofs are needed before entering such an environment. Such proofs require an
analytically tractable motivational system, and
human-like motivational systems are not analytically tractable. It is
possible that a non-Transhuman AI with a human-like motivational system
could be helpful in designing and implementing an analytically tractable
motivational system. A priori, there is no more reason to trust such an AI
than to trust a human, though there could easily be conditions that would
make it more or less worthy of such trust.

>we should at least discuss all the complexity and subtlety involved in
>humanoid motivational/emotional systems so we can decide if what I just
>said was reasonable.

I agree that this is worth discussing as part of singularity strategy if it
turns out to be easier to build a human-like AI with a human goal system than
to build a seed AI with a Friendly goal system. Eliezer's position, as far as
I understand it, is that a small team with limited resources such as SIAI can
more easily build a seed AI with a Friendly goal system within a decade or
two than a human-like AI, and that he is much more likely to complete a seed
AI substantially before anyone else in the world does than to complete a
human-like AI before anyone else does. In addition, completing a human-like
AI would not remove the need for a Friendly seed AI; it would still be
necessary to produce a Friendly seed AI before anyone created an unFriendly
one. Since it is probably easy to build an unFriendly seed AI if you have a
human-like AI, this is a critical problem.

>So should we not pursue the avenue I have suggested, if there is a
>possibility that we would arrive at the spectacular, but counterintuitive,
>conclusion that giving an AGI the right sort of motivational system would
>be the best possible guarantee of getting a Friendly system?

I think that the conclusion you are pointing to is not "spectacular but
counterintuitive". Rather, it is a spectacular conclusion that matches
almost everyone's intuitions but is rather easily refuted. We should
still pursue the avenue in question because, if neuromorphic engineering
advances rapidly, we may not have any better options, and because ultimately
this option is not terribly different from using human intelligence and
human goals to get FAI, which we are stuck with anyway.


