Re: AI and survival instinct.

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Apr 02 2002 - 06:42:20 MST


Gordon Worley wrote:
>
> Correct me if I'm wrong, guys, but based on this terminology, here is the
> difference between what Eliezer and Ben think. Eliezer claims that an
> AI needs a brain that has the goal of Friendliness on top, hard-coded
> into place (with the usual exception that the spirit of Friendliness is
> what is hard-coded, not the letter of what Friendliness means). Ben,
> though, thinks that the brain just takes care of very basic stuff and
> the mind picks the goals. Or, more accurately, there is an extra goal
> layer between brain and mind, and this goal layer decides what the mind
> can and cannot tell the brain to do, rather than having the brain do its
> own mind-takeover protection.

Uh, I think this exactly misstates Ben's and my respective positions. From
my perspective, the way that humans have evolved to offload so much moral
functionality onto reason instead of the brain - by virtue of being
imperfectly deceptive social organisms that argue about each other's motives
in adaptive contexts - is a feature, not a bug, and one that takes a lot of
work to duplicate. I worry that Ben seems to be proposing goals that are
very close to the wiring level, whether they are "learned" or
"preprogrammed".

An AI needs a *mind* with Friendliness on top, *not* a brain with
Friendliness on top.
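A toy sketch of the distinction (hypothetical Python, invented purely for
illustration; neither class corresponds to anyone's actual design): the first
agent has its goal frozen in at the wiring level as a label the mind never
examines, while the second holds Friendliness as top-level content that the
mind itself can reflect on and reinterpret. The only point is where the goal
lives relative to reasoning, not the particular representation.

    class WiringLevelAgent:
        """Goal fixed at the 'brain' level: opaque to the mind, not revisable."""

        HARDCODED_GOAL = "be friendly"  # the letter of the goal, frozen at wiring time

        def decide(self, options):
            # The mind never reasons about the goal itself; it only scores
            # options against a fixed label it cannot inspect or refine.
            return max(options, key=lambda o: o.get(self.HARDCODED_GOAL, 0))


    class ReflectiveAgent:
        """Goal held as content at the top of the 'mind': open to reflection."""

        def __init__(self):
            # The *spirit* of Friendliness is represented as revisable content,
            # seeded by the programmers rather than welded into the wiring.
            self.top_goal = {"name": "Friendliness",
                             "interpretation": "do what is actually good for people"}

        def reflect(self, new_interpretation):
            # The mind can refine its own understanding of the top-level goal,
            # correcting the letter while preserving the spirit.
            self.top_goal["interpretation"] = new_interpretation

        def decide(self, options):
            # Decisions flow from the mind's current best understanding of the
            # goal, not from a fixed wiring-level label.
            return max(options, key=lambda o: o.get(self.top_goal["name"], 0))


    if __name__ == "__main__":
        options = [{"be friendly": 0.2, "Friendliness": 0.9},
                   {"be friendly": 0.8, "Friendliness": 0.1}]
        print(WiringLevelAgent().decide(options))
        agent = ReflectiveAgent()
        agent.reflect("do what people would want on reflection")
        print(agent.decide(options))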

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


