Safety of brain-like AGIs

From: Shane Legg (shane@vetta.org)
Date: Wed Feb 28 2007 - 05:08:16 MST


I don't know of any formal definition of friendliness, so how could I possibly ensure that an AGI, which doesn't yet exist, has a formal property that isn't yet defined? That applies to all systems, brain-like or otherwise.

If we consider informal definitions, then clearly some humans are both friendly and intelligent. Thus, at an informal level, I don't see any reason why a brain-like system cannot be both.

If the roles of the basal ganglia and amygdala were properly understood, it might be possible to construct a brain-like AGI that is far more consistently friendly than any human could ever be. Whether such a system is friendly in some strong formal sense could only be established once the system was understood and a formal definition actually existed.

Perhaps a very intelligent and friendly system, in the informal sense, is just what we need to help us come up with a formal definition of friendliness for superintelligences?

Shane

On 2/27/07, Adam Safron <asafron@gmail.com> wrote:
>
> Wouldn't it be a horrible idea to reverse engineer the human brain as a
> template for your AI? If you have an AI that's human-like, I fail to see
> how you could ensure the "friendliness" of subsequent iterations of the
> progressively developing intelligence. Am I missing something?
> Thanks.
> -adam


