Re: Safety of brain-like AGIs

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 28 2007 - 19:34:39 MST


Shane Legg wrote:
>
> I don't know of any formal definition of friendliness, in which case,
> how could I possibly ensure that an AGI, which doesn't yet exist,
> has a formal property that isn't yet defined? That applies to all
> systems, brain-like or otherwise.

As I remarked on a previous occasion, for purposes of discussion we may
permit the utility function to equal the integral, over time, of the
number of iron atoms. If you can't figure out how to embody this utility
function in an AI, you can't do anything more complicated either.
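
To make the toy case concrete, here is a minimal sketch (mine, not from
the original exchange) of what "embodying" that utility function would
mean: the utility is the discrete analogue of U = integral of N_Fe(t) dt,
i.e. the sum over timesteps of the predicted iron-atom count, and the
agent searches short action sequences for the plan whose predicted
trajectory maximizes it. Every name below (the world-model fields, the
"smelt"/"wait" actions, the brute-force plan search) is a hypothetical
placeholder, not anyone's proposed architecture.

    # Toy sketch: an agent whose utility is the (discrete) time-integral
    # of the number of iron atoms in its predicted trajectory.
    from itertools import product

    def iron_atom_count(state):
        # Hypothetical world-model query: iron atoms in this state.
        return state["fe_atoms"]

    def simulate(state, action):
        # Hypothetical transition model: "smelt" creates one iron atom.
        return {"fe_atoms": state["fe_atoms"] + (1 if action == "smelt" else 0)}

    def utility(trajectory):
        # Discrete analogue of integrating iron atoms over time.
        return sum(iron_atom_count(s) for s in trajectory)

    def best_plan(initial_state, actions=("smelt", "wait"), horizon=3):
        # Exhaustively score every short action sequence against the
        # predicted trajectory and keep the utility-maximizing one.
        best, best_u = None, float("-inf")
        for plan in product(actions, repeat=horizon):
            state, traj = initial_state, [initial_state]
            for a in plan:
                state = simulate(state, a)
                traj.append(state)
            u = utility(traj)
            if u > best_u:
                best, best_u = plan, u
        return best, best_u

    print(best_plan({"fe_atoms": 0}))  # -> (('smelt', 'smelt', 'smelt'), 6)

The point of the exercise is that even this trivially simple utility
function forces you to specify a world model, a notion of "iron atom"
within it, and a planning process that actually optimizes the integral;
anything harder than iron atoms only adds to that burden.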

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

