From: Philip Goetz (email@example.com)
Date: Sat Mar 04 2006 - 13:45:12 MST
On 3/4/06, Michael Roy Ames <firstname.lastname@example.org> wrote:
> --- Actually, I don't think that this is a mistake. We are attempting to
> define Friendliness as a thing-that-can-be-verified - or at least exploring
> the idea as a possible way of maintaining goal stability through recursive
> self improvement.
I see that you want AI 1 to generate AI 2, and for AI 1 to verify for
you, the lowly human, that AI 2 is still friendly. That would seem to
be a code verification problem. Having an add-on to verify that the
proposed actions are safe is, I suppose, considered unacceptable
because the actions could be unFriendly in a way too subtle for anyone
but AI 2 to detect. This is all probably covered in the long, long
articles on the SIAI site that I haven't read yet.
SIAI that I haven't read yet.
Yes, I think I should retract my last post in this thread.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT