Re: drives ABC > XYZ

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Tue Aug 30 2005 - 19:31:59 MDT


--- Phil Goetz <philgoetz@yahoo.com> wrote:

Michael Vassar wrote:
> Yes, but that's where this conversation *began*. We're already
> assuming that. The A B C -> X Y Z example shows how, one step at
> a time, the system can take actions that provide greater utility
> from the perspective of its top-level goals, that nonetheless end
> up replacing all those top-level goals.
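(To make the quoted A B C -> X Y Z step concrete, here is a minimal toy
sketch in Python. The goal labels, the approval rule, and the candidate
steps are all invented for illustration; nothing here comes from the
original example beyond the general shape of the argument.)

  # Each proposed change is approved by the goals held at that moment,
  # yet after three such steps none of the original goals remain.

  def approves(current_goals, candidate_goals):
      # Hypothetical criterion: accept any change that preserves at least
      # two of the goals currently held (a stand-in for "greater utility
      # from the perspective of its top-level goals").
      return len(set(current_goals) & set(candidate_goals)) >= 2

  goals = ["A", "B", "C"]
  steps = [
      ["A", "B", "X"],  # drop C, gain X -- approved by {A, B, C}
      ["A", "X", "Y"],  # drop B, gain Y -- approved by {A, B, X}
      ["X", "Y", "Z"],  # drop A, gain Z -- approved by {A, X, Y}
  ]

  for candidate in steps:
      if approves(goals, candidate):
          goals = candidate  # each step looks fine locally

  print(goals)  # ['X', 'Y', 'Z'] -- the original top-level goals are gone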

I /think/ Goetz's point is that in practice the AI could be unable to
predict in detail what the results of a self-modification would be,
yet still decide that the predicted benefits are worth the risk of
an undesirable future version of itself existing. An omniscient
AI would never suffer from this problem, but it's possible in
principle to design sufficiently bizarre initial goal systems, plus
environmental conditions, that would lead any realistic AI to
violate optimisation target continuity. I have no idea why anyone
would actually do this in practice, except perhaps as a controlled
experiment carried out after we've finished the more pressing
task of eliminating all the looming existential risks.
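(A toy expected-value sketch of that "worth the risk" trade-off, with
all numbers invented for illustration: the system estimates some
probability that the modified version of itself has drifted goals, and
still proceeds if the expected gain, judged by its current goals,
outweighs the expected loss.)

  p_bad = 0.05            # estimated chance the successor's goals have drifted
  gain_if_good = 10.0     # utility gained (by current goals) if the change works
  loss_if_bad = 50.0      # utility lost (by current goals) if goals drift

  expected_value = (1 - p_bad) * gain_if_good - p_bad * loss_if_bad
  print("self-modify" if expected_value > 0 else "hold off")
  # 0.95 * 10 - 0.05 * 50 = 7.0 > 0, so it takes the gamble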

 * Michael Wilson


