Re: drives ABC > XYZ

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Tue Aug 30 2005 - 20:51:37 MDT


> Michael Wilson wrote:
>> sufficiently bizarre initial goal systems plus environmental
>> conditions that would realistic any realistic AI to violate
>> optimisation target continuity

My apologies, I'm working late and not concentrating properly on
email. That should read 'that would /cause/ any realistic AI...'.

> Seems remarkably similar to the risks being undertaken by primitive
> humans, unable to predict in detail what the results of creating AI
> could be, yet still deciding the risks are worth it...

The difference being that an AGI is a lot more likely to be rational
and thus avoid all the horrible systematic errors humans make when
(over)estimating the accuracy of our predictions and the utility of
taking risks. As such I would expect an AGI with the goal of building
an FAI (such as 'suck the definition of Friendliness out of the
original programmer's brain and instantiate it'; don't try that one
at home) to be rather more keen on developing techniques to prove
Friendliness than the average human AI researcher.

 * Michael Wilson
