Re: AGI Prototyping Project

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sun Feb 20 2005 - 15:15:53 MST


> I may be misunderstanding him, but I take him to mean...
> that to be more useful than dangerous an AGI project must:
>
> 1) Be based on _some_ solid theory of Friendliness,
> 2) Have a grave respect for the potential dangers and a strong
> safety-first attitude.

Correct, where 'solid' means 'reasonably nice'. Mere baseline humans
picking moral rules that civilisation will be stuck with indefinitely
will almost certainly produce something suboptimal, but right now I'd
take an increased chance of avoiding existential disaster over the
highest chance of achieving perfection. However, CV isn't actually
much harder to implement than your four favourite fixed moral
principles (at least on my best guess at how it will be implemented;
Eliezer is still working on the details), and it looks like a much
safer bet.
 
> The last point is the more important one; I'm skeptical about
> whether CV is workable,

So am I; it's a good idea, and I think humanity is more likely than
not to renormalise to something nice, but I am deeply skeptical about
computational tractability and irreducible spread. That's why I would
insist on a series of fallback plans in case CV didn't work,
preferably publicly reviewed ahead of time.

 * Michael Wilson
