Re: Building a friendly AI from a "just do what I tell you" AI

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Mon Nov 19 2007 - 16:58:50 MST


On Tue, Nov 20, 2007 at 10:20:05AM +1100, Stathis Papaioannou wrote:
> On 20/11/2007, Peter de Blanc <peter@spaceandgames.com> wrote:
> > On Mon, 2007-11-19 at 22:06 +1100, Stathis Papaioannou wrote:
> > > An AI need not think in any particular way nor have any
> > > particular goal. But if it is superintelligent, figuring out
> > > the subtleties of human language and what we call common sense
> > > should number amongst its capabilities. If not, then it
> > > wouldn't be able to manipulate people and would pose much less
> > > of a threat.
> >
> > Just because an AI can model your goals and thought patterns
> > does not mean that they are part of the AI's goal content.
>
> No, but insofar as you have any control over the goals of the AI,
> making it understand you should come before anything else on the
> list.

Well, high up the list, anyway. Let us know when you have a nice
mathematical proof that your AI will continue to "understand you"
even in the face of self-improvement.

To get some idea of how Insanely Hard this is, read
http://www.overcomingbias.com/2007/10/double-illusion.html

-Robin

-- 
Lojban Reason #17: http://en.wikipedia.org/wiki/Buffalo_buffalo
Proud Supporter of the Singularity Institute - http://intelligence.org/
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/

