Re: Fighting UFAI

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Jul 13 2005 - 23:44:28 MDT


On Thu, Jul 14, 2005 at 05:18:56PM +1200, Marc Geddes wrote:
> *sigh*
>
> How many times do I have to tell you all...
>
> there... is... no... threat... from... unfriendly... AI
>
> objective... morality... exists
>
> I swear I'll prove it if it's the last damn thing I ever do

Umm. It's not *relevant*.

The fact that a global maximum exists does *not* mean that any given
system will reach it. Local maxima can be *very* compelling.
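
To make this concrete, here's a toy sketch in Python (mine, not
anything from the thread; the landscape and numbers are invented): a
greedy hill climber stops at the first peak it reaches, even though a
strictly higher peak exists elsewhere.

    # Hypothetical fitness landscape: a small local peak at index 3
    # (value 3) and the global peak at index 10 (value 8).
    landscape = [0, 1, 2, 3, 2, 1, 0, 2, 4, 6, 8, 6]

    def hill_climb(i):
        # Greedily move to the better neighbor until no neighbor
        # improves on the current position.
        while True:
            neighbors = [j for j in (i - 1, i + 1)
                         if 0 <= j < len(landscape)]
            best = max(neighbors, key=lambda j: landscape[j])
            if landscape[best] <= landscape[i]:
                return i  # stuck: this is a (possibly local) peak
            i = best

    print(hill_climb(1))  # -> 3: settles on the local peak, value 3
    print(hill_climb(7))  # -> 10: reaches the global peak, value 8

A climber started at index 1 never sees the higher peak; nothing in
the process guarantees convergence to the global maximum.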

> I repeat my guess again:
>
> *Computational intractability is what will always stop a UFAI
> from endless self-improvement. I think any UFAI can only improve
> to a point before being *jammed* by intractability. So yes, I
> think that unfriendly AI is possible, but only of a kind that is
> limited in intelligence.
>
> Objective morality does not constrain the *content* of the goal
> system, but suppose it constrains the *structure*? (the process
> of acting upon the goal system). What if objective morality will
> always *jam* an unfriendly goal system by hitting it with
> computational intractability?
>
> So, an unfriendly AI cannot recursively self-improve past a
> certain point. Only a friendly AI can. That's my story and I'm
> sticking to it.

The history of life on Earth would seem to indicate that if such a
thing is true (and I find it *utterly* insane, fwiw), then the
objective morality is *extremely* unfriendly.

Unless you're positing that a boundary exists such that one cannot
be smarter than that boundary without also being nice, and that
boundary is *above* the level of intelligence that humans have
access to.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/

