Re: Fighting UFAI

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Thu Jul 14 2005 - 00:02:48 MDT


>Unless you're positing that a boundary exists such that one cannot be
>smarter than the boundary without also being nice, and that boundary
>is *above* the level of intelligence that humans have access to.
>
>-Robin

That’s exactly what I’m positing, Robin! (Re-read
what I said.)

Eli says that intelligence cannot constrain the goal
system. But he doesn’t seem to have considered the
converse possibility:

THE GOAL SYSTEM CAN CONSTRAIN THE INTELLIGENCE LEVEL!
 
I’m positing that an unfriendly AI cannot self-improve
past a certain point (i.e. its intelligence level will
be limited by its degree of unfriendliness). I posit
that only a Friendly AI can undergo unlimited
recursive self-improvement.

>You can have as much objective morality as you like, but if it's easy
>to ignore, then you might find it does you no good at all when the
>revolution comes... If UFAI doesn't have morality as a goal, where
>morality is defined as some kind of scriptural designation of good
>things and bad things, such as valuing human life, rather than being
>directly tied to the pleasure principle, then we're still screwed.
>
>Plato argued for an objective morality based on the best satisfaction
>of the pleasure principle, which he argued came about through
>self-discipline, moderation and fulfillment. But Socrates was still
>killed by pissed-off Athenians.
>
>Cheers,
>-T

*sigh* I’ve argued the case for objective morality on
transhumanist lists for years. No one listens. But
gradually my arguments have been growing stronger.
About 1-2 months ago my theory took a quantum leap.
Still nothing watertight, though. I’ve stopped
debating it because I can see that only a precise
mathematical theory, with all the t’s crossed and i’s
dotted, is going to convince this crowd. Ah well.

To cut a long theory short…

I think the goal system constrains the intelligence
level. An unfriendly AI cannot exceed a certain level
of smartness; only a Friendly AI can undergo unlimited
self-improvement. Past a certain level of smartness,
I’m hoping that an unfriendly goal system will always
be *jammed* by computational intractability,
instability, or both.

Cheers!

---
THE BRAIN is wider than the sky,  
  For, put them side by side,  
The one the other will include  
  With ease, and you beside. 
-Emily Dickinson
'The brain is wider than the sky'
http://www.bartleby.com/113/1126.html
---
Please visit my web-site:
Mathematics, Mind and Matter
http://www.riemannai.org/
---