RE: FAI (aka 'Reality Hacking') - A list of all my proposed guesses (aka 'hacks')

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Thu Jan 27 2005 - 22:38:26 MST


OK, so my previous post was actually my second-to-last
post ;) I'll just clarify something for Ben.

 --- Ben Goertzel <ben@goertzel.org> wrote:
> > MARC'S "GUESSES" ABOUT FAI AS AT JAN, 2005
> >
> > (1) What the Friendliness function actually does will
> > be shown to be equivalent, in terms of physics, to
> > moving the physical state of the universe closer to
> > the Omega point with optimum efficiency.
>
> Hmmmm...
>
> On the face of it, creating an AI that doesn't destroy
> humanity is a very different problem from moving the
> universe toward the Omega point.

Is it? Perhaps you missed the thrust of my
conjecture? I'm hypothesizing that despite the fact
that they *seem* to be very different, they may
actually be one and the same. That is, my guess is
that one implies the other.

>
> But what you seem to be saying is: The best approach
> to serving humanity is NOT to focus on preserving
> humanity in the immediate future, but rather to focus
> on bringing about the Omega point, at which point
> humans will live happily in Tipler-oid
> relativistic-surrealistic-physics-heaven along with
> all other beings...

No, not so! See what I said above.

>
> According to your approach, a (so-called) "Friendly
> AI" might well wipe out humanity if it figured this
> was the best route to ensuring humans come back to
> frolic at the Omega point...

No!

>
> Well, sure, I guess so. But I'm tempted to put this
> in the category of: "After the Singularity, who the
> f**k knows what will happen? Not us humans with our
> sorely limited brains!"
>
> I don't place that much faith in contemporary physics
> -- it would be mighty hard to get me to agree to
> annihilate the human race in order to help manifest
> the Omega point, which to me is a fairly speculative
> extrapolation of our current physics theories (which
> I stress are *theories* and have never actually been
> tested in a Tipler-oid Big Crunch scenario -- maybe
> when that scenario actually happens we'll be in for
> some big surprises...)!
>
> -- Ben G
>

Not what I meant at all! I am speculating that the
goal of driving the universe towards the Omega Point
with optimum efficiency would actually, as a
consequence, lead to everything we would desire in a
Friendly A.I. (like altruism towards humans, etc.).

Now I know this sounds really wacky and
counter-intuitive. On the face of it, you would think
that the fastest way for an AGI to get to the Omega
Point would be simply to turn us humans into mush and
recycle our mass-energy towards the goal of reaching
the Omega Point.

But my speculation is that this intuition is wrong and
that there are in fact really good reasons (which we
don't yet quite understand) why respecting human
volition and being nice to us would in fact be
necessary for the goal of reaching the Omega Point as
fast as possible.

Do you see what I'm saying?



