RE: SL5

From: Ben Goertzel (ben@intelligenesis.net)
Date: Tue Nov 21 2000 - 16:11:51 MST


Hi,

I knew that e-mail wasn't formulated clearly enough.

There is a deep & detailed argument underlying my statements, which has to do
with the nature of pattern recognition ... I don't have time to clarify right
now, but will definitely do so within the next couple of days...

ben

> On a deeper level, I disagree with the entire visualization this implies.
> Why can't we just say that transhumans can *deal* with it when their basic
> assumptions get challenged? That they don't have the *hardware* for
> running around in circles, biting their own tails; that they just deal
> with it, whatever it is, and move on. Then you can optimize whatever you
> like; if something goes wrong, you recognize it, de-optimize, and move on;
> easy as clearing a cache.
>
> As humans, we get extremely emotionally attached to our own ideas. This
> happens for several reasons, of course; the two major ones are (a) the
> political emotions, under which backing down from an idea is not only a
> truth/falsity thing but also affects your social status; and (b) we have a
> really lousy, hacked-up pleasure/pain architecture that causes us to
> flinch away from painful thoughts. "Insanity" *and* "inability to deal
> with change" are not emergent phenomena that will appear in all
> sufficiently complex minds. Calling a phenomenon "emergent" always sounds
> really enlightened, I know; but in this case, it's just not true - at
> least, as far as we know. Our vulnerabilities are traceable directly to
> specific design errors, and there is really no reason to think that these
> vulnerabilities would be present in minds in general.
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
