Re: Static uploading is SL3 (was: the 69 of us)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Nov 18 2000 - 11:40:19 MST


Ben Goertzel wrote:
>
> > But I can't see a transhuman upload as vulnerable to insanity, even
> > leaving aside fine-grained control of simulated neurology or the
> > ability to self-modify. A transhuman should simply be able to think
> > faster than the forces that are responsible for insanity in biological
> > humans - see any problems a mile away.
>
> Frankly, I think this is an excessively Utopian point of view

If so, it's an excessively Utopian view about something that I have absolutely
no responsibility to do anything about, which is the best kind of excessively
Utopian view.

> First: I don't think that insanity has to do with thinking fast or
> thinking slowly -- if so, why do intelligent people go nuts just as
> often as stupid people?

My instinctive response is to say "They don't", after which we could try to
look up the statistics... but that really wouldn't settle anything.

> Of course, you could say that insanity is cured by fast thinking only
> once thinking gets beyond a certain critical threshold of speed, but
> this seems like "special pleading"...

Not so much speed, but proficiency. The ordinary speed of thought of the
average human would be enough to outthink insanity *if* we had sufficient
knowledge of how our own minds worked. We're starting to get into that, but
we're not really there yet. It seems more like a matter of augmenting the
specific cognitive abilities of self-awareness than general intelligence. I
shan't argue the point excessively, since a much stronger argument is - as
you say:

> Second: It's a better argument that a system with the ability to modify
> itself would be able to fix the types of problems we refer to as
> 'insanity' -- or ask others to fix them.
>
> This second argument makes some sense. It's not fast thinking, but
> rather the ability to modify one's 'hardware', that will eliminate a
> lot of what we call insanity from transhuman minds...
>
> However, even self-modification isn't necessarily a magic cure for
> craziness. Much insanity is motivational and emotional in nature --
> with cognitive consequences that follow from these underlying problems.
>
> I.e., why can't a transhuman get into a state where it's nuts and
> doesn't want to modify itself into "non-nuts-ness"?

The essential argument here is that a transhuman can *occupy* such a state,
but ve is unlikely to *reach* such a state. The transhuman would see it
coming; if ve didn't see it coming, there would be early warning signs,
flashing red lights on the exoself console, and so on. Before a transhuman
can reach such a state, ve would first need to desire such a state.
This desire can occur for one of three reasons: due to a new mental
problem that should have been spotted, corrected, and prevented (recurse on
argument); due to an old mental problem imported from humanity; or for reasons
that are as philosophically valid as any other. The first case should be
preventable; the third case constitutes a sovereign case of individual rights;
the second case probably constitutes a sovereign case of individual rights as
well, but note that it would probably require the individual to deliberately
ignore warning lights.

In any case, the original argument was not that SOME transhumans might go
insane by choice, but that ALL transhumans would go insane inevitably; the
latter statement is almost certainly false.

> In this case, should other transhumans intervene and fix its mental
> structures, so it's not nuts anymore?

I don't know. Perhaps the answer will take the form of "Sometimes" rather
than "Yes" or "No".

> This will presumably almost always be possible (although there may be
> tough mathematical problems in determining how to tweak someone's mind
> to leave them with their "self" but not their insane features) ... at
> least to some extent... but even so this gives rise to serious ethical
> issues to do with individual freedom.
>
> I do not believe that you guys have resolved this issue in a definitive
> way. It seems, rather, that you've brushed the issue aside due to
> optimism and confidence in the power of intelligence (a habit that I
> also possess, to be sure).
>
> In my view, it's wrong to consider it likely that transhuman minds will
> be driven nuts by the blurring of the mind-reality boundary. But it's
> also wrong to say that transhumans will be intrinsically sane due to
> their intelligence and self-modifying abilities.

I think that transhumans will have the power to be intrinsically sane. Choice
is another issue, but it's not *my* choice - I don't think - so I'll stick
with my optimism and confidence.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


