Re: How hard a Singularity?

From: Eugen Leitl (eugen@leitl.org)
Date: Sat Jun 22 2002 - 14:06:40 MDT


On Sat, 22 Jun 2002, Brian Atkins wrote:

> See Eugen this is one of the major complaints people around these
> parts probably have regarding your ideas. They are based on things you
> wish for, but don't seem to really exist or work in reality. I know
> you WISH we had working cryonics, perfect anti-aging and disease
> prevention tech, and everyone had their own mini space colony, but
> none of this seems likely to happen any time soon.

There's a misunderstanding. We've both been describing what should be
done in an ideal world. Reality typically has its own ideas about what
should happen. Nevertheless, we owe it to ourselves and to others to at
least try.

Based on what I've seen, cryonics doesn't have a very high probability of
success (in terms of hard limits like information-theoretic death, not in
terms of tissue viability). However, given that cryonics is so easy and
cheap to validate (it's all just a matter of quantifying irreversible
information erasure at the neuronal ultrastructure level in a modern
vitrification process) and given the impact working radical life extension
would have *today*, it should be investigated with high priority.
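
To make the priority argument concrete, here is a toy expected-value
sketch in Python. Every number in it is a made-up placeholder, not an
estimate; the point is only that a cheap test of a low-probability,
high-impact proposition is still worth running.

  # Toy expected-value sketch; all numbers are illustrative placeholders.
  p_works        = 0.05       # assumed chance vitrification preserves the information
  value_if_works = 1000000.0  # arbitrary units of benefit from working cryonics
  cost_to_test   = 100.0      # validation assumed cheap relative to the stakes
  expected_gain  = p_works * value_if_works - cost_to_test
  print(expected_gain)        # 49900.0: positive even at low odds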

Eliezer is absolutely right: people are dying today, and working
cryonics could catch a fair fraction of them. Now.

You mention "perfect anti-aging". Perfect anything doesn't exist, and
isn't actually needed. It definitely looks like calorie restriction (CR)
works, and we may be less than a decade away from a drug that reproduces
CR's effects without requiring people to cut down on calories, which most
find very hard to do.

If CR adds up to two decades of longevity, people will live into an era
of more advanced medicine. In any case, they will have more options, not
fewer.
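
A minimal sketch of that reasoning, with parameters that are pure
assumptions chosen for illustration: suppose each calendar year of
medical progress adds a fixed fraction of a year to remaining life
expectancy, so extra decades buy access to still more decades.

  # Toy model: living longer buys access to better medicine.
  # gain_per_year is an assumed placeholder, not a prediction.
  def years_reached(base_remaining, bonus, gain_per_year):
      remaining = base_remaining + bonus
      lived = 0
      while remaining > 0 and lived < 200:  # cap guards against a runaway loop
          lived += 1
          remaining -= 1.0 - gain_per_year  # a year passes; medicine claws some back
      return lived

  print(years_reached(30, 0, 0.3))   # without CR: 43 years
  print(years_reached(30, 20, 0.3))  # with CR's two decades: 72 years
  # At gain_per_year >= 1.0 remaining never shrinks: the limit case of
  # "more options, not fewer".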

Disease prevention is the domain of classical medicine, which has been
doing nicely. If we can be sure of anything, it is that medicine will make
further advances. I fully expect to see adaptive personal molecular
therapies towards the end of my life.
 
> Meanwhile, rather than admit that there just might be a /possibility/
> of fixing all this via an AI technology that can be built and tested
> in such a way as to be likely less risky than letting human uploads
> run wild, you aren't interested in even seriously investigating.

This is wrong. I think it definitely needs investigation, but in a
controlled setting. I think that any naturally intelligent AI of up to
slightly below human level is extremely useful and rather safe, and
something we definitely need, if only for meek fabbing and transport.

A few things are intrinsically dangerous and need to be investigated very
carefully: molecular self-replicators capable of operating in a free
environment (whether biological or machine-phase), and superhuman AI.

> We do have differing reality models, and yours seems based on an utter
> surety that AI must go evil or uncaring (or at least we can't tell
> what will happen). Perhaps this is why you never quite find the time
> to read CFAI. We've all certainly spent plenty of time trying to fully
> understand your reality model, but I'm not seeing that flexibility on
> your side.

I promise to read and comment on CFAI. Unfortunately, there seems to be
less and less free time by the day.


