Re: The Human Augmentation Strategy

From: Jack Richardson (jrichard@empire.net)
Date: Thu Jul 05 2001 - 19:47:32 MDT


Christian:

Thanks for your response to my message.

The suggestion I was trying to make (and not doing so very well) was that
super-intelligence may be easier to attain through human augmentation than
by starting from scratch with today's machines.

I indicated a couple of areas that could be a starting point for
augmentation, but it shouldn't be limited just to those. The idea is to move
individual humans beyond the limits placed on them by the weakness of their
memory and their brain's processing power.

Because our methods of augmentation today are quite limited, there is a
tendency to assume that our powerful computers are a better way to achieve
the goal.

However, I believe that the enormous amount of research being done to develop very
small devices that can be placed in the body to correct medical problems
will give us the technology to augment humans along the lines I'm
suggesting. I'm looking for the AARP to push the longevity aspects of this
technology and also to fight off the bioethicists.

From this perspective, AI research would be better focused on imagining how
these devices might work and how to establish a benevolent mode of
functioning by humans with the advanced powers. Research on the devices
could be conducted initially with animals in a humane way. I'll leave it up
to the researchers to test whether the devices are really doing the job. My
guess is that the devices will be billions of nanobots with wireless
communication among themselves and with the global network.

Augmented humans with a thousand times the memory and a thousand times the
processing power would be in the transhuman phase. As Eliezer believes,
super-intelligence would emerge shortly thereafter.

Regards,

Jack

----- Original Message -----
From: Christian L. <n95lundc@hotmail.com>
To: <sl4@sysopmind.com>
Sent: Wednesday, July 04, 2001 10:30 AM
Subject: Re: Friendly AI and Human Augmentation

>
>
> Jack Richardson wrote:
> >
> >At the same time, some of those involved in developing AI have the
> >optimistic view that the Singularity will arise in the relatively short
> >time of ten to twenty years. During that same period, the methods of
> >augmenting humans will develop rapidly and become much more sophisticated.
> >
>
> If by "augmenting", you mean things like retinal scanning glasses, smooth
> and flexible wearables with broadband internet connections, then I can
> agree with you. But if you mean more intrusive technology such as implants,
> then I am more skeptical. It might be technically feasible, but the moral
> panic from the "bioethicists" would make it impossible to augment humans in
> this fashion. Animals maybe, but not humans in such a short timeframe.
> Remember: cloning is "morally repugnant and against human dignity".
>
>
> >The optimistic view that the Singularity will arise out of AI development
> >on computer hardware assumes that the complexities of human intelligence
> >can be replicated on machine hardware without any insoluble problems
> >standing in the way. Historically, at least so far, this has not turned
> >out to be the case.
> >
>
> If that were the case historically, we would have AIs among us, no? If you
> say that the case is NOT that we don't have any insoluble problems,
> logically there must exist an insoluble problem in creating AI. Which
> problem is it? :-)
>
>
> >Since there are real risks in whatever route we take towards the
> >Singularity, once it begins to be perceived as a possibility by the
> >larger population,
> >
>
> I believe there is a good chance that it will never be perceived as a
> possibility by the larger population.
>
>
> >it is highly likely there will be a massive reaction with the kind of
> >protests we are seeing today towards the biotechnology companies.
> >
>
> Yes, and on a much larger scale: the biotech companies are not a threat to
> the national security of every country on earth, which incidentally
> superintelligent AI is. The "massive reaction" will probably not only come
> from militant luddites and anti-[favourite evil here] people, but also from
> powerful governments. Since the singularity community consists of only a
> handful of people, the result of a confrontation is clear.
>
>
> >Without the convincing demonstration of the reliability of friendly AI
> >controls,
> >
>
> This looks like the infamous "precautionary principle": If you cannot prove
> that it is harmless, ban it. This principle is much liked by the luddite
> community because it is logically impossible to prove that something is
> harmless, so you can ban just about anything with this principle.
> The moral of the story: you can never convince the world leaders that
> "friendly AI controls" is guaranteed to prevent your SI from converting
the
> earth to computronium. And even if you can convince them that we will have
a
> Sysop scenario: Why would they want this? Why would they give up their
power
> to a machine? In the subconscious mind, the Sysop is a big fat male
> competing for food and mates with the power to grab ALL food and ALL mates
> for itself. Who would want such competition?
>
>
> >it may be impossible to continue to conduct open AI research.
> >
>
> The AI research that has the stated goal of constructing superintelligent AI
> should not be open or, at the very least, not evangelized. At the moment,
> only a few people take this work seriously, but as time progresses, the need
> for secrecy might be more apparent.
>
> /Christian
>
>


