RE: Dangers of human self-modification

From: Ben Goertzel (ben@goertzel.org)
Date: Mon May 24 2004 - 14:25:37 MDT


Fudley,

Put concisely, one of the main problems is: If you're modifying a human
brain to be twice as smart, how can you be sure your modification won't
have the side-effect of causing that human brain to feel like
irresponsibly creating dangerous seed AIs or gray-goo-producing
nanotech?

Human brain mods that don't increase intelligence dramatically are
relatively safe in existential terms, but human brain mods that do
increase intelligence dramatically are potentially dangerous by virtue
of the dangerous tech that smart humans may play with.

I'm not saying that smart humans will necessarily become evil or
careless -- in fact I think the opposite is closer to the truth -- but
it's clear that it will be hard to predict the ethical inclinations and
quality of judgment of intelligence-enhanced humans.

-- Ben

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> Of Eliezer Yudkowsky
> Sent: Monday, May 24, 2004 4:13 PM
> To: sl4@sl4.org
> Subject: Re: Dangers of human self-modification
>
>
> fudley wrote:
>
> > On Sun, 23 May 2004 "Eliezer Yudkowsky" <sentience@pobox.com> said:
> >
> >>I think it might literally take considerably more caution to tweak
> >>yourself than it would take to build a Friendly AI
> >
> > Why wouldn't your seed AI run into the same problem when it tries
> > to improve itself?
>
> Because I would have designed that mind to handle those problems, in
> exactly the way that natural selection did *not* design human beings
> to handle those problems. Self-modification prefers a mind designed
> to handle self-modification, just as swimming in the core of the sun
> prefers a body designed to swim in the core of the sun.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


