Re: Augmenting humans is a better way

From: Brian Atkins (brian@posthuman.com)
Date: Sat Jul 28 2001 - 14:01:13 MDT


Thanks for the update. When I said "scary" I meant it more from the
standpoint of his view of how things might play out. Nobody can say exactly
how it will go, but I feel he is taking the easy (or ignorant) way out by not
also worrying about a potential hard (or semi-hard :-) takeoff. As for his
scientific efforts, what I meant by scary was more of a joking reference to
his poor design/theory (as you noticed).

But I agree, better him than a know-nothing (or worse).

Ben Goertzel wrote:
>
> > I'd much prefer us to get
> > the first real self-enhancing AI up and running rather than someone like
> > a Hugo de Garis who just scares me (from a scientific point of view too).
>
> Just a side comment here...
>
> I just spent a couple days with Hugo last month (in the emptied-out Starlab
> building -- a beautiful building by the way, see
> http://www.starlab.org/contactus/findus/), and I can assure you that while
> he's got some really interesting technical work going, he's *nowhere near* a
> workable path to a real AI.
>
> He thinks it'll be 50 years or so before we get a real AI. What he has now
> is a superpowerful hardware system for evolving neural nets by genetic
> programming. It has some very cool aspects, such as a genotype/phenotype
> distinction: the genotype gives the initial positions of the neurons, and
> there's an epigenesis phase in which synapses grow, yielding the phenotype,
> the actual neural net. A weakness is that the fitness of an NN must be
> assessed against a given list of fitness cases; it can't be computed by a
> function that isn't easily encapsulated in a small table of cases (e.g. by
> inference relative to a database of experience, as is the case for most
> procedure evolution/learning in the mind).
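>
> To make that concrete, here's a rough sketch of the evolutionary loop in
> Python. This is my own illustration of the general scheme, not Hugo's
> actual code; the names, sizes, and the distance-based wiring rule are all
> made up:
>
>     import random
>
>     GRID = 16       # neurons live on a GRID x GRID sheet
>     N_NEURONS = 12
>     RADIUS = 4      # neurons this close get wired during epigenesis
>
>     # Genotype: just the initial positions of the neurons.
>     def random_genotype():
>         return [(random.randrange(GRID), random.randrange(GRID))
>                 for _ in range(N_NEURONS)]
>
>     # Epigenesis: grow synapses between nearby neurons, producing the
>     # phenotype (the actual net). Weights fall off with distance,
>     # purely for illustration.
>     def grow_phenotype(genotype):
>         synapses = {}
>         for i, (xi, yi) in enumerate(genotype):
>             for j, (xj, yj) in enumerate(genotype):
>                 d = abs(xi - xj) + abs(yi - yj)
>                 if i != j and d <= RADIUS:
>                     synapses[(i, j)] = 1.0 / (1 + d)
>         return synapses
>
>     # Run the net: one step of activation spreading; the last neuron
>     # is read as the output.
>     def run(phenotype, inputs):
>         act = list(inputs) + [0.0] * (N_NEURONS - len(inputs))
>         out = [0.0] * N_NEURONS
>         for (i, j), w in phenotype.items():
>             out[j] += w * act[i]
>         return out[-1]
>
>     # The weakness: fitness is a score over a small fixed table of
>     # (input, target) cases, not over an open-ended base of experience.
>     FITNESS_CASES = [([1.0, 0.0], 0.5), ([0.0, 1.0], 0.2),
>                      ([1.0, 1.0], 0.7)]
>
>     def fitness(phenotype):
>         return -sum((run(phenotype, x) - t) ** 2
>                     for x, t in FITNESS_CASES)
>
>     def mutate(genotype):
>         g = list(genotype)
>         g[random.randrange(len(g))] = (random.randrange(GRID),
>                                        random.randrange(GRID))
>         return g
>
>     # Evolution operates on genotypes; phenotypes exist only to be scored.
>     pop = [random_genotype() for _ in range(20)]
>     for gen in range(50):
>         pop.sort(key=lambda g: fitness(grow_phenotype(g)), reverse=True)
>         pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
>     print("best fitness:", fitness(grow_phenotype(pop[0])))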
>
> As for his philosophical views, he believes that Friendly AI is possible,
> but that even if AI's are friendly, people are not, and they won't accept
> AI's, so there will be some kind of violent struggle between pro-AI people
> and anti-AI people. He believes that self-modifying AI can be a path to
> superhuman intelligence, but he believes this path will take several hundred
> years, and that during this time there is a decent chance that the AI's and
> all humans will be wiped out by stupid paranoid human violence.
>
> On a personal level, while Hugo is definitely a very eccentric individual in
> some ways, he was very friendly and took a lot of time out to talk to me in
> spite of being in the midst of a huge crisis situation (Starlab just went
> broke, he was out of a job and had no idea what he was doing next -- hmm, a
> very familiar situation to me actually ;p ). He offered me his spare room
> during my visit to Brussels for the Global Brain Workshop
> (http://pespmc1.vub.ac.be/Conf/GB-0.html), a gathering at which many
> Singularity-ish topics were discussed (although the "Singularity" phrase got
> little respect), even though we'd never met each other in person before,
> only discussed things through e-mail.
>
> Frankly, although I think it's unlikely, I would *much* rather see the first
> real AI created by Hugo, who is basically a sweet guy who has thought deeply
> about the philosophical ramifications of AI, than by oh, say, a US military
> AI lab.... I don't think that mild-mannered eccentric scientists are our
> greatest worry by any means. Fortunately, at this point, the military and
> other powerful entities whose ethics I question apparently have no interest
> in building real AI, because the academic establishment has convinced them
> it's still a very, very long way off.
>
> -- Ben G

-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

