From: Ben Goertzel (email@example.com)
Date: Mon Jun 05 2006 - 00:07:54 MDT
On 6/4/06, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
> Amara D. Angelica wrote:
> >>on the topic of global catastrophic risks of Artificial Intelligence,
> >>there is virtually no discussion in the existing technical literature.
> > What about de Garis' The Artilect War?
> > http://www.cs.usu.edu/~degaris/artilectwar2.html
> Not technical.
It is true that Hugo's book is not technical, but then, your two
recent essays are not really technical either, at least not according
to my understanding of the term...
> (Incidentally, I recently met de Garis at Ben Goertzel's AGI conference.
> De Garis had never encountered the concept of Friendly AI and was
> visibly shocked by it. We shall have to see what results from that.)
I talked to Hugo about FAI both before and after the workshop (he
stayed in Maryland for 2 weeks and we had plenty of time to talk).
From what he said to me, it is clear that his "shock" was basically
shock that any intelligent and relevantly-knowledgeable person would
think FAI would be possible. He considered it very obvious that once
one of our creations became 8 billion times smarter than us, any
mechanism we had put into it with a view toward controlling its
behavior would be completely irrelevant....
[Note: I'm not expressing agreement with him, just pointing out the
strong impression I got regarding his view.]
He appeared willing to be convinced that FAI is possible, but
according to my judgment, nothing you have written so far (including
your recent essays) would come remotely close to convincing him...
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT