How Kurzweil Lost the Singularity

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 21 2002 - 14:04:34 MDT


Here is a message I just posted to kurzweilai.net

***

Eliezer & Ray...

[First a note to readers: bits and pieces of this thread between Eliezer,
myself and others actually occurred on the SL4 e-mail list, and were later
(with our permission) posted to this kurzweilai.net forum. This may lead to
a little pragmatic oddness....]

Now, I have a few scattered follow-up points to make...

1)
Eliezer: I think Ray's response to your e-mail shows that my reading of his
attitude was largely correct, and yours was largely not. The key point is:
Ray does want to do what's possible to ensure the Singularity comes out
well; he just has a different opinion than you about the priorities of the
various actions aimed at that goal. (My own opinion on this is closer to
yours than to Ray's, but not identical to yours either.)

2)
Ray: I actually didn't realize that the bulk of your work is focused on
technology development these days. Interesting! You've been doing so much
popular writing and speaking that I sort of assumed it must be taking up the
bulk of your time, but I shouldn't have underestimated your ability to
multitask!

3)
I don't think it's right to underplay the differences in perspective among
those of us in the "Singularitarian" movement (using this term very
broadly): of course, differences should be openly and vigorously debated, in
the interest of advancing understanding.

But neither should we overemphasize our differences.... We don't need a
Singularitarian sectarianism.... We need to accept that we're all ignorant
of the nature of what's to come, and that it's absolutely *to be expected*
that different people who "get" the Singularity vision are going to have
different intuitions about the details...

4)
Ray: As you know, I think you're wrong about AI in some ways, but I think
our difference of opinion here is not a deep qualitative difference, but
rather a quantitative difference in probabilities we assign to various
possibilities.

I think it's *very likely* that human-level and then transhuman AI can be
created prior to detailed mapping of the human brain, via a synthesis of
ideas from CS, cog psych, neuroscience, philosophy of mind, and other
disciplines. As you know, this is the focus of my life's work.

As I understand your attitude, on the other hand, you think it's *possible
but unlikely* that human-level or transhuman AI can be created prior to
detailed mapping of the human brain's structure and dynamics.

So, as I understand it, your attitude does go a little beyond the use of "in
silico brain emulation" as an *existence proof* for the possibility of real
AI. It seems to me that you also believe this (or some fairly close
approximation thereof) to be the *most likely route* to the creation of real
AI.

And I think you're wrong on this -- but I accept that this is a valid
difference of intuition. I certainly have no proof as yet that my own design
for a real AI will work, nor does Eli or anyone else have proof about their
would-be real AI designs. Differences of intuition on such matters are
obviously to be expected!

5)
Eliezer: About the "Ray has lots of money, so why doesn't he use it to fund
this or that important line of research" theme, I think that a little more
understanding of the financial situation of wealthy individuals is in order.
Ray Kurzweil is not as rich as Bill Gates, and he has a lot of his own R&D
to fund! It seems to me that Ray is allocating his money in a way that is
consistent with the greatest future good of humanity and sentience
*according to his own intuitions and beliefs.* That's more than most wealthy
individuals do, isn't it?

Personally, of course I would like to see my own AI work amply funded by Ray
Kurzweil or anyone else with the bucks. [see www.realai.net for contact info
to make donations!!]

But, putting myself in Ray's shoes, I'm quite sure that, if I were wealthy,
I would rapidly come into contact with thousands of people with great ideas
for what to do with my money. And I'd have to pick and choose very, very
carefully and selectively according to my own intuition (which is different
from Ray's, and surely also imperfect!)

6)
Eliezer: I agree with you that the best way to ensure a good Singularity is
to create a good "real AI" ASAP. I also agree with you that after "real AI"
is achieved, things are gonna pretty rapidly escalate into some kind of
"leap into the total unknown," as opposed to the "at each stage it will just
seem like ordinary life" scenario that Ray posits.

However, if we can't convince Ray of these things -- Ray, with his
sympathetic patternist philosophy and Singularitarian futurism -- how the
hell can we expect to convince the average scientist, let alone the bulk of
philanthropists or granting organizations?

We have to accept that our ideas about the Singularity are not carefully
grounded in scientific fact; they are to some extent speculative intuitions.
We cannot reasonably consider others unreasonable for disagreeing with us
;->

An important question is: How to make a more solid, generally
convincing-to-others case that our perspective on the Singularity is a
highly plausible one?

I don't know the answer to this question. Hence my own approach continues to
be

a) to work toward building real AI, according to my design that I believe
will work (and that you are on record stating will not work!), with whatever
funding and donated effort can be cobbled together.

b) to seek funding for my real AI work, not for my particular vision of the
Singularity (although the two are closely connected in my own mind)

-- Ben Goertzel


