RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jun 25 2002 - 11:02:03 MDT


Eli wrote:
> Ben, Nick Bostrom has already written a formal analysis of this one,
> coming to the same conclusion; humanity's future is larger than its
> present, so *IF* there is a conflict, the moral thing is to increase the
> probability of a (safe) Singularity even at the expense of
> time-to-Singularity. *BUT* the only consideration I know of in which
> spending more time could conceivably buy you anything at all in the way
> of safety is the Friendly AI part of the AI problem

More time could also buy us one other thing: time to reflect on the
Singularity more thoroughly -- collectively, as a species -- and thus
perhaps to discover other ways in which a delay could be helpful that are
not apparent to us now ;)

I mostly like Nick Bostrom's thinking on the Singularity, and I mostly like
yours as well, but I am not confident that even folks as smart as you, Nick,
me, and the other folks on this list have fully understood the various
aspects of "what humanity could do to make the Singularity work out better."

Personally I am not NOW advocating trying to slow down the Singularity. But
as things get closer, I'll certainly be watchful for evidence that this
makes sense. (Although I recognize that even if I should, in the future,
decide the Singularity should be slowed down, I may well lack the power to
do anything about it.)

I don't want to die, I don't want my children or any other living beings to
die (unless, like my wife, they wish to die when their biologically allotted
time is up), and I don't want to be stuck in this limited human form with
this limited human intelligence forever. I want the Singularity.

But I also don't trust YOUR, or MY, or anyone else's theory of "how to make
AIs friendly" or "how to make the Singularity come out well." It worries
me that you are so confident in your own theory of how to make the
Singularity come out well, when in fact you, like all the rest of us, are
confronting an unknown domain of experience, in which all our thoughts and
ideas may prove irrelevant and overly narrow.

It would be nice to approach the Singularity with a far better understanding
of how to make Singularities come out nicely.

One thing to hope for is that, once we create slightly-more-than-human-level
AGIs, these AGIs will help us arrive at a better understanding of these
issues -- and help us sculpt the Singularity in ways that make it beneficial
for us and for them, as well as for their successors.

This has a better chance of occurring if the takeoff is semihard (i.e.
exponential, but with a smaller exponent than you conjecture).

> In all other cases, the question is unambiguous; the sooner you make it
> to the Singularity, the safer you are. A multipolar militarized
> technological civilization containing solely human-level intelligences
> is just not safe.

Yes, if you assume the Singularity will come out well, then the sooner you
make it there, the better off you are. Obviously.

-- Ben G
