RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 22 2002 - 12:08:33 MDT


Hi Brian,

> See Eugen, this is one of the major complaints people around these parts
> probably have regarding your ideas. They are based on things you wish for,
> but don't seem to really exist or work in reality. I know you WISH we
> had working cryonics, perfect anti-aging and disease prevention tech, and
> everyone had their own mini space colony, but none of this seems likely
> to happen any time soon.

But Brian, it is certainly plausible to say that anti-aging drugs and
cryonics will work some time soon -- just as plausible as saying the same
about AI.

I happen to think that real AI can happen a lot faster than truly effective
anti-aging drugs, but I hold this view for intuitive rather than rigorous
reasons.
I think AI can happen soon because I think I know how to build one, and just
need some time and money to do it (the amount of time being decreased if the
amount of money becomes greater ;).

I think aging research will go frustratingly slowly, because human aging is
intrinsically slow to study (it takes a long time to run tests involving
humans, and it's relatively slow even in other mammals).

I think cryonics could probably be mastered fairly quickly if a lot of $$
were put into it. The remaining problems have to do with things like
"inventing a very fast way of heating up tissue" and "creating drugs to
counteract the toxic effects of cryoprotectants," which intrinsically seem
like easier problems than those of AI or truly effective anti-aging.

Anyway, I don't think Eugen is guilty of wishful thinking regarding
cryonics, space travel or anti-aging, any more than Eli and you and I are
guilty of wishful thinking regarding AI!!

> Meanwhile, rather than admit that there just might be a /possibility/ of
> fixing all this via an AI technology that can be built and tested in
> such a way as to be likely less risky than letting human uploads run
> wild, you aren't interested in even seriously investigating.

I have thought about this very seriously and I think that superhuman AI is a
MORE risky path than human uploads. There are a lot more unknowns with
superhuman AI; we are dealing with a different sort of embodiment AND a
different sort of mind all at once.

However, I think the greater rewards are more than commensurate with the
greater risks.

>
> We do have differing reality models, and yours seems based on an utter
> surety that AI must go evil or uncaring (or at least we can't tell what
> will happen).

To me, utter surety that AI will go evil or uncaring is foolish. But so is
surety that AI can be made good or caring through appropriate education or
engineering.

Accept the unknowability of the future. We must do the best we can, but we
should guard against overestimating the likely effectiveness of our
activities.

Anyway, all in all, I don't agree with Eugen's perspective either, Brian.
It seems to combine, among other features:

a) skepticism about actually building real AI any time soon
b) kurzweil-ish optimism about eventual exact human brain simulation
c) fear of what real AI may do
d) faith in the ability of laws to restrain the advent of dangerous
technologies

I agree with him that exact brain simulation will be possible before too
long, and I agree with him that a bad outcome from superhuman AI is a real
possibility. But I doubt laws will be effective at constraining global tech
development, and of course I'm not an AI skeptic.

But I don't know why I'm spending time partially defending Eugen; if I
recall correctly, about a year ago he said he was putting me in his killfile
so he wouldn't have to read my e-mails anymore, with their offensive AI
optimism ;>

-- Ben G


