Re: Ben vs. Ben

From: Brian Atkins (brian@posthuman.com)
Date: Sat Jun 29 2002 - 19:13:37 MDT


Ben Goertzel wrote:
>
> The difference of opinion between us seems to be that I think there will be
> a moderately long phase in which we have an AGI system that:
>
> a) has an interesting degree of general intelligence, suitable for
> experimenting with and learning about AGI
>
> b) has no chance of undergoing a hard takeoff
>
> You and Eliezer seem to assume that as soon as a system has an
> at-all-significant degree of general intelligence, it's a nontrivial hard
> takeoff risk. As if, say, a "digital dog" is going to solve the hard
> computer/cognitive science problems of optimizing and improving its own
> source code!
>
> Maybe under Eli's design for AGI, a) and b) are not compatible, but in
> Novamente they are.

You /think/ they are, you mean. BTW, have you read this paper?
http://www.transhumanist.com/Waste.htm

>
> I think we have confidence about different things. You and Eli seem to have
> more optimism than me that simple hard-takeoff-prevention mechanisms will

We have little optimism about any of this. In fact, we still worry that
something will go wrong even with that little thing we are going to be paying
you to do. Our goal is to OVERengineer so many safety mechanisms into our
design that we never even come close to exceeding some "maximum risk level".
I think this is all in CFAI somewhere.

>
> > I'm glad of your uncertainty, but you're not handling it like you would
> > rationally handle it in the case of an existential risk; you're handling
> > it more like you would handle starting a business with someone else's
> > money, and if it doesn't work out then "oops, oh well". Not good enough.
>
> While I understand the need to temper my natural entrepreneurial,
> risk-taking streak in these matters, I think your criticism is a bit too
> strong here. You need to understand that my estimate of the current
> existential risk of Novamente having a hard takeoff is really
> infinitesimally small. That is why I do not take the risk seriously. This
> risk is vastly less than the risk of nuclear war breaking out and killing
> everyone, for example. As Novamente progresses, a real existential risk will
> one day emerge, and *as we get closer to that point* (before actually
> reaching it!) I will start taking it very seriously.

Famous last words? You seem uncertain about so many other issues, yet on
this one you are so utterly sure that an outside organization had to pay
you just to get your coders to spend a few weeks adding a simple takeoff
notification warning system? I think somehow you are still missing my
point. The point, to reiterate, is not what your own intuition says about
the risk. The point is to always imagine that you might be wrong, and if
there is a relatively simple addition you can make to the design to reduce
or eliminate the risk, you should always make it. What is three weeks
compared to the stakes described in the paper I referenced above?

>
> By the way, I treat starting a business with someone else's money the exact
> same way I treat starting a business with my own money. In fact I was a lot
> more fiscally conservative than some of our investor/executives at Webmind
> Inc. I am not afraid to risk my own cash or my own time (or even if it came
> to it, which it hasn't yet, my own life) on my ideas, not at all. My life
> history shows this pretty well.

Good for you, although the .com era showed that some other people were not
so careful when taking risks with other people's property. Sometimes your
ideas about embedding your own morality into your AI make me feel that you
are doing almost the same type of thing: acting in a riskier, less critical
fashion because you feel there will be little or no risk to yourself, since
the AI will act in accordance with your beliefs.

>
> > Any other legitimate things Eliezer or others pointed out to you privately
> > or publicly should be addressed. The issue should be looked at from all
> > sides. Three times. Then look at it again.
>
> Of course. There is time for that, because Novamente has effectively zero
> existential risk at the moment.

In your opinion.

>
> The design, if fully implemented, would in my view pose a real existential
> risk, but it is just not there yet.
>

In your opinion.

You must be the wisest person on the planet, to know all this with such
certainty that you feel OK playing dice with us all.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

