RE: Ben vs. Ben

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 29 2002 - 17:36:03 MDT


hi brian,

> > 2)
> > It's important to put in protections against unexpected hard takeoff, but
> > the effective design of these protections is hard, and the right way to do
> > it will only be determined thru experimentation with actual AGI systems
> > (again, experimental science)
>
> This is not good enough. No AI project should find itself in the situation
> of both being in a potential takeoff situation, and simultaneously having
> no mechanisms to prevent a takeoff. If you can't figure this out, then you
> should never run your code in the first place. To me, this looks like
> another case of your overoptimism (which is the exact opposite of what is
> required when dealing with existential risks- you need to practice walking
> around all the time expecting doom) leading to unnecessary risks.

Well, Brian, on this point I am willing to partially concede that you're
right ;)

The difference of opinion between us seems to be that I think there will be
a moderately long phase in which we have an AGI system that:

a) has an interesting degree of general intelligence, suitable for
experimenting with and learning about AGI

b) has no chance of undergoing a hard takeoff

You and Eliezer seem to assume that as soon as a system has an
at-all-significant degree of general intelligence, it's a nontrivial hard
takeoff risk. As if, say, a "digital dog" is going to solve the hard
computer/cognitive science problems of optimizing and improving its own
source code!

Maybe under Eli's design for AGI, a) and b) are not compatible, but in
Novamente they are.

I think we have confidence about different things. You and Eli seem to have
more optimism than I do that simple hard-takeoff-prevention mechanisms will
work. And I seem to have more confidence than you do that there will be a
period of infrahuman AGI in which the risk of hard takeoff is very very very
very low, in which all sorts of things to do with computer consciousness,
hard takeoff prevention, intelligence measurement and AGI in general can be
studied.

I can accept that perhaps I'm overoptimistic, and left to my own devices I
might wait too long to put appropriate protection in the system. But the
Novamente project involves a diverse crew, and it's incredibly unlikely that
a serious hard takeoff risk would be created by us without even *one* of the
team members wanting to put in protection.  So an error in judgment on my
part would not be enough to cause danger; it's not an autocratic project at
all, so it would have to be an error in judgment shared by 10 people, all of
whom understand these issues pretty well and have different perspectives on
them.  And that is just counting the Novamente team, not any advisors who
would be brought in once the team perceived a plausible hard takeoff
situation approaching.

Anyway, having said all that, the bottom line is: on this point I am willing
to partially concede.  Perhaps my past attitude has been too cavalier in
this regard.  Based on the concerns of you, Eli, James and others, I will
put a serious "unexpected hard takeoff prevention" mechanism into Novamente
well before my intuition tells me it's needed.  I'll put a very simple such
mechanism in now, even though it's not useful yet, and put in a more serious
one together with the goal system when the goal system is implemented.  (A
hard takeoff is essentially impossible before the goal system is implemented;
before that point the system is really just a problem-solving engine, not a
mind.)
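
Purely as an illustration of the kind of "very simple" mechanism I have in
mind (a hypothetical sketch, not actual Novamente code; every name and
threshold in it is made up), such a tripwire could just watch a couple of
coarse indicators, say the rate of self-modification events and the process's
memory footprint, and halt everything for human review if either exceeds a
preset bound:

import resource
import sys
import time

class TakeoffTripwire:
    """Halts the process if coarse activity indicators exceed preset bounds.

    Hypothetical illustration only; the hooks and thresholds are made up.
    """

    def __init__(self, max_self_mods_per_minute=10, max_memory_kb=4000000):
        self.max_self_mods_per_minute = max_self_mods_per_minute
        self.max_memory_kb = max_memory_kb
        self._self_mod_times = []

    def record_self_modification(self):
        """Hook: call whenever the system rewrites any of its own procedures."""
        now = time.time()
        # Keep only events from the last 60 seconds.
        self._self_mod_times = [t for t in self._self_mod_times if now - t < 60.0]
        self._self_mod_times.append(now)
        self.check()

    def check(self):
        """Halt for human review if either bound is exceeded."""
        # ru_maxrss is reported in kilobytes on Linux.
        mem_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        if (len(self._self_mod_times) > self.max_self_mods_per_minute
                or mem_kb > self.max_memory_kb):
            sys.exit("Tripwire triggered: halting for human review.")

Something this crude obviously would not stop a genuinely smart system that
wanted to route around it; the point is just that the hooks cost nothing to
put in place early, well before they are really needed.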

> > 3)
> > Yes, it is a tough decision to decide when an AGI should be allowed to
> > increase its intelligence unprotectedly. A group of Singularity wizards
> > should be consulted, it shouldn't be left up to one guy.
> >
> > MAYBE I will also replace the references to my own personal morality with
> > references to some kind of generic "transhumanist morality."  However, that
> > would take a little research into what articulations of transhumanist
> > morality already exist.  I know the Extropian stuff, but for my taste, that
> > generally emphasizes the virtue of compassion far too little....
>
> Speaking as a human who is potentially affected by your AI, this isn't
> good enough for me. You'll have to come up with a better answer before I'll
> willingly go along with such a plan.

This is a point that I'm far less willing to concede on, Brian.

It seems to me that you guys don't really accept the diversity of human
ethical systems. There simply is no consensus ethics in the human race.
There are a lot of people who think that creating computers displaying
apparent consciousness is immoral, that using computers for drug discovery
or life extension is immoral, etc. etc.

It is not possible to teach Novababy a "universal human morality or ethics"
because no such thing exists.  A particular choice has to be made, just as,
when raising a child, a choice has to be made about which particular morals
to inculcate them with.

The best you're going to get on this score is an invitation to participate
in teaching Novababy when it's ready for that. Then your own particular
twist of the human ethical code will get to play a role in Novababy's
initial condition too.  Note that I'll invite you for this because your
ethical code is reasonably similar to my own.  Osama bin Laden will probably
not get an invitation, although he is just as human as we are and has his own
moral code,
roughly shared by at least tens of millions of others.

I'm sorry if this makes you uncomfortable. The alternative proposed in
CFAI, insofar as I understand it, makes no sense to me even after repeated
readings and discussions.  A lot of the CFAI document does make sense to me,
but not this aspect.

> I'm glad of your uncertainty, but you're not handling it like you would
> rationally handle it in the case of an existential risk- you're handling it
> more like you would starting a business with someone else's money, and if it
> doesn't work out then "oops, oh well". Not good enough

While I understand the need to temper my natural entrepreneurial,
risk-taking streak in these matters, I think your criticism is a bit too
strong here. You need to understand that my estimate of the current
existential risk of Novamente having a hard takeoff is really
infinitesimally small. That is why I do not take the risk seriously. This
risk is vastly less than the risk of nuclear war breaking out and killing
everyone, for example.  As Novamente progresses, a real existential risk will
one day emerge, and *as we get closer to that point* (before actually
reaching it!) I will start taking it very seriously.

By the way, I treat starting a business with someone else's money the exact
same way I treat starting a business with my own money. In fact I was a lot
more fiscally conservative than some of our investor/executives at Webmind
Inc.  I am not afraid to risk my own cash or my own time (or even, if it came
to it, which it hasn't yet, my own life) on my ideas, not at all.  My life
history shows this pretty well.

> Any other legitimate things Eliezer or others pointed out to you privately
> or publicly should be addressed. The issue should be looked at from all
> sides. Three times. Then look at it again.

Of course. There is time for that, because Novamente has effectively zero
existential risk at the moment.

The design, if fully implemented, would in my view pose a real existential
risk, but the implementation is just not there yet.

-- ben g


