RE: Ben vs. Ben

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 29 2002 - 22:21:07 MDT


hi,

> Famous last words? You seem uncertain about so many other issues, but on
> this one you are so utterly sure that an outside organization had to pay
> you just to get your coders to spend a few weeks adding a simple takeoff
> notification warning system? I think somehow you are still missing my
> point. The point to reiterate is not about what your own intuition is
> regarding the risk. The point is to always imagine that you might be
> wrong, and if there is a relatively simple addition you can make to the
> design to reduce or eliminate the risk you should always do that. What
> is three weeks compared to that paper I referenced above?

The odds of the *current Novamente version* going superhuman are on the
same order as the odds of my left elbow suddenly turning into a cauliflower,
or of the next crepitation I exude accidentally incinerating the solar system.
I do not live my life or conduct my work based on paying serious attention
to events of such incredibly small probability!

I am uncertain about a lot of things, and I'm quite certain about a lot of
things too. The discussion on this list tends to focus on the really deep,
hard problems that I'm uncertain about....

> Good for you, although the .com era shows that some other people were not
> so good at taking risks with other people's property.

Well, Webmind Inc. took risks and lost. But the biggest risks we took were
in fact our CEO's ideas, and he was also our lead investor, risking his own
money. He was a risk-taking guy, having made his millions as a trader in
some fairly speculative markets....

However, in business all you're risking is money and time, whereas with AGI
the stakes are considerably higher, as we all know.

> Sometimes your ideas
> regarding embedding your own morality into your AI make me feel that you
> are doing almost the same type of thing: acting in a riskier and less
> critical fashion since you feel like there is going to be less or no risk
> to yourself due to the AI acting in accordance with your beliefs.

This is just waaaaay off. I think the risk to ME is exactly the same as the
risk to anyone else on Earth. None of the beliefs I intend to teach
Novababy involve placing ME in a privileged position above other humans.
They only involve placing HUMANS in an initially privileged position in the
AI's value hierarchy.

> > Of course. There is time for that, because Novamente has
> > effectively zero existential risk at the moment.
>
> In your opinion
>
> > The design, if fully implemented, would in my view pose a real
> > existential risk, but it is just not there yet.
>
> In your opinion
>
> You are certainly the wisest person on the planet to know all this with
> such certainty that you feel ok with playing dice with us all.

In each case, we can substitute "In my opinion, and that of everyone else
who has studied the codebase."

The idea that Novamente has the potential to *ever* be smarter than
Microsoft Word is *also* "just my opinion"... or rather, "just the opinion
of me and the others who have studied the codebase."

Can't you see that if the odds of a certain software system going superhuman
are *sufficiently low*, then no protective measures are necessary, or even
meaningful?

I could give you a long list of other people with would-be AGI systems:
Peter Voss of A2I2, Pei Wang, Cyc, .... All of these folks also have
incomplete would-be AGI systems, and all of them also assess that their
systems have effectively no chance of going superhuman until much further
coding work is done on them.

I guess the reason you're pushing me on this issue, and not them, may partly
be that you suspect I have a slightly higher chance of success than these
guys. So I should be flattered.... I also think I have a higher chance of
success than they do. But that feeling, though quite strong, is much, much
weaker than my very solid knowledge that the current codebase *cannot go
superhuman*.

-- ben g


