RE: Ben vs. Ben

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 29 2002 - 22:21:01 MDT


Brian,

Sheesh! This conversation is finally starting to get on my nerves. The
currently running Novamente version has no more chance of going superhuman
than Microsoft Word does. It's just running simple cognition and perception
algorithms on simple testing datasets. The idea that it needs to have
protective measures installed in it *right now* is about as silly as the
idea of pooping in a bomb shelter to protect against the off chance that one
of your poops explodes.

There are software systems where it's a matter of opinion whether the thing
could go superhuman or not. In that case, I agree, one should certainly err
on the side of safety and put in protective measures. There are other
software systems, like the current highly incomplete Novamente version,
Microsoft Word, or Space Bunnies Must Die! (a terrible game, by the way), on
which there is really no room for disagreement -- the thing just can't go
superhuman. In these cases, putting in protective code would be pure
silliness. I am sure that if Eliezer saw the codebase he would agree.

But I've already said this before. Since we're just going to keep talking
past each other, we should probably give the argument a rest for a little
while.

Which is convenient, since I'm going on vacation on Monday for 11 days (I'll
still be checking e-mail occasionally during that period, though I won't have
time for massive missives...).

It seems likely that later this summer I'll write up my views in something a
little more systematic than the current "Thoughts on AI Morality". (Not
something as big or systematic as CFAI, but further in that direction.)
When writing this up I'll choose my words more carefully than I do when
writing e-mails, which may decrease the amount of misunderstanding.

-- Ben G

>
> Resources don't matter really. Whether you are one guy in a garage or
> a Manhattan Project, this really doesn't change how you should be looking
> at and addressing the risks. The only difference is how much realtime
> passes.
>
> >
> > Until we have a system that implements the autonomous, goal-based
> > aspects of the Novamente design, there is effectively zero chance of
> > anything dangerous coming out of the system.
>
> More famous last words
>
> >
> > It would be more useful for us to spend our time mitigating the
> > existential risks of nuclear or biological warfare, than to spend our
> > time mitigating the existential risks of the *current Novamente
> > version* experiencing a hard takeoff, because the latter just can't
> > happen.
>
> Why not take the time now to think about and design on paper the various
> ways you expect to minimize risks in Novamente? Why is it you have time
> to sit down and do detailed design work on the seed AI part of Novamente
> but you can't also spend the comparatively smaller amount of time working on
> this additional stuff? Is it being driven by your plan to commercialize
> your effort?
>
> >
> > When we implement the autonomous goal-based part of the system and we
> > have a mind rather than a codebase with some disembodied cognitive
> > mechanisms in it, we will put the necessary protections in the codebase.
> >
> > I have now said this many times and it probably isn't useful for me to
> > say it again...
> >
>
> Perhaps if you say it enough times the existential risks will magically
> vanish.
> --
> Brian Atkins
> Singularity Institute for Artificial Intelligence
> http://www.intelligence.org/
>

