AI boxing

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Sep 17 2004 - 05:41:39 MDT


Hi,

I tend to keep quiet during these AI-BOX discussions, but I'm moved to chip
in some brief comments at this point, motivated by an offline conversation
with Chris Healey.

As I see it, the real question is whether we want to create an AI that's
smarter than us and let it influence human affairs in a significant way.
Whether we let it have direct physical control over the world outside its
box is close to irrelevant.

More explicitly:

1)
It's just silly to ask whether this or that human could keep the AI in the
box if they *really really wanted to above all else*. If some human is
fanatically convinced that letting the AI out of the box is bad, of course
they may be able to sit there, ignore what the AI says, and leave it in the
box. Sure, humans are capable of all kinds of actions, noble and ridiculous
and good and bad, etc. So what?

2)
If we're going to create an AI and put it in a box and keep it from having
any influence on the world under any circumstances, then why bother? Just
out of some kind of bizarre cross-species sadism? ;-) But if we're going
to create an AI and let it interfere with our world *indirectly* -- via
suggesting technologies to us, helping us invent new math, etc. -- then does
it really matter whether we literally let it out of the box or not? A
sufficiently smart creature will find some way to influence us anyway, via
the technologies it creates for us, and so forth. Thus the issue of letting
the AI out of the box becomes a judgment call, assuming one has already made
the judgment that creating an AI and letting it influence the world is a
worthwhile thing.

In short, it really makes no sense to create an AI, allow it to indirectly
affect human affairs, and then make an absolute decision to keep it in a
box.

And it also makes no sense to create an AI and not allow it to affect human
affairs at all, under any circumstances. This is a waste of resources.

So creating an AI-BOX may be a useful interim strategy, and conceivably even a
useful long-term strategy, but it's not something we can count on
absolutely.

Thus I suggest that we spend our time discussing something else ;-)

-- Ben G



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:48 MDT