Re: Effective(?) AI Jail

From: Gordon Worley (redbird@rbisland.cx)
Date: Wed Jun 13 2001 - 08:18:23 MDT


At 2:41 AM -0500 6/13/01, Jimmy Wales wrote:
>Brian Atkins wrote:
>> Well here you run into the familiar (should be familiar to you by now)
>> problem of existential risks. We most likely can't know the answer to this
>> issue either way until we actually could test such a situation out. And
>> even then you run a big risk if it turns out that Jimmy/James is wrong. So
>> what I'm trying to say (along with Eliezer) is you have to be conservative
>> when it comes to these kinds of risks, and make choices based more on what
>> can go wrong, even if the perceived probabilities are low.
>
>Sure, but we also have to be aware that Pascal's argument for the
>existence of God (which has the same structure) is fallacious.

Not to get into a religious debate with you (and Powers know I don't
want to ;-)), but Pascal didn't exactly have any rational evidence
for the existence of God, and he knew it. We, on the other hand,
have rational reasons and suggestive evidence (we don't want actual
evidence, because getting it would mean we're all hosed) that an
unFriendly AI would be the end of us all.

>Conservatism is one thing -- but being paranoid to the point that we
>fail to act is quite another.
>
>If we're that worried about it, perhaps we should stop being advocates
>for the singularity and start a terrorist organization to do everything
>we can do stop technological progress, right?

Who would you rather trust: the Luddite or the idiot with
technology? I'll take the Luddite any day. We must do our best to
be intelligent about technology and risk assessment until such time
as we develop something smart enough to handle the bigger risks.

Also, in case you were wondering, terrorism isn't very Friendly, so I
don't think we'd do that. Maybe just tell people politely that
technological progress is not in their best interests. ;-P

Now, on the experiment: if you really want an AI cadaver to look at,
this might work, but we would have to kill the human upon vis exit
from the black box. Otherwise, ve may one day, even if we lock ver
up in a jail and try to keep ver away from computers, get access and
program an unFriendly AI. I don't think you'll find many people
willing to die for a cause as useless as getting to see what an
unFriendly AI looks like.

Wait, I'll respond to your reply right now: "but it's not useless,
blah blah." Yes, it is, for a couple of reasons. One is that it
won't help a Friendly AI learn anything: if our FAI can't handle an
UFAI when ve runs into one, we're hosed anyway. The other is that we
can't look at the code, because the UFAI might figure out vis purpose
to us and use vis code to turn us into UFAI makers.

-- 
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP Fingerprint:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003
