RE: guaranteeing friendliness

From: Herb Martin (HerbM@LearnQuick.Com)
Date: Wed Nov 30 2005 - 15:15:03 MST

> From: nuzz604
> Sent: Wednesday, November 30, 2005 10:40 AM
> ----- Original Message -----
> From: "David Picon Alvarez" <>
> > Making a long story very very short, because extreme smartness
> > can give extremely interesting rewards.
> Smartness does not in itself give the extreme ability to persuade.

> Smartness enables one to more easily figure out a strategy or
> solution to do the persuasion or other tasks, with a minimum
> number of clues (at least one but probably more, depending on the
> way the mind functions).

> > Also, and more to the point, because extreme smartness to the
> > point of having a complete theory of mind of the opponent means
> > you can find whatever paths exist in his future space of
> > development and follow those paths that lead to your own
> > release. This smartness differential though, to ensure a
> > complete theory of mind, needs to be overwhelming, not just
> > that of a human over another human.
> We are still talking about the AI as it is in a box, right? I
> agree that an AI can probably inevitably persuade -most- people
> to let it out of the box.

The AI only has to persuade one or at most a few to do
something careless.

> However, just because it is smart does not mean that it knows how
> a specific person's mind works, or even any human mind in general.

But it does mean it can learn enough, given enough time -- and in
the long run the AI has as much time as it needs.

> Let's not forget that its only form of communication is text.
> This is not sufficient to form a complete theory of mind in a
> reasonable amount of time.

Text? My computer already does passable voice input/output, and
anything smart enough to be called humanly intelligent will almost
certainly be smart enough to do near-perfect (maybe
entertainment-impersonator quality) voice input/output.

> If an IQ 170 professional persuader (lawyer, politician) gets
> sent to jail, does he have the ability to convince each and every
> IQ 100 jail guard to let him out of his cell and all the way out
> the front door to freedom? Maybe he can persuade some, but not
> all. They are trained not to do this.

But SOME prisoners (maybe without a 170 IQ) do persuade someone --
a prison guard, a nurse, or their lawyer -- to help them escape.

Or persuade them merely to take liberties with security that, for
a human prisoner, would only mean communicating over the Internet --
but for a growing AI, access to the Internet is tantamount to full
release with no chance of recovery (unless network security is
improved worldwide to a great extent).

> I am not trying to say that keeping an AI in a box is a good
> strategy, but some AI researchers might think so. They may keep
> an AI in a box and hope that they can keep it there until they
> believe that they have made it friendly.

Yes, some will believe this. Some might even find that the very
act of "keeping it in the box" may well make that AI unfriendly
(reasoning from human reactions, which will not always apply to a
particular type of AI).

> I think many of us need to be careful before making claims about
> AI behavior or making claims that intelligence is all powerful
> (intelligence is nothing without facts). The bottom line is if
> you have to worry about keeping an AI in a box, you probably
> aren't doing a good job in making it friendly in the first place.

Sounds reasonable. And you cannot guarantee a friendly AI,
nor can you even guarantee you will be able to classify it.

> Stay open-minded.

Excellent advice.

> Mark Nuzzolilo


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT