Re: guaranteeing friendliness

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Wed Nov 30 2005 - 22:15:34 MST


On Tue, 2005-11-29 at 20:13 -0500, Richard Loosemore wrote:

> I repeat: why is extreme smartness capable of extreme persuasion?

At the risk of being redundant:

When we ask if an extremely intelligent being can be extremely
persuasive to humans, we are not wondering about the nature of
intelligence, but about the nature of human beings. The real question
is: "how exploitable are human beings?" If human beings can in principle
be manipulated to perform a certain action, then sufficiently powerful
inference would be able to figure out how.

So how exploitable are we? As a lower bound, we can say that human
beings can in principle be convinced to do anything that one human being
has convinced another human being to do.

This lower bound can't be anywhere near what can be done in principle,
though, because the effort that has gone into solving the problem of
exploiting humans, while gargantuan, is nowhere near commensurate with
the size of the problem. Understanding how to optimally exploit humans
is a problem on the order of fully understanding human cognition.

So far, no serious general intelligence has been used to try to solve
this problem. Human general intelligence is very small compared to all
of human cognition, which is what we are attempting to analyze when we
try to exploit other humans. So when manipulating people, we have to use
specialized tools which we evolved for this purpose.

I strongly suspect that, when trying to predict the actions of others,
we look to our own decision mechanisms and extrapolate that others would
make analogous decisions. This approach is more likely to fail when
dealing with a human who is very different from oneself, for example
someone less intelligent. So the most generally intelligent people may
not be the best equipped to manipulate normal human beings (so I'm
predicting that, for someone on this list, an average man off the street
would be a more difficult gatekeeper in an AI-Box experiment than an
average college graduate would be).

So I'm saying that more intelligence should not help a human to
manipulate other humans as much as it would help with other tasks.

On the other hand, for a mind smart enough to socialize with humans
_using general intelligence_, a little more intelligence should go a
long way.
