Re: [SL4] brainstorm: a new vision for uploading

From: Nick Hay (nickjhay@hotmail.com)
Date: Fri Aug 15 2003 - 22:20:04 MDT


king-yin yan wrote:
> There is a dilemma in here, on the one hand a formal system (made of
> simplistic rules and thus mathematically analysable) will be predictable
> and safe, but it can't handle the moral complexities that we would want.

Right, it'll be predictable. It won't necessarily be safe - a lot of those
moral complexities are necessary for safety.

> On the other hand, the complex moral structure that you described
> above will require a connectionist approach or something equivalent.
> Meaning that it has distributed representations, graded response,
> generalization, and being able to be *trained*.

Connectionism, in the sense of building everything out of neurologically
inspired networks, has at least two problems. Firstly, it is a substrate that
may not be best suited to the kind of computational hardware we have - fast,
digital, serial. It seems more suited to implementing a mind on meat - cells
with a slow rate of computation, which need to be massively parallelised to
get anything done. Secondly, it conflates the levels of organisation - you
don't introduce all information at the lowest level, code, but at various
different levels built on top of code. You don't design things solely at the
atomic level.

See http://www.intelligence.org/seedAI/ for a more sensible AI design.

Although I'm not quite sure what you mean by "connectionist approach or
something equivalent". Can you elaborate? What AI methods is it contrasted
with?

> Then you have a big
> problem. Practically such a connectionist network is quite similar to a
> human being, but much smarter.

Similar to humans in what way? Humans, like all species, contain a lot of
specific complexity. A lot of this is important complexity that needs to be
explicitly introduced (eg. the structure beneath humane morality).

Why will connectionist networks be smarter?

>Every human would end up trying to
> talk to this AI like crazy in order to influence its behavior in their
> favor...

This is unnecessary in a Friendly AI, in the sense that it won't make any
difference (this is a good thing!) - the FAI's final state should not depend
on who programmed it, or who talked to the AI in ver youth, except insofar as
these decide the binary issue "Friendly or unFriendly". This independence is a
desirable goal towards which the FAI can direct its intelligence. Ensuring
the FAI remains Friendly is a complex and important design consideration.

> I can understand why you're alarmed by intelligence augmentation, what
> you say is basically: "Computational power is dangerous, let's concentrate
> all the power in one AI and let it rule". But you seemed to downplay the
> fact that 1) the Friendliness system is designed by human programmers;
> 2) it needs to be trained by humans. I'm afraid a lot of people will be
> skeptical about this.

There is no distinction between one AI and many - unless the many have
divergent goal systems ie. they're not all Friendly. If "let the AI rule"
uses "rule" in the same sense that "physics rules humans" then sure. If
you're imagining a benevolent dictator who exerts social control, then no.

Actually, I don't think that phrase by itself is a good summary of what I'm
trying to say. And I can't think of a single sentence that'd accurately
describe it :)

Of course the number of skeptical people is often independent of the truth of
a given statement, but you don't mean that. A Friendly AI is designed by
human programmers, and its *initial* training is specified by human programmers.
This is true for all AIs. A Friendly AI is specifically designed to not be
sensitive to the differences between humans or to its particular programmers,
not sensitive to the various classes of mistakes the programmers can make,
etc. This is not an issue that is downplayed (although perhaps I have
downplayed it in my posts) but one that is explicitly recognised. Most of
CFAI describes particular structures needed for a solution of this problem.

> >FAI orginated superintelligences aren't like a tribal leaders, or tribal
> >councils, or governments, or any other [human] structure which is
> >superordinate to other sentients. The SI doesn't have, nor does it want,
> >political control as humans do. It wants sufficent control to ensure
> > bullets simply don't hit anyone who doesn't want to be shot, for
> > instance, but it doesn't want sufficent control to ensure everyone
> > "agrees with it", for instance. Anthropomorphisms, that is almost any
> > comparison between AIs and humans, don't help understanding.
>
> That sounds like a universal political solution. The FAI will decide
> whether wars should be fought or not, who are criminals and deserve what
> kinds of punishment, etc.

Fight wars? Punish criminals? *Have* criminals? Why is any of this necessary?

"universal political solution"?

> >Personally I think that's one of the least appealing solutions. Humans are
> >autonomous intelligence entities with reams of known flaws. Fears about an
> >entity, or group of humans, rising among the rest and subordinating them
> > are far more founded than those about AIs because, historically speaking,
> > that's what humans *do*. Often they proclaim they're doing the best for
> > everyone, and often they'll believe it, but rationalisation distorts
> > actions in a self-biased manner. Unless there's some way to augment
> > everyone at the same rate, and in fact even then, it doesn't look good.
>
> What you're depicting here is dangerously close to dictatorship. On the
> other hand, free augmentation is actually not that bad.

Do you think I'm suggesting we suppress human augmentation technologies? Is
that what you mean by "dictatorship"? If so, I wasn't clear: I was arguing
that *accelerating* human augmentation isn't the best use of our efforts and
that accelerating *Friendly* AI is, at least at present, a far better
investment. Both because Friendly AIs are safer and more desirable than your
typical human augment or upload, and because Friendly AIs of a given
intelligence should be easier to get and thus exist earlier. This is
important because it means unFriendly AIs (ie. any AI that isn't Friendly)
could come before human augments/uploads with enough intelligence to protect
against them.

> Just because
> humans are free to augment their intelligence does not mean that they
> will start using that intelligence to harm others. Most likely a kind of
> morality will emerge in the population so no one will have an absolute
> advantage over others.

You're right, people won't start using their intelligence to harm others,
deliberately or not. The risk is they'll continue as they have throughout
human history.

Of course I think humans should be free to augment their intelligence; I don't
think we should suppress people trying to augment humans. However I don't
think it's a practical or desirable route to the Singularity - a human mind
isn't the best seed for a superintelligence, nor is a human brain an
easy-to-expand substrate (compared to that of an AI). And all those other
reasons I've mentioned elsewhere.

> >Part of the appeal of the Friendly AI approach is starting from a blank
> > slate. Making a mind focussed about rationality and altruism, not
> > politics.
>
> It's much more complicated than that, if you look closer...

Some things are surprisingly simple. What complexities am I ignoring?

> >However, there is a matter of time here. think it's far easier to spark a
> >superintelligence from an AI than from a human brain, in the sense that I
> >imagine it'll be possible to do the former first. So attempts at solely
> >augmenting humans will be too late, since I can't see everyone stopping
> > their AI projects. However things would be very different if the human
> > augmentation route to superintelligence was significantly faster than the
> > AI route.
>
> There's an even more important question: Whether the AI can really be
> controlled by its own designer.

"Control"? If you mean control in the sense of "the AI obeys our commands"
kind of control, then this is not what we want. This is termed the
"adversarial attitude" and is not a workable solution to Friendly AI.

If you mean control in the sense of "best ensure the FAI remains
humane/Friendly" then this question has a complex answer. For many AI designs
this is not possible. Friendly AI is an effort to explore this possibility -
it should be possible to have at least as much "control" (in this sense) over
FAIs as over any group of humans, although you can't simply assume that's
true. This is a complex matter that cannot be simply decided. CFAI (of
course) goes into more details.

> On the one hand you want the AI to have
> common sense. That requires a connectionist appraoch (or something
> similar).

Why is something similar to a connectionist approach necessary to implement
and transfer common sense?

> Once you have connectionism then the AI is pretty much
> autonomous.. Then it is somewhat like a human child. That would be like
> all humanity having only *1* kid and giving him/her all the power.

Not really, it's more like having only *1* set of physical laws. There's no
reason to consider an FAI more like a single human child than an entire
human civilisation. It's like neither, but the important point is that you
can't use your intuitions about humans abusing power to judge the likelihood
of an FAI abusing power - your intuitions transparently assume too much. When
you reason about minds in this manner you are using adaptations specialised
for dealing with humans - they were the only class of mind around in your
evolutionary history. As such they assume specific things about minds that
don't hold in general, since they didn't need to work for minds-in-general and
could specialise (or rather, were specialised from the start) - eg. that single
minds are more likely to abuse power than multiple minds, or indeed that
minds-in-general are likely to abuse power at all.

> Now why are you so sure that a connectionist system will behave as
> you want it, given all its complex characteristics?

I don't, because I don't suggest we use anything like a connectionist system.
I suggest we use a far more complex, and well-specified, solution.
Friendliness is specifically designed to work in this kind of situation; it's
not designed to need a human safety net.

We can't guarantee it'll work, but we can compare the likelihood of it working
to other navigational scenarios. Why do you think a society of augmented
humans will behave as you want, given all its complex characteristics? Why do
you think it's more likely to behave as you want than a Friendly AI?

> >(for further details here, see http://intelligence.org/intro/whyAI.html)
>
> Thanks, I've read that, and I've browsed through CFAI briefly.

CFAI is one of those documents you don't fully understand even after reading it
closely. I personally found I had to read it multiple times, and I still don't
think I understand it all. It doesn't appear that you understand it,
otherwise you wouldn't be casually mentioning Friendly AIs "taking over" or
the need for humans to "control the AI" etc. without justification (ie.
anthropomorphisms specifically dealt with in CFAI). It really is well worth
the effort.

> The problem is AI's are likely to take over rather than care about us.
> Unless we figure out a way to control them. If we do, then it is a kind
> of augmentation (external rather than implanted).

This is anthropomorphic. If you create an unFriendly AI it uses us as
resources; it doesn't rule us like a human would. An unFriendly AI has no
reason to treat us differently from any other arrangement of matter, except
insofar as our behaviour may be specifically tuned to counteract its
goals. Perhaps that's what you meant by "take over".

What's this distinction you see between AIs and humans? Why can't an AI be as
moral or more moral than a human? Humans have specific adaptations for
taking over the tribe, for abusing power when it suits them, etc. An FAI will
not. Why will augmented humans care about us rather than take over?

> Augmenting/uploading is not necessarily undesirable.

You're right: it's not necessarily undesirable in itself. As a route to the
Singularity, explicitly contrasted with the Friendly AI route, I suspect it is.
There is the added difficulty of AI probably being easier than augmentation,
and far more probably easier than uploading - "easier" essentially meaning
"will come first".

> Sure, some people
> will end up more intelligent than others. But that's just the way human
> diversity is always like. No one is likely to attain absolute power, so I
> think that's fine.

Why isn't anyone likely to attain power? What probability would you attach to
an upload/augment turning into a world dictator, or some such thing, as
humans often do given enough power? How about a large group taking power? Or
simply destroying everything with nanoweapons? etc.

> Question: How can you have an AI understand you, without letting it be an
> autonomous entity? On the one hand we want a tool, on the other hand
> we want to make sure it will not become the master. And actually the crux
> of the problem comes from the linguistic bottleneck. Imagine if we have
> direct neural interfaces on the back of our necks, then we'll all be busy
> playing with add-on modules now, with magazines advertising all sorts of
> gadgets, like body-building etc.

You and the AI don't have to be separate minds, so the dichotomy of autonomous
vs. tool is false. But I won't go into that here. We don't want a tool, as I
think I illustrated in a previous post, and we don't want a master. However an
AI is extremely unlikely to want to become our master, unless you plan to
build in a "human-like social dominance" module, which would be an incredibly
stupid move. Social dominance is an undesirable human trait which an AI will
lack without it being explicitly introduced.

It'd be nice to have direct neural interfaces, and to the extent they're
developed and are useful they'll be used. But, especially since it appears AI
will be feasible before significant neural interfaces, it's not a good idea
to tie ourselves to this.

What problems do you see with non-neurological methods of information transfer?
How will neural interfaces solve this problem, and what kind of neural
interface is necessary? Perhaps the problems can be solved in alternate
manners? For instance, one can notice that linguistically we have problems
describing unambiguous external referents, problems pinning down the meaning.
We can study this separately, to find more feasible solutions.

Note I'm speaking specifically about Friendly AIs and not an arbitrary AI. CFAI
describes one particular FAI design, or rather a class, and the kind of
structures that are necessary. Notice the details (eg. external reference
semantics, shaper/anchor semantics) are much more specific than "a
connectionist AI" or "training an X", and it's this specificity that allows
particular statements, not warranted for AIs in general or minds-in-general,
to be made. (minds-in-general refers to the class of all possible minds, with
humans, AIs, and humane superintelligences being particular sub-classes of
this space).

I think our disagreements are caused by disagreement on more fundamental
premises, eg. what AIs can be, what Friendly AIs are. In particular, here is a
pair of contrasting views; the former is my approximation to your view, the
latter to mine:

* Friendly AIs can be no more moral than humans (or aren't likely to be), or
groups thereof, and quite likely far less. Human control, or supervision, of
(mature) FAIs would increase their safety.

* Friendly AIs can be far more moral than humans. Humans weren't designed to
be altruistic, rational, or good. They were designed to be selfish (in the
sense of increasing *their* inclusive fitness) in a hunter-gatherer
lifestyle; it's largely by accident that we can be altruistic at all (except
in the limited sense of reciprocal altruism - selfish trade). Augmented
humans can change this, but a human mind, unlike that of a Seed AI, is not
suited to major revision, and there's an additional risk that the selfishness
won't be successfully removed (note that I'm not speaking just about
deliberative desires to be selfish, but about all the aspects of the mind that
lead to selfish behaviour - rationalisation, etc.).

(this is not very complete, and possibly not very accurate, but it's a start)

These issues are discussed at length in CFAI and LOGI. One can only fit a
small amount of detail into an email. In particular, you might like to read
these sections first:

CFAI:
* 1: Challenges of Friendly AI, 1.1: Envisioning perfection
* Appendix A.1: Indexed FAQ
* 2: Beyond anthropomorphism; Interlude: Beyond the adversarial attitude

LOGI:
* 3.1: Advantages of minds-in-general

As you might guess, these issues have been discussed frequently and more
thoroughly in the past. In addition to the above documents, and the singinst
site, there's the SL4 archive. However I'm not sure how interested you are,
and how much reading you're willing to do :)

- Nick


