Re: [SL4] brainstorm: a new vision for uploading

From: Nick Hay (nickjhay@hotmail.com)
Date: Thu Aug 14 2003 - 17:31:20 MDT


king-yin yan wrote:
> I don't have a clear understanding of the big picture yet, but I think I've
> spotted a mistake here: "Robust" Friendliness requires non-anthropomorphic,
> mathematical / logical precision. Anything less than that would be risky.

Formal mathematical systems are one tool humans use to solve problems. I don't
think they're well suited to the task of describing and transferring human
moral structure and content to an AI; I don't think this process can be well
described mathematically.

*** UPDATE: specifically, formal systems humans create directly via axioms.

> However, Friendly to whom? We seem to have already excluded other
> primates from consideration, not to mention animals. Even if that is OK,
> the defintion of Friendliness will become more problematic when uploading
> becomes available. Are uploads humans? Do copies have separate votes or
> 1 vote? I'm not sure how a formal system of Friendliness is capable of
> dealing with such questions.

"Friendliness" is very different to "friendliness". A Friendly AI is one that
shares the moral complexity we all share - the adapatations we use to argue
and think about morality as we are now. The Friendly AI doesn't quite share
all human moral complexity, not all parts are desirable (eg. selfish aspects
of morality), but humane moral complexity the kind of morality structure (and
content) we'd want to have.

So primates and animals are certainly not clearly ruled out - plenty of human
moralities judge excluding them to be a bad thing, all other things equal. The
aim is not to answer these questions at the start, which would be a far more
political solution, but to give the FAI itself the ability to think about the
issues and answer these questions as it gets smarter and more humane.

Friendliness isn't a formal system, certainly not in the moral-law sense -
such systems are far too fragile: typically, manipulating or adding axioms
vastly changes the system. Formal systems in general lack the flexibility and
structure of the human thoughts that create them. We don't want to transfer
the moral codes of law that humans can create, but the ability to create those
codes in the first place. The programmers don't decide what is right and wrong.
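
To see just how fragile: in classical logic, adding a single axiom that
conflicts with an existing one doesn't merely perturb the system, it collapses
it - any proposition Q whatsoever becomes derivable (the principle of
explosion):

\begin{align*}
&P           && \text{(existing axiom)} \\
&\neg P      && \text{(newly added axiom)} \\
&P \lor Q    && \text{(from $P$, by $\lor$-introduction)} \\
&Q           && \text{(from $P \lor Q$ and $\neg P$, by disjunctive syllogism)}
\end{align*}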

For an example of the kind of theory of Friendliness I mean, albeit an
incomplete and outdated one, have a look at http://intelligence.org/CFAI . It
describes various useful structures: those needed to understand the idea of
approximating a goal (External Reference Semantics), to represent and acquire
the forces underneath human morality (Shaper/Anchor Semantics), and to
generally examine all the causes behind the AI's own creation (Causal Validity
Semantics). These are the AI equivalents of certain kinds of moral structure
we all share (all being members of a single species).
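
CFAI presents these structures in prose rather than code, but here's a toy
sketch of the flavour of External Reference Semantics. To be clear, the class
name, fields, and update rule below are my own illustrative inventions, not
anything specified in CFAI:

    # Toy illustration only: a goal system whose current goal content
    # is treated as a revisable approximation of an external referent
    # ("what the programmers actually meant"), not as a fixed axiom.

    class ExternalReferenceGoal:
        def __init__(self, initial_content, confidence):
            # Current best approximation of the externally-referenced goal.
            self.content = initial_content
            # How strongly this approximation is trusted (0.0 to 1.0).
            self.confidence = confidence

        def consider_correction(self, proposed_content, evidence_strength):
            # Programmer feedback is evidence *about* the referent,
            # not an arbitrary overwrite of the goal itself.
            if evidence_strength > self.confidence:
                self.content = proposed_content
                self.confidence = evidence_strength
            # Either way the goal stays open to future revision;
            # low confidence means "check before acting on this".

    goal = ExternalReferenceGoal("minimise involuntary harm", 0.6)
    goal.consider_correction("minimise involuntary harm, including animals", 0.8)

The point isn't the particular update rule - it's that the goal is represented
as an approximation open to correction, rather than as an axiom the AI would
defend against correction.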

> The second problem is that Friendliness will be designed by a group of
> *human* programmers, not by an AI. If we then concentrate all our
> computational resources to the FAI, then Friendliness will effectively
> become some sort of universal political solution. In other words:
> A group of human programmers will have to design a perfect political
> system. That sounds very unrealistic...

The programmers don't design the solution: 1) they aren't smart/rational/humane
enough, and 2) a solution they designed wouldn't be programmer-independent. A
FAI written by the Marquis de Sade should not differ from one written by a
Buddhist monk, or any other human, because FAIs are designed to be insensitive
to the particular moral conclusions or opinions that differ between potential
programmers, and sensitive only to the things they (and non-programmers alike)
all share.

> The question is why put ourselves in a vulnerable position, sub-
> -ordinating to a superintelligence which doesn't even exist now.

You should really read CFAI: Beyond Anthropomorphism :) Our position right
now, with a whole bunch of near-equals getting more and more powerful
weapons, is vulnerable. Indeed, every existential risk makes us vulnerable -
that's half the point of a Singularity in the first place. We can't eliminate
all risks, or remove all vulnerability, but we can decrease it.

FAI-originated superintelligences aren't like tribal leaders, or tribal
councils, or governments, or any other [human] structure which is
superordinate to other sentients. The SI doesn't have, nor does it want,
political control as humans do. It wants sufficient control to ensure bullets
simply don't hit anyone who doesn't want to be shot, for instance, but it
doesn't want sufficient control to ensure everyone "agrees with it".
Anthropomorphisms - that is, almost any comparison between AIs and humans -
don't help understanding.

> Personally I think the most appealing solution is to let people augment
> themselves rather than create autonomous intelligent entities. But we
> don't have a direct neural interface to connect our brains to computers.

Personally I think that's one of the least appealing solutions. Humans are
autonomous intelligent entities with reams of known flaws. Fears about an
entity, or group of humans, rising above the rest and subordinating them are
far better founded than those about AIs because, historically speaking, that's
what humans *do*. Often they proclaim they're doing the best for everyone,
and often they'll believe it, but rationalisation distorts actions in a
self-biased manner. Unless there's some way to augment everyone at the same
rate - and in fact even then - it doesn't look good.

Part of the appeal of the Friendly AI approach is starting from a blank slate:
making a mind focused on rationality and altruism, not politics.

However, there is a matter of time here. I think it's far easier to spark a
superintelligence from an AI than from a human brain, in the sense that I
imagine it'll be possible to do the former first. So attempts at solely
augmenting humans will be too late, since I can't see everyone stopping their
AI projects. However, things would be very different if the human augmentation
route to superintelligence were significantly faster than the AI route.

(for further details here, see http://intelligence.org/intro/whyAI.html)

Mind you, various human augmentations could certainly help things - perhaps a
little device that alerts humans when they're rationalising, or something
that increases the level of mental energy without compromising the ability to
think properly. But augmenting or uploading humans, as the sole route,
doesn't seem either desirable or practical.

> Unless we get uploaded otherwise we'll have to rely on LANGUAGE to
> communicate with computers. This *linguistic bottleneck* is the hardest
> problem I think.

We'll have to rely on thoughts, and the things they do. Using human language
to directly communicate with an AI is more of a final step - the AI has to be
quite mature to understand human language directly, I suspect. But there are
other ways to communicate, or more generally transfer information to the AI.
For instance, posing simple problems for the AI to solve.

> PS Thanks for everyone else's reply...

Thanks for your questions.

- Nick


