Re: ethics

From: Aubrey de Grey (ag24@gen.cam.ac.uk)
Date: Mon May 24 2004 - 12:46:34 MDT


Eliezer Yudkowsky wrote:

> The idea is that even
> if there's a huge amount of computing power devoted to looking for
> actions/plans/designs that achieve U(x) > T, such that the specific
> solutions chosen may be beyond human intelligence, the *ends* to which
> the solutions operate are humanly comprehensible. We can say of the
> system that it steers the futures into outcomes that satisfice U(x),
> even if we can't say how.

That's not my difficulty -- I have no trouble with the idea of software
that finds incomprehensibly complex ways to do something well-defined,
even in cases where all possible ways are incomprehensibly complex.
Finding a forced win from some chess positions would be an example.
What I can't see is how to define the FAI goal well enough.
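
In toy form, the structure I have no trouble with is something like
this sketch (the candidate generator and the toy utility are of course
purely illustrative, not anything proposed in this thread): the search
over candidates can be as clever or as brute-force as you like, but
the acceptance criterion U(x) > T stays humanly comprehensible even
when the winning solution is not.

    import random

    # Toy stand-in for U(x): any humanly comprehensible scoring of an
    # outcome. Here the "goal" is just proximity to 42; the point is
    # that the criterion is simple even if the search feeding it is not.
    def utility(x):
        return -abs(x - 42)

    def satisfice(candidates, threshold):
        # Return the first candidate whose utility clears the threshold
        # T. The search may be incomprehensibly clever; the test stays
        # simple and inspectable.
        for x in candidates:
            if utility(x) > threshold:
                return x
        return None  # no satisficing candidate among those examined

    # A dumb random generator stands in for "a huge amount of computing
    # power devoted to looking for actions/plans/designs".
    pool = (random.randint(0, 100) for _ in range(10_000))
    print(satisfice(pool, threshold=-1))

The chess analogy has the same shape: the test "is this a forced win?"
is perfectly well-defined even when every winning line is beyond human
comprehension. My problem is solely with writing down utility() in the
FAI case.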

> Actually you need a great deal more complex goal structure than this, to
> achieve a satisfactory outcome. In the extrapolated volition version of
> Friendly AI that I'm presently working with, U(x) is constructed in a
> complex way from existing humans, and may change if the humans
> themselves change. Even the definition of how volition is extrapolated
> may change, if that's what we want.

Right, that's just the sort of thing I was thinking of. How can we define
U(x) in terms of existing humans without a formalisation of existing humans?
How is "what humans want" defined? (Others have said the same -- I don't
mean to improve on their challenges, only to say I share their concerns.)

> > Now, I accept readily that it is not correct that complex systems are
> > *always* effectively incomprehensible to less complex systems. I
> > have no problem with the idea that "self-centredness" may be
> > avoidable. But as I understand it you are focusing on the
> > development of a system with the capacity for essentially indefinite
> > cognitive self-enhancement. I can't see how a system so open-ended
> > as that can be constrained in the way you so cogently point out is
> > necessary, and I also can't see how any system *without* the capacity
> > for essentially indefinite cognitive self-enhancement will be any use
> > in pre-empting the development of one that does have that capacity,
> > which as I understand it is one of your primary motivations for
> > creating FAI in the first place.
>
> The problem word is "constrain".

Hm, no, I think you interpret me to have meant "restrain", but I meant
only the choice of U(x). Again, I share the concerns expressed by
others -- I can't see how a U(x) can exist that prevents the FAI from
doing whole classes of things we never thought to hard-wire into
not-U(x), but that we would nonetheless want it not to do if we had
thought of them, without making the FAI fundamentally unversatile and
weak.

> I would construct a fully reflective optimization process capable of
> indefinitely self-enhancing its capability to roughly satisfice our
> collective volition, to the exactly optimal degree of roughness we would
> prefer. Balancing between the urgency of our needs, and our will to
> learn self-reliance, make our own destinies, choose our work and do it
> ourselves.

This seems intrinsically to require the FAI to consult us a great
deal, and to have some way of turning that consultation into actions
that humanity as a whole finds acceptable. I
can't see how an FAI can be expected to do that if even humans in key
policy-making roles can't do it. Why would we be happier with getting
a machine to do these things than a government? Would we need a range
of FAIs with different U(x) that we could periodically choose between
as a society, like political parties?

> > (In contrast, I would like to see
> > machines autonomous enough to free humans from the need to engage in
> > menial tasks like manufacturing and mining, but not anything beyond
> > that -- though I'm open to persuasion as I said.)
>
> Because you fear for your safety, or because you would prefer to
> optimize your own destiny rather than becoming a pawn to your own
> volition? Or both?

Almost entirely the former, because things can go terminally wrong very
quickly indeed with any such system that I can envisage. The latter,
only to the extent that it develops into the former -- I don't object
to machines that really truly always do as I want and always will.

> > What surprises me most here is the apparently widespread presence of
> > this concern in the community subscribed to this list -- the reasons
> > for my difficulty in seeing how FAI can even in principle be created
> > have been rehearsed by others and I have nothing to add at this
> > point. It seems that I am one of many who feel that this should be
> > SIAI FAQ number 1. Have you addressed it in detail online anywhere?
>
> Not really. I think that, given the difficulty of these problems, I
> cannot simultaneously solve them and explain them.

Whoo -- that's a very unusual view in science! Most scientists seem
to find that explaining one's current thinking about a hard problem
is far and away the most effective way to refine that thinking, even
over and above the possibility that one's interlocutor may have some
useful feedback. I don't think this view is any less common among the
most stellar scientists than among mediocre ones, either. But we all
have our
individual ways of working, so I don't mean this as a criticism.

> > I'm also fairly sure that SIAI FAQ #2 or thereabouts should be the
> > one I asked earlier and no one has yet answered: namely, how about
> > treating AI in general as a WMD, something to educate people not to
> > think they can build safely and to entice people not to want to
> > build?
>
> I've had no luck at this. It needs attempting, but not by me. It has
> to be someone fairly reputable within the AI community, or at least
> some young hotshot with a PhD willing to permanently sacrifice his/her
> academic reputation for the sake of futilely trying to warn the human
> species. And s/he needs an actual technical knowledge of the issues,
> which makes it difficult.

I don't really see what you mean. Surely the only people who need to
be educated/enticed in this way are those with the capacity to have a
go at building full-blown AI? Or to build it by accident, I guess --
but even then I can't see those people being hard to educate on this.

Aubrey de Grey


