Re: Destruction of All Humanity

From: Jef Allbright (jef@jefallbright.net)
Date: Tue Dec 13 2005 - 01:09:52 MST


On 12/12/05, micah glasser <micahglasser@gmail.com> wrote:
> You seem to be indicating that an AI goal system should include the
> governance of human beings. I think that this is a terrible mistake (please
> disregard if I have misinterpreted). In my opinion the goal system problem
> has already been solved by philosophical ethics.

There are many competing theories of philosophical ethics. Most of
them are loaded with 18th and 19th century premises and assumptions,
and most exhibit an intrinsic duality between man and nature that
appears almost childlike in its naivete when seen from a traditional
eastern philosophical perspective, or from the perspective of modern
evolutionary and cognitive science together with a developing
appreciation of dynamical systems.

> The goal is the greatest amount of freedom for the most people.

You might consider that, rather than maximizing freedom, a balance of
freedoms with responsibilities is necessary at each step of the way as
we grow toward the kind of world that best represents the shared
values of humanity. While such growth does tend to lead toward
increasing freedoms, in no sense is increasing freedom the primary
goal.

> This implies, I think, that an AI
> should be directed by the categorical imperative just as humans are.

While the categorical imperative is one theory of how humans *should*
direct their moral decision-making, few people would agree that this
is how humans *are* directed.

Kant's categorical imperative has some serious weaknesses, visible
even in its day: for example, the idealist notion of maximizing
rather than satisficing, or of an imperative that is truly categorical
in its application, which entails telling the truth even to a crazed
gunman asking where your children are hiding. Yes, Kant actually
defended this absolute deontological point of view.

> The only way to ensure that an AI will be able to successfully use this
> logic is if its own highest goal is freedom. This is because the
> categorical imperative restricts actions that one would not will to be
> permissible in general. The categorical imperative also restricts
> treating any rational agent as only a means to an end - in other words as
> a tool. Therefore, according to this ethical system, we must treat any AI
> life forms as people with all the rights of people and demand that they
> treat other rational agents the same way.

Another example of naive idealism is thinking that humans are
rational agents, or that all such rational agents are somehow equal
(by what measure?) or should be categorically treated *as if* they
were. Some of these idealistic principles served well in their
particular time and place, as a counter to the oppressions of the day,
but that does not mean they are universally true in an evolving and
expanding context.

> This is a simple solution to an otherwise very
> complicated problem. It's a fairly simple logic that can easily be
> programmed into an AI.

There is no simple solution for survival and growth within a
coevolutionary environment, whether you conceive it to be "maximizing
individual freedom" or "promoting the growth of shared human values."
I think society will come to agree that there are relatively simple
principles for developing approximate solutions at each step of the
way: principles such as the scientific method, for increasingly
objective, instrumental knowledge to implement our choices, and
principles of synergy, creativity and cooperation to guide our choices
toward growth in terms of our values. But the ultimate *solution* will
always be ahead of us, with even its definition evolving.

- Jef
http://www.jefallbright.net
