RE: [sl4] Simple friendliness: plan B for AI

From: Piaget Modeler (piagetmodeler@hotmail.com)
Date: Sun Nov 14 2010 - 21:09:39 MST


There isn't just one notion of free-will morality. Obviously Gordon Worley has never been to prison.
Consequences must be defined for what society deems "anti-social" behavior; otherwise people will
run amok. We have devices such as religion and the penal system to keep people in line. For nation-states
that decide to redefine the notion of morality in their favor, we have wars. What will we use
for robots that determine they can do as they please?

Eliezer Yudkowsky's message is that we need to design friendly AIs, and that this will solve our potential
problem. This is optimistic thinking. He says, "It doesn’t make sense to ask whether “AIs” will be
friendly or hostile." Since we have the creative power, we can design them to be friendly.
Unfortunately, this view neglects evolution. No matter how we design systems initially, an
evolutionary system may evolve (positively or negatively) well beyond the scope and expectations
of its creators.

In any reasonable risk analysis, one should look at best-case, worst-case, and average-case
scenarios, expecting the average case to occur. If today our average case is that governments
maintain armies and employ scientists who devise ever more sophisticated weapons,
including robots and AI systems, for the purpose of annihilating their "enemies", then the
average-case projection is that this will continue into our future. The best case is that
we as a species learn to forgo war before the Singularity. This is unlikely. The worst case
is that the AI that becomes superintelligent is military AI.

My original question remains unanswered. If Asimov's three laws are insufficient, or are considered
fictional and hence irrelevant to building real robots and AI, then how do we address the question
of military AI, even now as more sophisticated robots are being deployed to the battlefield?

PM.

> Date: Sun, 14 Nov 2010 17:56:37 -0700
> To: sl4@sl4.org
> Subject: Re: [sl4] Simple friendliness: plan B for AI
> From: tim@fungible.com
>
> From: Piaget Modeler <piagetmodeler@hotmail.com>
> >What do we do about Asimov's three laws where military AI is concerned?
>
> Ignore them. They were contrived to give Asimov interesting conflict
> he could write about, not to solve any real-world problems. This is
> discussed at http://www.asimovlaws.com/.
>
> Eliezer says not to generalize from fiction, and I agree. See
> http://www.imminst.org/forum/index.php?s=&act=ST&f=67&t=1097&st=0
> --
> Tim Freeman http://www.fungible.com tim@fungible.com


