Re: [sl4] Simple friendliness: plan B for AI

From: Luke Griffiths (wlgriffiths@gmail.com)
Date: Sun Nov 14 2010 - 21:26:58 MST


Friendly AI needs to be more powerful than unfriendly AI. AI should be
friendly for the same reasons humans are friendly: it is an
evolutionary advantage.

Friendliness is either (a) an inherent output of proper perception or
(b) a useless pipe dream. All systems of control are hackable. All
intelligent systems respond to incentives. If being an asshole is
advantageous, we are doomed.

We should focus foremost on making machines intelligent enough to
recognize the inherent advantages of friendliness. For instance, note
that we have not yet experienced nuclear war. Nations figured out
pretty quickly that all conflict is costly. Study hippies if you want
to make friendly AI. Hippies somehow raise wonderful, loving children
without putting straitjackets on them. How?

Study Zen to understand that individuality is at best a mildly
convenient simplification. Many useful tasks can be performed without
a sense of "I." If a machine is to be intelligent, it must experience
pleasure. If a thing is intelligent and emotional, it will have empathy
unless explicitly programmed not to.

Billions of years of evolution programmed us to reproduce because
biology could not give rise to modularity. Hence a cycle of birth and
death, complete with new learning stages at each generation, was the
method that solved it. A strong AI's incentives
and choices will resemble those of a species more than those of an
individual. What is death to a species? What is competition to a
species? Are friendly species successful?

Most of our programming is based on finding a mate, making sure that
mate doesn't mate with anyone else, and mating with that mate before
we die. A machine will have none of that to deal with. Think bigger.
Take some acid or something. Enhance your creativity and imagination
to tackle strong AI. We are becoming gods, yet we persist in thinking
like auto mechanics.

Give me a machine that knows when it's complying with the law, and I'll
give you strong AI.

Sent from my iPhone

On Nov 14, 2010, at 9:08 PM, Tim Freeman <tim@fungible.com> wrote:

> From: Piaget Modeler <piagetmodeler@hotmail.com>
>> What do we do about Asimov's three laws where military AI is concerned?
>
> Ignore them. They were contrived to give Asimov interesting conflict
> he could write about, not to solve any real-world problems. This is
> discussed at http://www.asimovlaws.com/.
>
> Eliezer says not to generalize from fiction, and I agree. See
> http://www.imminst.org/forum/index.php?s=&act=ST&f=67&t=1097&st=0
> --
> Tim Freeman http://www.fungible.com tim@fungible.com


