Re: AI Options.

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jul 11 2002 - 00:50:51 MDT


Mike & Donna Deering wrote:
> Let's look at the options for AI:
>
> 1. We never develop AI, or develop it so far in the future that it
> doesn't matter to any of us alive today, unless we are signed up for
> cryonics.

You can eliminate this option. A million years in the future is not so
far ahead that it "doesn't matter" to me.

> 2. We develop AI of a non-sentient tool nature and implement the
> solutions using human-level institutions; in other words, we screw
> everything up.

Equivalent to nondevelopment of AI. Prune this branch too.

> 3. We develop conscious sentient UAI; game over.

This should be option 1.

> 4. We develop conscious sentient FAI that thinks it shouldn't
> interfere with anyone's volition and we destroy ourselves while it
> stands by and advises us that this is not the most logical course of
> action.

I can't see this happening under any self-consistent interpretation of
morality, and would tend to prune this branch entirely along with
similarly anthropomorphic branches like "We develop FAI that decides it
wants animal sacrifices as proof of our loyalty." Maybe some people
want to destroy themselves. I want off Earth before that happens.

> 5. We develop conscious sentient FAI that thinks it should keep us
> from destroying ourselves or each other. This involves preserving
> each person's volition up to the point where it would be a threat to
> themselves or someone else. This involves maintaining the capability
> to control the volition of every being on the planet. This involves
> taking over the world. This involves maintaining a comfortable lead
> in intelligence over every other being on the planet.
(*)
> This involves limiting the intelligence advancement of all of us.
> Understandably, a limit that is sufficiently far away is not of much
> practical effect, but we are still left, compared with the AI, in the
> philosophical position of pets. In the future of non-biological
> intelligence, biologically derived and protected entities are little
> more than pets.

I've marked with a (*) above the point where this extended chain of
reasoning breaks down. If you assume that 1% of all computing resources
are allocated to the underlying intelligence of the substrate, and that
no single other entity holds more than one millionth of all the wealth
in the universe, there's no need to limit the intelligence advancement
of others. It is also a fallacious assumption that an intelligent
substrate must necessarily remain more intelligent than all other
parties; this could be the case, but it could also be the case that an
absolute threshold of intelligence, combined with the asymmetry in
physical access, is sufficient to defend against any and all attacks.
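
To make the arithmetic behind that first assumption concrete, here is a
minimal sketch; the 1% and one-millionth figures are the hypothetical
shares from the scenario above, not estimates of anything real, and a
single entity's wealth is treated as a stand-in for the computing
resources it can bring to bear:

    # Hypothetical shares taken from the scenario above, not real estimates.
    substrate_share = 0.01     # substrate gets 1% of all computing resources
    max_entity_share = 1e-6    # no single entity holds more than one millionth

    lead = substrate_share / max_entity_share
    print(f"Substrate lead over any single entity: {lead:,.0f}x")
    # -> Substrate lead over any single entity: 10,000x
    # As long as those shares hold, the 10,000x ratio persists no matter
    # how far any individual entity advances, so no cap on anyone's
    # intelligence growth is required to preserve the substrate's lead.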

Building a real tree of all possible options would be very tough work,
and nobody has constructed one that I agree with as yet, but Mechanus
(on Anissimov's forum) has gotten a lot farther than this:

http://bjklein.com/sing/forum/topic.asp?TOPIC_ID=489

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

