RE: guaranteeing friendliness

From: Herb Martin (HerbM@LearnQuick.Com)
Date: Fri Dec 02 2005 - 20:50:15 MST


> > Much like believing you can keep terrorists from taking
> > down an airplane by taking away sewing scissors from
> > ordinary passengers.
>
> This is an astoundingly bad (attempt at) an analogy, to the
> point of being actively misleading. Aside from the attempt
> to import random political and emotional baggage, and the
> usual reasons why it's fairly futile to try and evaluate what
> wildly transhuman intelligences can and can't do, the task of
> preventing general intelligences with harmful goal systems
> self-improving to a dangerous level is nothing like an obscure
> physical security issue faced by some contemporary hominids.
>
> * Michael Wilson

I have to admit enjoying that my "astoundingly bad analogy"
resulted in your direct response making my point for me:

> ...it's fairly futile to try and evaluate what
> wildly transhuman intelligences can and can't do, the task of
> preventing general intelligences with harmful goal systems
> self-improving to a dangerous level is nothing like an obscure
> physical security issue faced by some contemporary hominids.

Exactly.

After the Singularity we have no real hope of predicting
friendliness -- or knowing what initial conditions will
necessarily favor such.

You can (and should) TRY to develop, or encourage the
development of, friendly AI, but such cannot be guaranteed.

Beyond the Singularity lies unknowable territory
(hence the name), and preceding the Singularity are competing
groups of human beings with differing goals and ideas of
friendliness.

--
Herb Martin


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT