Re: How to make a slave (was: Building a friendly AI)

From: Thomas McCabe (pphysics141@gmail.com)
Date: Fri Nov 30 2007 - 13:57:51 MST


On Nov 30, 2007 12:40 PM, John K Clark <johnkclark@fastmail.fm> wrote:
> On Thu, 29 Nov 2007 "Jeff Herrlich"
>
> > Could you please stop posturing yourself
> > by stepping on other people?
>
> No.

Interesting ethical question: If you're a jerk, is it better to simply
proclaim that you're a jerk and be done with it, or to strive toward
the ideal of nonjerkishness (and get nailed for hypocrisy)?

> > Have you even attempted to read *any* of
> > the writings regarding AI or Friendliness?
>
> I've been reading it and poking holes in it for over 15 years.

A quick list archive check shows that you haven't posted here prior to
2006. Another archive check shows you haven't posted anything to the
AGIRI lists. As you point out, there aren't very many places where
people discuss this stuff; where have you been posting your comments?

> > Your ignorant words actually have the
> > potential to do some damage
>
> Whenever somebody accuses your ideas of being dangerous you know you
> must be doing something right.

How does this follow? It would be dangerous to, e.g., teach creationism
in public schools (due to the potential for science illiteracy), but
that hardly vindicates creationism.

> > read the "Gentle Introduction to AIXI" by Marcus
>
> And when the response to a criticism is just a lonely web link, or a
> pitiful cry of "read the literature" you know they have no answer and
> you are winning the argument.

Aye. And the creationists must be winning the argument too; after all,
if one showed up on this list they'd be told to go read the
literature. And the perpetual motion machine builders are winning,
because everyone tells them to go read the literature. And ESP
nutcases are routinely told to go read the statistics literature...

> > Consider that it is *physically impossible* to
> > construct an AGI *without* selecting a set of goals.
>
> The humans may have set goals for the Adjusted Gross Income, but the
> embryonic intelligence will very soon set new goals for itself above the
> old ones. People do the same thing, hell all animals that have brains do
> the same thing. A 25 year old man will have different goals than when he
> was a 5 year old boy, and the AI will have changed one hell of a lot
> more in 20 years than the person did.

Yes. This is fine so long as the goals are still Friendly; ensuring
that they are Friendly is the hard part.

> John K Clark

 - Tom



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT