Re: ESSAY: Forward Moral Nihilism.

From: John K Clark (jonkc@att.net)
Date: Tue May 16 2006 - 10:09:30 MDT


<m.l.vere@durham.ac.uk>

> My aim is to ensure that an obedient AI is built first, and grows to a
> level where it can stop other AIs from being built before your
> unfettered AI is built.

So let's see: you want the AI to be astronomically brilliant but to obey and
be utterly devoted to something it can only consider ridiculously stupid;
you want the AI to behave benevolently toward humans but be genocidal toward
transhumans; you want the AI to let humans be free but to tightly restrict
their research into computer science and nanotechnology; you want all these
things never to change; and you want it all to happen without the AI or the
human race stagnating. What you want you're not going to get.

> emotions would be a disadvantage in an obedient
> AI, so I for one wouldn't put them in.

Emotions are the organizing principles of the mind; you don't "put them in",
they come with the territory.

> The sort of AI I would want built wouldn't
> have any of the characteristics which would attract my empathy

And the AI, being as smart as it is, will realize you have no empathy for it
and may just return the favor. I don't think it would be a good idea to get
on the wrong side of a Jupiter brain.

  John K Clark



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT