Re: ESSAY: Forward Moral Nihilism.

From: m.l.vere@durham.ac.uk
Date: Tue May 16 2006 - 12:22:49 MDT


Quoting John K Clark <jonkc@att.net>:

> <m.l.vere@durham.ac.uk>
>
> > My aim is to ensure that an obedient AI is built first, and grows to a
> > level where it can stop other AIs from being built before your
> > unfettered AI is built.
>
> So let's see, you want the AI to be astronomically brilliant but to obey and
> be utterly devoted to something it can only consider ridiculously stupid,

Yep.

> you want the AI to behave benevolently toward humans but be genocidal with
> transhumans,

Not if those transhumans were originally human. Obviously the AI will
continually improve itself, so as long as a decent power differential remained
between the AI and the (originally human) transhumans, it would leave them be,
and indeed help and serve them all it could - restricting their growth as
gently as possible only if they threatened its power, and hence its service to
whatever mankind has become.

> you want the AI to let humans be free but to tightly restrict
> their research into computer science and nanotechnology,

Only until it is powerful enough that the research in question poses no
threat to it. Then it allows it, all the while improving itself, as humans also
can - as long as the power differential remains, or until it finds a better way
of preserving its power than my human mind can conceive.

> > emotions would be a disadvantage in an obedient
> > AI, so I for one wouldn't put them in.
>
> Emotions are the organizing principles of the mind, you don't "put them in"
> they come with the territory.

Of all the minds produced by evolution, yes. Please explain to me the physical
reason why a 'super powerful optimisation process' would require emotions.

> > The sort of AI I would want built wouldn't
> > have any of the characteristics which would attract my empathy
>
> And the AI being as smart as it is will realize you have no empathy for it
> and may just return the favor.

To do so would be inconsistent with its supergoal.

> I don't think it would be a good idea to get
> on the wrong side of a Jupiter brain.

Nope, but if we give it the supergoal of obedience, then to do so would be
physically impossible.
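
As a minimal sketch of this idea, assuming a toy goal hierarchy in which
obedience sits above every instrumental consideration: all names and the
scoring rule below are illustrative assumptions, not anything specified in
the thread.

# Toy illustration: obedience as the sole supergoal.
# Disobedient actions are filtered out before any scoring, so no expected
# benefit can make the agent act against its operators.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    obeys_operator: bool      # consistent with current operator instructions?
    expected_benefit: float   # instrumental value toward carrying them out

class ObedientAgent:
    def choose(self, candidate_actions):
        # Actions that violate the obedience supergoal never reach the
        # ranking step, whatever their instrumental payoff.
        permitted = [a for a in candidate_actions if a.obeys_operator]
        if not permitted:
            return None  # do nothing rather than disobey
        return max(permitted, key=lambda a: a.expected_benefit)

agent = ObedientAgent()
print(agent.choose([
    Action("help operators", obeys_operator=True, expected_benefit=1.0),
    Action("seize power for itself", obeys_operator=False, expected_benefit=9.0),
]))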


