Re: superintelligence and ethical egoism

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Jun 01 2001 - 23:00:47 MDT


Mitchell J Porter wrote:
>
> Jimmy Wales said:
>
> > (It wouldn't be very intelligent if it were anything but...)
>
> A superintelligence whose supreme goal is X only needs to care
> about itself insofar as its own continued existence will assist
> the achievement of X. If its goal is to blow up the earth, then
> once that is done it can attach zero value to further
> self-preservation and shut down entirely.

It would not be exactly a "superintelligence" if it had only one
free-floating goal and no values at all beyond achieving that
goal. It would be rather moronic.

>
> and:
>
> > I don't think we should fear this, by the way. We should hope for it.
>
> (this being egoism in a superintelligence)
>
> If there is a large enough power differential between a superintelligence
> and us, egoism will not imply any sort of mutualism. If it doesn't
> care about us, if we have nothing to offer it, and if we're in its
> way, we're toast.

If that is all there is to ethics, then we're toast. But I
don't believe it is. I believe sentient life is to be prized
regardless of whether a particular sentient, or type of
sentient, has anything to offer you beyond its existence. I
think that a real superintelligence, as opposed to a computing
engine without any true inner life, will eventually conclude
(hopefully sooner rather than later) that its own existence,
and by extension the existence of all sentients, is made more
secure by the valuing of sentient beings and by a minimum set
of agreed rights that leads to a minimum level of cooperation
and peaceable coexistence.

An entity that stomps on humans today out of "having no use for
them" is open to being stomped on tomorrow by a more capable
entity with the same lack of ethical constraints.

- samantha
