Re: Arbitrarily decide who benefits (was Re: Bounded population)

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Mon Apr 28 2008 - 02:22:29 MDT


> > The set of entities that benefit might be all presently-existing
> > humans, or it might be some smaller set of human individuals, or it
> > might be all mammals, or one of many other possible choices. Does
> > anyone see a strategy for bringing rationality to bear on this
> > decision?
> >
> What decision? We are building something many orders of magnitude more
> intellectually capable than ourselves and hopefully it will not eat our
> face. It is a bit odd to be worrying about which primates or other
> biological creatures it will benefit, as if we are likely to have much
> reasonable control over that.

The emphasis is on "WE are building..." If we assume that we will have
some degree of control over the final ethical system of the AI (if we
don't, then we are screwed anyway), then we have some degree of
control over whether other animals are included. Since we probably
won't be able to try more than once, knowing what we need to try is
vital.

> It is certainly a con job to sell it to the
> public and claim tax expropriations to build it on such a basis of being for
> the benefit of the "taxpayers" or some other popular target requiring
> spending gobs of other people's money.

I don't quite see the argument here (unless you're arguing that the
chances of an AI eliminating us are high). If the AI will refrain from
eliminating/enslaving/lobotomising us, and if it provides great
benefits to all, then this seems to have the strongest possible case
for coercive taxation (as the expected benefits far outweigh such
things as social security or a functioning police force).

> > Why not make the beneficiaries all sentient/conscious beings?
> >
> What the heck does that even mean? Benefits according to whom?

Survival, and freedom from excessive pain, would be reasonable
benefits even for dumb animals.

> To the
> best guesstimate of the best benefits each would desire if each of the
> beings was much smarter and more sane and more generally enlightened than it
> is or perhaps even dreams or can dream of being? ARGH. Hopefully the
> AGI will not be nearly so sloppy in its thinking. Hopefully we will not
> wait to build AGI until we get sufficient political agreement that we have a
> workable plan for uplifting the sea slug.

I hope we don't put off building an AGI until we know how to uplift
the sea slug. I do hope, however, that we design an AGI that will
display some ethical behaviour towards at least some animals.

Stuart



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT