Can't afford to rescue cows (was Re: Arbitrarily decide who benefits)

From: Tim Freeman (tim@fungible.com)
Date: Tue Apr 15 2008 - 19:37:45 MDT


From: Jeff Herrlich <jeff_herrlich@yahoo.com>
>Why not make the beneficiaries all sentient/conscious beings? The
>evolutionarily designed aspect of selfishness may be a bit of a
>problem. [Not that I'm beyond selfishness, on occasion -
>unfortunately].

The choice of jargon here sounds suspiciously like an attempt to
implement Mahayana Buddhism. Cool!

I think I know how to deal with selfishness. There are two types:

* Simply not caring about the other person. For example, I want me to
be fed but I don't care much whether you get fed. If the AI cares
about me, and about you, it will tend to try to get both of us fed.
Hunger provides more-than-linear motivation as you get hungrier; the
AI is likely to figure this out and feed us until we're about equally
hungry, assuming it cares about us equally (first toy sketch after
this list). This is relatively simple.

* Wanting higher status than the other person. For example, I want a
bigger car than you, and if you get a bigger car I'll be less happy.
To cope with this, the AI has separate parameters for respect and
compassion. The AI's respect is its desire to avoid doing harm to
others (as compared to what would happen to them if the AI took no
action), and its compassion is its desire to benefit others. The trick
is to tune the respect parameters (second sketch after this list) so
the AI doesn't get involved in trivial conflicts (such as our
car-buying contest) but does get involved to prevent violent crime
(you don't want respect for the mugger-to-be to stop it from taking
his gun as he's travelling toward a foreseeable mugging). More pesky
parameters to arbitrarily decide. :-(
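
Here are toy sketches of both mechanisms in Python. Everything in them
(the quadratic hunger curve, the welfare numbers, the parameter values)
is my own illustrative assumption, not something pinned down above;
they're just enough to show the two effects.

First, convex hunger plus equal caring equalizes hunger:

```python
# Divide a fixed amount of food between two people whose hunger-
# disutility grows more than linearly (here: quadratically).

def hunger_pain(unmet):
    """Convex disutility: twice as hungry hurts more than twice as much."""
    return unmet ** 2

def total_welfare(fed_a, need_a, need_b, food):
    """Caring about both people equally: just sum their (negative) pain."""
    fed_b = food - fed_a
    return -(hunger_pain(max(need_a - fed_a, 0)) +
             hunger_pain(max(need_b - fed_b, 0)))

def best_split(need_a, need_b, food, steps=1000):
    """Brute-force search over ways to divide the food."""
    return max((food * i / steps for i in range(steps + 1)),
               key=lambda fed_a: total_welfare(fed_a, need_a, need_b, food))

print(best_split(need_a=10, need_b=6, food=8))
# -> 6.0: A gets 6, B gets 2, and both end up exactly 4 units hungry.
```

Second, the respect/compassion split, with harm measured against the
no-action baseline. The same scoring rule stays out of trivial
conflicts or intervenes in the mugging depending only on how respect
is tuned:

```python
def score(outcome, baseline, respect, compassion=1.0):
    """Score an action's per-person welfare `outcome` against the
    no-action `baseline`: welfare counts via compassion, and leaving
    someone worse off than the baseline is penalized by respect."""
    total = 0.0
    for acted, left_alone in zip(outcome, baseline):
        harm = max(left_alone - acted, 0)
        total += compassion * acted - respect * harm
    return total

baseline = [-100, 10]  # no action: the mugging happens (victim, mugger)
disarm = [0, -5]       # take the gun first: victim unharmed, mugger out a gun

for respect in (5.0, 10.0):
    act = score(disarm, baseline, respect)
    sit = score(baseline, baseline, respect)  # inaction harms nobody (vs. itself)
    print(respect, "intervene" if act > sit else "stay out")
# respect=5.0 -> intervene; respect=10.0 -> stay out. Tune respect too
# high and the AI won't stop the mugging either.
```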

But on to "sentient/conscious"...

The dictionary I looked at defined "sentient" to mean "conscious", so
there's only one word there to wonder about. I'm not sure I have a
definition of "conscious" that I'd be willing to try to implement.

But never mind that; I'm too conflict-averse to make the attempt. The
Buddhists say cows (and other mammals) are conscious. If humans eat
cows, and my AI is influenced more by empathy for sentient beings than
by respect for cow butchers, it will try to stop the cows from being
eaten. The problem is that the humans have guns and will start
shooting at the AI (or its implementor) if it stops them from killing
and eating cows. The cows, in contrast, do not have guns. So trying
to save the cows would make the AI (and its implementor) targets, for
no political benefit.

So no Buddhism implementation from me today.

I think we need some natural balance between political expediency and
some fuzzy idea of consensus morality. Maybe if it looks human and
passes some genetic test for being human, we let it benefit; then,
once someone figures out how to create the aforementioned
brain-damaged horde to manipulate the AI, we retroactively fix the
population of beneficiaries to the set of humans in existence before
the horde-creation began (sketched below). Leave the issue of how to
deal with uploads until later.
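
Here's a minimal sketch of that retroactive fix in Python. The data
model, the test predicates, and the freeze trigger are all my own
illustrative assumptions; nothing above specifies them.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Being:
    looks_human: bool
    passes_genetic_test: bool
    came_into_existence: datetime

class BeneficiaryRegistry:
    """Benefit anything that passes the human tests, until manipulation
    is detected; then retroactively freeze the set to beings that
    already existed before the horde-creation began."""

    def __init__(self):
        self.cutoff = None  # set when horde-creation is detected

    def freeze(self, horde_creation_began):
        self.cutoff = horde_creation_began

    def is_beneficiary(self, being):
        # Stand-ins for "looks human / passes some genetic test".
        if not (being.looks_human and being.passes_genetic_test):
            return False
        if self.cutoff is not None and being.came_into_existence >= self.cutoff:
            return False  # members of the manufactured horde
        return True
```

The open question is where freeze() gets its timestamp: detecting the
horde-creation is doing all the work here.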

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

