Re: Can't afford to rescue cows (was Re: Arbitrarily decide who benefits)

From: Samantha Atkins (sjatkins@gmail.com)
Date: Sat Apr 26 2008 - 02:48:31 MDT


Tim Freeman wrote:
> From: Jeff Herrlich <jeff_herrlich@yahoo.com>
>
>> Why not make the beneficiaries all sentient/conscious beings? The
>> evolutionarily designed aspect of selfishness may be a bit of a
>> problem. [Not that I'm beyond selfishness, on occasion -
>> unfortunately].
>>
>
> The choice of jargon here sounds suspiciously like an attempt to
> implement Mahayana Buddhism. Cool!
>
> I think I know how to deal with selfishness. There are two types:
>
> * Simply not caring about the other person. For example, I want to
> be fed, but I don't care much whether you get fed. If the AI cares
> about me, and about you, it will tend to try to get both of us fed.
> Hunger provides more-than-linear motivation as you get hungrier, and
> the AI is likely to figure this out and feed us until we're about
> equally hungry, assuming it cares about us equally. This is relatively simple.
>
> * Wanting higher status than the other person. For example, I want a
> bigger car than you, and if you get a bigger car I'll be less happy.
> To cope with this, the AI has separate parameters for respect and
> compassion. The AI's respect is its desire to avoid doing harm to
> others (as compared to what would happen to them if the AI took no
> action), and compassion is the desire to benefit others. The trick is
> to tune the respect parameters so the AI doesn't get involved in
> trivial conflicts (such as our car-buying contest) but does get
> involved to prevent violent crime (you don't want respect for the
> mugger-to-be to stop the AI from taking his gun as he's traveling
> toward a foreseeable mugging). More pesky parameters to arbitrarily decide. :-(.
>
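
To make Tim's first mechanism concrete: if hunger's disutility grows
more than linearly, a planner that weighs everyone equally really does
feed people until they are about equally hungry, because each unit of
food does the most good for whoever is hungriest. A toy sketch in
Python; the quadratic curve and the numbers are my own illustrative
assumptions, not Tim's:

def disutility(hunger):
    # more-than-linear: each extra unit of hunger hurts more than the last
    return hunger ** 2

def allocate(hungers, rations, step=1.0):
    hungers = list(hungers)
    for _ in range(int(rations / step)):
        # give each unit of food wherever it relieves the most disutility;
        # with a convex disutility that is always the hungriest person
        i = max(range(len(hungers)),
                key=lambda j: disutility(hungers[j]) - disutility(hungers[j] - step))
        hungers[i] -= step
    return hungers

print(allocate([10.0, 2.0], rations=8))   # -> [2.0, 2.0]: fed until about equally hungry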
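
The second mechanism reads as a scoring rule over interventions, judged
against the no-action baseline: harms the AI itself causes are weighted
by respect, benefits by compassion, and the respect weight is tuned
high enough to keep it out of trivial conflicts. A rough sketch; the
weights and outcome numbers here are purely illustrative:

RESPECT = 5.0      # weight on harm the AI inflicts relative to doing nothing
COMPASSION = 1.0   # weight on benefit to others

def score(action, baseline):
    total = 0.0
    for person in baseline:
        delta = action[person] - baseline[person]
        if delta < 0:
            total += RESPECT * delta       # harm done by intervening
        else:
            total += COMPASSION * delta    # benefit of intervening
    return total

# Car-buying contest: seizing my rival's bigger car helps me a little and
# harms him a lot, so the score is 1 - 15 = -14 and the AI stays out.
print(score({'me': 1, 'rival': -3}, {'me': 0, 'rival': 0}))

# Foreseeable mugging: disarming the mugger costs him a little and spares
# the victim a lot, so the score is -5 + 9 = +4 and the AI steps in.
print(score({'mugger': -1, 'victim': 9}, {'mugger': 0, 'victim': 0}))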

If I were the AGI (and thought more or less like I do today) and were
charged with the ultimate well-being of all sentients, then my solution
would be simple (a rough sketch in code follows the list):

1) upload all sentients into worlds identical to their current worlds,
or into worlds of their choice for the more evolved sentients;
2) by design, all sentients have up-to-the-moment backups;
3) let them live by whatever rules (or defaults from their previous
conditions) that they choose;
4) if they off themselves, 'die', or come to serious injury, they are
reinstated, likely without much memory of what came before, but loaded
up with the issues from before to work through;
5) churn, so that each sentient becomes more and more enlightened /
reaches its highest potential at its own pace;
6) interfere only as judiciously and minimally as possible to avoid
forcing the outcome to something other than what the sentient would
ultimately choose.
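
Read as a protocol, steps 1-6 are just an overseer loop. A rough
sketch, in which Sentient, live_by_own_rules, and the probabilities are
all hypothetical stand-ins for things nobody knows how to build yet:

import copy
import random

class Sentient:
    def __init__(self, name):
        self.name = name          # step 1: lives in a world of its choosing
        self.memories = []
        self.issues = []
        self.alive = True
        self.enlightened = False

def live_by_own_rules(s):
    # step 3 placeholder: whatever the sentient chooses to do
    s.memories.append('another day')
    if random.random() < 0.10:
        s.alive = False           # it may 'die' or come to serious injury
    elif random.random() < 0.05:
        s.enlightened = True      # or inch toward its highest potential

def oversee(sentients):
    backups = {}
    while not all(s.enlightened for s in sentients):   # step 5: churn, each at its own pace
        for i, s in enumerate(sentients):
            backups[s.name] = copy.deepcopy(s)         # step 2: up-to-the-moment backup
            live_by_own_rules(s)                       # steps 3 and 6: hands off
            if not s.alive:                            # step 4: reinstatement
                restored = backups[s.name]
                restored.memories = []                 # little memory of what came before
                restored.issues.append('unresolved')   # but the old issues carry over
                sentients[i] = restored

souls = [Sentient('a'), Sentient('b')]
oversee(souls)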

In short: a full VR multiverse with perfect reincarnation, overseen by
a fully benevolent God/Mind. Otherwise I think universal or perfect
Friendliness is a rather nasty farce.

- samantha


