From: Thomas McCabe (firstname.lastname@example.org)
Date: Wed Mar 19 2008 - 17:43:38 MDT
On Wed, Mar 19, 2008 at 6:44 PM, Nick Tarleton <email@example.com> wrote:
> On Wed, Mar 19, 2008 at 6:13 PM, <firstname.lastname@example.org> wrote:
> > On the other list, I argued that the Friendliest response possible to a win-win ruse was to
> > a) repeatedly declare that anyone caught running such a ruse HAD TO and WOULD be punished: restitution would be extracted to the exact extent of the expected utility of their ruse, PLUS they would be ineligible to receive similar restitution from others for an equal amount of utility (any such restitution owed to them being forced into a common pool to pay the expenses of those extracting the restitution)
> > AND
> > b) *always* follow through on the declaration
> > That's the best/only way that I can currently come up with to make such a ruse unpalatable to an intelligent self-interested unFriendly.
> That - like all reciprocal checks on unFriendliness - relies on _being
> able to punish_, which requires that all entities are more or less
> similarly powerful, and no entity can kill a really large number of
> others before being stopped. It works well enough between humans, not
> at all between humans and superintelligences.
It *doesn't* work well enough, even among humans. During the past
century, reciprocal punishment has resulted in two major world wars,
several major genocides (and many minor ones), thousands of
nuclear-tipped ICBMs, and countless small-scale feuds. The only reason
this system is still in place is that the alternative (world
government) is too prone to abuse.
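For what it's worth, the incentive arithmetic behind clauses (a) and (b) quoted above can be sketched in a few lines. This is my own minimal illustration, not anything from the thread: it assumes the ruse's expected utility U is known exactly and that punishment is certain, and all names and numbers are hypothetical.

```python
def ruse_payoff(expected_utility: float, punished: bool) -> float:
    """Expected payoff of running a win-win ruse under the declared policy.

    If caught, the agent (a) pays restitution equal to the expected
    utility of the ruse, and (b) forfeits eligibility for an equal
    amount of restitution from others (paid into a common pool).
    """
    if not punished:
        return expected_utility
    restitution = expected_utility            # clause (a)
    forfeited_eligibility = expected_utility  # clause (b)
    return expected_utility - restitution - forfeited_eligibility

U = 100.0  # hypothetical expected utility of the ruse
assert ruse_payoff(U, punished=False) > 0    # without enforcement: profitable
assert ruse_payoff(U, punished=True) == -U   # with enforcement: a net loss
```

Under these assumptions the ruse nets -U rather than +U, which is why the declaration only deters if it is *always* followed through on, as clause (b) of the original post requires.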
--
- Tom
http://www.acceleratingfuture.com/tom
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT