From: Nick Tarleton (firstname.lastname@example.org)
Date: Wed Mar 19 2008 - 16:44:13 MDT
On Wed, Mar 19, 2008 at 6:13 PM, <email@example.com> wrote:
> On the other list, I argued that the Friendliest response possible to a win-win ruse is to:
> a) repeatedly declare that anyone caught running such a ruse HAS TO and WILL be punished, by extracting restitution equal to the expected utility of their ruse, PLUS making them ineligible to receive similar restitution from others for an equal amount of utility (any such restitution being forced into a common pool that pays the expenses of those doing the extracting); and
> b) *always* follow through on that declaration.
> That's the best (and only) way I can currently come up with to make such a ruse unpalatable to an intelligent, self-interested unFriendly.
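[The quoted deterrence rule can be sketched as a toy expected-utility calculation. This is not from the original email; the function, its parameters, and all numbers are hypothetical, and it simply assumes the penalty is restitution plus an equal forfeited eligibility, each scaled to the ruse's expected gain.]

```python
def ruse_expected_utility(gain, p_caught, restitution_factor=1.0, forfeit_factor=1.0):
    """Expected utility of attempting the ruse under the always-punish rule:
    if caught, the perpetrator pays restitution equal to the ruse's expected
    gain (restitution_factor) and additionally forfeits an equal amount of
    future restitution eligibility (forfeit_factor)."""
    # Penalty on capture: restitution + forfeited eligibility, both scaled
    # to the expected utility of the ruse itself.
    penalty = (restitution_factor + forfeit_factor) * gain
    return (1 - p_caught) * gain + p_caught * (gain - penalty)

# With certain detection and follow-through, the ruse nets gain - 2*gain,
# i.e. it is strictly unpalatable.
print(ruse_expected_utility(gain=100.0, p_caught=1.0))  # -100.0

# With unreliable detection the ruse can still pay off, which is why the
# rule hinges on actually being able to catch and punish.
print(ruse_expected_utility(gain=100.0, p_caught=0.4))
```

[Note that the result turns positive when detection is unlikely, which connects to the reply below: the scheme only deters if punishment is reliably enforceable.]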
That - like all reciprocal checks on unFriendliness - relies on _being
able to punish_, which requires that all entities are more or less
similarly powerful, and no entity can kill a really large number of
others before being stopped. It works well enough between humans, not
at all between humans and superintelligences.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT