Date: Wed Mar 19 2008 - 16:13:46 MDT
>> The nasty machine is using force when it is using its nanobots
>> AGAINST MY WILL. It is *corrupting* my will and/by forcibly altering
>> my goals.
> It's no different if it implants schema and goal pathways into your
> brain that change the way you think, through nanobots or through
> conversation. Nanobots would be the more efficient option, however. You
> label the use of nanobots as force, but I don't see that as a true
> distinction. The same effect can be accomplished through conversation.
I can accomplish the same effect of seeing you lifeless either by outliving you (conversation) or by murdering you (nanobots). Do you not see a true distinction between those two, or can you provide a reason why this is not a correct analogy?
> In my opinion, the best strategy that is in one individual's best
> self-interest, is to convince everyone else that one's actions are
> ethical, while at the same time taking advantage (getting much more
> value for little value -- giving pennies while getting dollars) under
> cover of ethics.
> I would call it a "win-win ruse". Convince the other person (or social
> group, or whatever) that the deal is win-win, when it actually is
> win-lose. In my opinion, most religions are excellent at practicing the
> "win-win ruse".
Excellent! I was wondering when someone would come up with that here. The *only* fundamental problem with the "win-win ruse" is that it truly would work -- to the extent that human beings are already extensively on guard against it. Why do you think that Thomas McCabe is having such a problem with me? Why do you have such a problem (one that I share) with most religions? Why do you think that we instinctively offer forgiveness for errors but start getting vengeful when we've been intentionally misled? What is the basis for the truism "If it looks too good to be true, it probably is"?
On the other list, I argued that the Friendliest response possible to a win-win ruse was to
a) repeatedly declare that anyone caught running such a ruse HAD TO and WOULD be punished: restitution would be extracted to the exact extent of the expected utility of their ruse, PLUS they would be ineligible to receive similar restitution from others for an equal amount of utility (instead being forced to pay that amount into a common pool covering the expenses of those extracting the restitution)
b) *always* follow through on the declaration
That's the best (and only) way I can currently come up with to make such a ruse unpalatable to an intelligent, self-interested unFriendly.
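The deterrence logic above can be sketched numerically. This is a minimal toy model, not anything from the original post: it assumes the ruse yields a fixed gain, that being caught means both full restitution of that gain and forfeiture of an equal amount to the common pool, and that detection happens with some probability. The function name and the specific numbers are illustrative assumptions.

```python
def ruse_expected_value(gain, p_caught):
    """Expected value of attempting a 'win-win ruse' under the
    declared punishment scheme (a hypothetical simplification):
    if caught, restitution equal to the gain is extracted AND an
    equal amount is forfeited to the common pool, for a net of
    gain - gain - gain = -gain.
    """
    caught_outcome = gain - gain - gain   # keep gain, repay it, then forfeit as much again
    uncaught_outcome = gain               # ruse succeeds undetected
    return p_caught * caught_outcome + (1 - p_caught) * uncaught_outcome

# Under these assumptions, the ruse stops paying once the odds of
# being caught reach one half:
print(ruse_expected_value(100.0, 0.4))  # → 20.0  (still profitable)
print(ruse_expected_value(100.0, 0.6))  # → -20.0 (deterred)
```

One consequence of this toy model is that the "restitution plus equal forfeiture" penalty deters a purely self-interested agent only when detection is likelier than not, which is why part b) -- *always* following through -- matters: credible, certain enforcement is what drives the effective detection probability up.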
This archive was generated by hypermail 2.1.5 : Mon May 20 2013 - 04:01:20 MDT