From: Jeff Herrlich (firstname.lastname@example.org)
Date: Mon Apr 14 2008 - 20:01:10 MDT
Why not make the beneficiaries all sentient/conscious beings? The evolutionarily designed aspect of selfishness may be a bit of a problem. [Not that I'm beyond selfishness, on occasion - unfortunately].
Tim Freeman <email@example.com> wrote:
From: "Nick Tarleton"
>My point is that the goal system of an FAI is not arbitrary - it's
>tightly constrained by our current values and the values implicit in
>the changes we would make to ourselves, and can't be arbitrarily
>tinkered with to resolve paradoxes without serious thought.
It seems to me that some essential features of the FAI are arbitrary.
The most important arbitrary feature I can see, for the purpose of
getting a political consensus to build the thing, is who benefits.
The set of entities that benefit might be all presently-existing
humans, or it might be some smaller set of human individuals, or it
might be all mammals, or one of many other possible choices. Does
anyone see a strategy for bringing rationality to bear on this
decision? Otherwise it's arbitrary and there will probably be a
tedious and depressing political battle over it.
-- Tim Freeman http://www.fungible.com firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT