Moral standards (was Guide AI theory)

From: Philip Goetz (philgoetz@gmail.com)
Date: Tue May 16 2006 - 09:05:50 MDT


On 5/14/06, m.l.vere@durham.ac.uk <m.l.vere@durham.ac.uk> wrote:
> 2. As morality is artificial, there is no one (or finite number of) 'correct'
> moralit(y)/(ies). Thus it would be better for each individual posthuman to be
> able to develop his/her/its own (or remain a nihilist), than have one posthuman
> morality developed by a sysop.

Even if you completely disbelieve in morality, objective ethics,
good and evil, right and wrong, consider this:

Morality is a "standard", in the same sense that Windows is a
standard. It is better FOR YOU to have a small number of standards -
even if they aren't the ones you would have developed - than to have
everyone operating according to a different one. When there are no
standards, the transaction costs are too high. This is true of moral
systems as well as of operating systems.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT