RE: Defining Right and Wrong

From: Billy Brown (BBrown@RealBusinessSolutions.com)
Date: Sat Nov 23 2002 - 20:39:58 MST


Ben Goertzel wrote:
> Michael, as I see it, ethical/moral values cannot be tested according to
> "how usefully they describe reality."
>
> They are prescriptive rather than descriptive.

I think that is mostly an artifact of how humans think about morality. We
tend to focus on "Fundamental Principles" that are really just heuristics,
and forget about the goal of the whole enterprise.

But really, most systems of ethics are attempts to solve a global
optimization problem that could be loosely described as "for all significant
entities, maximize the extent to which each entity achieves positive
outcomes while minimizing negative outcomes". In practice, human psychology
and sociology conspire to ensure that the definitions of "significant
entities", "positive outcome" and "negative outcome" become political
footballs, but that isn't a necessary failing. If you get out of the
business of telling other people what they 'ought' to want, and refrain from
making initial assumptions about whose opinion matters, you can make the
problem fairly objective.

For an entity with perfect knowledge and infinite processing power, in a
finite deterministic universe, perfect ethics would boil down to a fairly
well-defined procedure:

1) Determine every possible set of actions that could be taken between the
present and the end of the universe. Create a model of the entire universe
corresponding to each set of actions.
2) For each of these hypothetical futures, pick out the life history of
every entity capable of having subjective experiences. Then determine the
desirability of that hypothetical future by the subjective standards of
each of these entities.
3) Pick the hypothetical future with the highest net desirability, and
take the actions that lead to it becoming reality.

Granted, there are some problems with this approach (for example, how do you
weigh the preferences of goldfish, bald eagles, humans, and transhuman
entities against each other?), but from this perspective they seem more like
practical engineering challenges than fundamental imponderables.
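
To make the structure concrete, here is a rough Python sketch of that
procedure. Every helper in it (enumerate_action_sequences, simulate_universe,
sentient_entities, subjective_desirability) is a made-up stand-in for
something no real agent could compute; trivial stubs are included only so the
skeleton runs.

def enumerate_action_sequences():
    # Stub for "every possible set of actions between now and the end of
    # the universe": here, just two toy alternatives.
    return [("wait",), ("wait", "act")]

def simulate_universe(actions):
    # Stub for a complete model of the universe under those actions.
    return {"actions": actions}

def sentient_entities(future):
    # Stub for "every entity capable of having subjective experiences".
    return ["entity_a", "entity_b"]

def subjective_desirability(entity, future):
    # Stub for that entity's own evaluation of the hypothetical future.
    # In this toy version, every entity mildly prefers more action.
    return float(len(future["actions"]))

def choose_actions():
    best_score, best_actions = float("-inf"), None
    for actions in enumerate_action_sequences():          # step 1
        future = simulate_universe(actions)
        score = sum(subjective_desirability(e, future)     # step 2
                    for e in sentient_entities(future))
        if score > best_score:                             # step 3
            best_score, best_actions = score, actions
    return best_actions

The flat sum in step 2 is where the weighting problem mentioned above
actually lives: some principled exchange rate between one entity's
desirability scale and another's has to be chosen.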

Now, this omniscient super-thinker is obviously impractical, but we can view
real ethical systems as attempts to approximate its performance in the real
world. In principle this gives us an objective way to compare competing
ethical systems: feed them data, apply their recommendations, and see how
well they work out. In practice it is very hard for humans to do that in
normal social situations, but that doesn't invalidate the principle. It just
means that putting ethics on an objective footing would require the
invention of a new experimental method capable of generating good data
(perhaps by experimenting on AI-based society models).
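
Purely as a sketch of that comparison idea, and with the same caveat as
before: simulate_society, outcome_score, and the named systems below are all
hypothetical placeholders, but the loop itself is the point.

def simulate_society(scenario, ethical_system):
    # Stub for an AI-based society model in which agents act on the
    # recommendations of the given ethical system.
    return {"scenario": scenario, "system": ethical_system,
            "population": ["a", "b", "c"]}

def outcome_score(outcome):
    # Stub for summing desirability over every entity in the outcome,
    # judged by each entity's own standards.
    return float(len(outcome["population"]))

def compare_systems(systems, scenarios):
    # Feed each candidate system the same scenarios, apply its
    # recommendations, and see how well things work out.
    scores = {}
    for system in systems:
        scores[system] = sum(outcome_score(simulate_society(sc, system))
                             for sc in scenarios)
    return max(scores, key=scores.get)

With real models in place of the stubs, a call like
compare_systems(["rule_set_a", "rule_set_b"], ["scenario_1", "scenario_2"])
would pick whichever system produced better outcomes across the test
scenarios, i.e. the closer approximation of the ideal.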

Billy Brown


