Re: More silly but friendly ideas

From: Vladimir Nesov (robotact@gmail.com)
Date: Tue Jun 10 2008 - 09:04:28 MDT


On Tue, Jun 10, 2008 at 6:02 PM, Stathis Papaioannou <stathisp@gmail.com> wrote:
>
> What I mean by an ethical axiom is something like "causing suffering
> to conscious beings is bad". We can then take any proposed action and,
> examining it to see whether it causes suffering to conscious beings,
> decide whether or not it is bad. But a sadist is free to come along
> and proclaim that, on the contrary, causing suffering to conscious
> beings is good. There is no objective way of deciding which position
> is the "correct" one, no matter how intelligent you are. Values are,
> in the final analysis, arbitrary.
>

Such axioms are too crude, and will break down when situations become
more complex. Asking an AI to decide which actions are ethical is a
complex wish ( http://www.overcomingbias.com/2007/11/complex-wishes.html
), and it's easy to run into a situation where a simple set of
"ethical axioms" breaks down. Ethics is arbitrary in the sense that you
can't derive complex ethics from nothing, but you can't derive it from
a handful of simple axioms either. And maybe, at the metaethics level,
you can say that an action is ethical if it can be shown to be such
by inference from the structure of the real world and the
actions of target ethical agents in it. I'd say that ethics is a
generalized ethical agent: you take a human, its decisions over the
situations to which it's applicable (observed to be in in the real
world), and generalize them to other situations, including alternative
structures of the agent itself, up to dissolving the notion of a separate
agent, while staying grounded in the original structure and applicability
of the agent. This way, you obtain a measure over available actions,
including actions the original agent would never have thought of.
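To make the "generalize observed decisions to a measure over novel actions" idea concrete, here is a toy sketch of my own (not anything from the thread): the agent's observed decisions are (situation-features, approved) pairs, and a novel action is scored by similarity-weighted voting over those observations. The feature encoding, Gaussian similarity, and sample data are all illustrative assumptions.

```python
# Toy illustration (my own, not a proposed formalism): generalize an
# agent's observed approvals/disapprovals to a preference measure over
# situations it has never encountered.

import math

def score(observations, features):
    """Similarity-weighted average approval over observed decisions.

    observations: list of (feature_tuple, approved_bool)
    features: feature tuple for a novel situation/action
    Returns a value in [0, 1]; 0.5 means indifferent.
    """
    num = den = 0.0
    for obs_feats, approved in observations:
        # Gaussian similarity in feature space (bandwidth chosen arbitrarily).
        d2 = sum((a - b) ** 2 for a, b in zip(obs_feats, features))
        w = math.exp(-d2)
        num += w * (1.0 if approved else 0.0)
        den += w
    return num / den if den else 0.5  # no data: indifferent

# Hypothetical observed decisions, features = [harm_caused, benefit_produced]:
observed = [
    ((1.0, 0.0), False),  # pure harm: disapproved
    ((0.0, 1.0), True),   # pure benefit: approved
    ((0.5, 0.5), False),  # mixed case: disapproved
]

# A situation the original agent never faced still gets a score:
print(round(score(observed, (0.1, 0.9)), 3))
```

This is of course far too crude to survive the complexity the thread is discussing; the point is only the shape of the operation: observed behavior in, a measure over previously unconsidered actions out.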

-- 
Vladimir Nesov
robotact@gmail.com


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT