RE: Defining Right and Wrong

From: Billy Brown (BBrown@RealBusinessSolutions.com)
Date: Sat Nov 30 2002 - 10:40:53 MST


Samantha Atkins wrote:
> Even if you create a model that is logically inconsistent? You
> cannot isolate the essential problem by throwing out essential
> parts of the problem. Only the relatively unimportant mass of
> detail should be removed or a logically consistent analogue to the
> real problem created.

Of course. But I think you mistake my point.

I think that the central underlying problem of ethics is one that can
roughly be summarized as "How do we act so as to do as much good (or at
least as little harm) as we can?" We would like to be able to do so
reliably, in any situation, and we would ideally like to choose the very
best course of action instead of just a middling-good one.

My impossible ethics engine illustrated what it would take to actually
achieve this goal in the absence of any prior knowledge about ethics. It
differs from more conventional treatments of the subject in only two
significant ways that I can see:

1) It makes the immense, overwhelming complexity of the problem explicit
instead of sweeping it under the rug.
2) Instead of imposing a universal standard for what constitutes a "good" or
"bad" outcome, it evaluates each actor's experiences by its own standards.

So, do you see the goal of ethics as something different? Or is it just that
you don't see any point in thinking about the problem in this way?

> "Perfect" knowledge is a logical absurdity. Do you deny this?
> If so please show how it is possible. Information will always
> be limited, finite rather than infinite. Planning time will
> always be finite. Even a full blown AGI Power does not work by
> magic pixie dust that can do even the logically impossible.

Complete knowledge of the consequences of an action is only a practical
impossibility, not a logical one. In the real world it can never be
achieved, thanks to problems ranging from quantum uncertainty to the chaotic
nature of human minds and social systems. But one can easily imagine a
simulated "toy world" in which this is not true - make a version of Life
that runs for a finite number of turns, and you can easily predict all the
consequences of any given action.
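
To make that concrete, here is a rough Python sketch of what I mean. The
grid size, the horizon, and the "toggle one cell" action are just
illustrative choices on my part, not anything essential:

    def life_step(grid):
        """One deterministic step of Conway's Life on a small wrapped grid."""
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if (dr, dc) != (0, 0))
                nxt[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
        return nxt

    def consequences(grid, action, horizon):
        """Apply an 'action' (toggle one cell), then enumerate every future
        state out to the end of the toy world's finite lifetime."""
        r, c = action
        world = [row[:] for row in grid]
        world[r][c] ^= 1
        history = [world]
        for _ in range(horizon):
            world = life_step(world)
            history.append(world)
        return history   # complete knowledge of every consequence of the action

    world = [[0] * 8 for _ in range(8)]
    world[3][3] = world[3][4] = world[3][5] = 1        # a simple blinker
    futures = consequences(world, action=(2, 4), horizon=10)

Because the rules are deterministic and the world only runs for a fixed
number of turns, "perfect knowledge of the consequences" is just a finite
computation here - which is exactly why it is a toy world and not ours.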

Besides, I'm not saying that perfect knowledge is a requirement of good
ethics, as I mention below.

> Your approach is utterly unworkable so it cannot be said to be
> "more productive".

Really? Let's take a look at this.

Obviously, no one will ever be able to build my hypothetical ethics engine
(at least not in any universe that resembles this one). But that is not a
new problem in science. You don't have to have a perfect model before you
can start thinking about ethics.

The logical approach would be to treat this like any other difficult
research problem. Start by trying to build a model that works for some
easy-looking special case (say, simplified economic transactions between
really stupid agents, or cooperation/defection in games a little more
complex than the Prisoner's Dilemma). Once you find a way to model one toy
domain, you can use the experience you gained to tackle a harder one.
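
As a very crude example of the kind of starting point I have in mind, here
is a Python sketch of the simplest such toy domain - an iterated Prisoner's
Dilemma between two fixed rules of behavior. The payoff numbers and the
particular strategies are just placeholders:

    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(my_moves, their_moves):
        return their_moves[-1] if their_moves else 'C'

    def always_defect(my_moves, their_moves):
        return 'D'

    def play(rule_a, rule_b, rounds=100):
        """Run one match and return each agent's total payoff."""
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = rule_a(hist_a, hist_b)
            b = rule_b(hist_b, hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))

Trivial, obviously, but once something like this works you can start
complicating it - more agents, noisier information, richer sets of actions -
and carry the lessons forward to the next, harder domain.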

Eventually, this kind of research will reach the point where it can handle
agents and societies complex enough to serve as a model for actual humans.
At that point you can create a science of experimental ethics, in which you
run experiments comparing different sets of behavioral rules to see how well
they work. Unlike abstract philosophical reasoning, this kind of
experimentation would produce actual data about how systems perform under
different circumstances, what situations they handle well, where their weak
areas lie, and so on.
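
To be clear about what I mean by "actual data", here is a sketch of what one
such experiment might look like in Python. It uses the same sort of match
loop as the sketch above, but now runs every rule against every other across
many randomly varied environments and records how each one fares; the
particular rules and parameters are, again, just stand-ins:

    import itertools
    import random
    import statistics

    def tit_for_tat(mine, theirs):
        return theirs[-1] if theirs else 'C'

    def grim_trigger(mine, theirs):
        return 'D' if 'D' in theirs else 'C'

    def always_defect(mine, theirs):
        return 'D'

    def run_match(rule_a, rule_b, payoff, rounds=50):
        ha, hb, sa, sb = [], [], 0, 0
        for _ in range(rounds):
            a, b = rule_a(ha, hb), rule_b(hb, ha)
            pa, pb = payoff[(a, b)]
            sa, sb = sa + pa, sb + pb
            ha.append(a)
            hb.append(b)
        return sa, sb

    def experiment(rules, trials=200):
        """Score every rule against every other across many randomly varied
        payoff environments; report each rule's mean payoff."""
        scores = {name: [] for name in rules}
        for _ in range(trials):
            t = random.uniform(4.0, 6.0)   # 'temptation' payoff varies by world
            payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, t),
                      ('D', 'C'): (t, 0), ('D', 'D'): (1, 1)}
            for (na, fa), (nb, fb) in itertools.combinations(rules.items(), 2):
                sa, sb = run_match(fa, fb, payoff)
                scores[na].append(sa)
                scores[nb].append(sb)
        return {name: statistics.mean(s) for name, s in scores.items()}

    print(experiment({'tit_for_tat': tit_for_tat,
                      'grim_trigger': grim_trigger,
                      'always_defect': always_defect}))

The output is nothing profound, but it is data: you can see which rules hold
up as the environment changes and where they break down, and anyone who
disagrees can rerun the experiment and try to falsify the result.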

At this point you've turned the majority of ethics into a combination of
software engineering and experimental science, which to my mind is a
necessary step for any serious Friendly AI project. You (or the AI) can then
go on to build ever-better models, with more predictive power over wider
ranges of circumstances. You can build up deep knowledge about the results
of different notions of "good" and "bad", about the behavior of different
ethics algorithms, and about the effects of environmental changes like new
kinds of minds. And you can ground your conclusions in experimental results,
rather than unfalsifiable argumentation.

That sounds pretty productive to me.

Billy


