Understanding morality (was: SIAI's flawed friendliness analysis)

From: Mark Waser (mwaser@cox.net)
Date: Sat May 10 2003 - 09:55:23 MDT


Eliezer said: "It is not a trivial thing, to create a mind that embodies the
full human understanding of morality. There is a high and beautiful sorcery
to it."

Ben said: "I am not sure it is a desirable or necessary goal, either. The
human understanding of morality has a lot of weaknesses, and a lot of ties
to confusing aspects of human biology. I believe that ultimately an AI can
be *more* moral than nearly any human or human group."

I strongly believe that we shouldn't wait for an AI to understand morality.
I believe that we, as a society, desperately need to develop formal/codified
structures/procedures for representing, making, and defending moral (and
other) decisions that human beings can use as well as, eventually, Friendly
AIs. Obviously, these need to be Bayesian and need to take into account the
very different "facts" that everyone brings to the table. My assumption is
that such a system will, at the very least, make many of the inconsistent
arguments that are rife today impossible, make explicit the hidden agendas
behind them, and document everything for posterity.
Imagine a world where politicians are required to document their views in
such a system (or risk being ridiculed or ignored) and public debates are
informed by an accurate, IMPARTIAL in-depth statement of all views.
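
As an illustration of the Bayesian point only (the names, numbers, and
fields below are hypothetical, not a proposal for the real representation),
here is a minimal Python sketch of how a single disputed claim might be
recorded so that each participant's priors and evidence weights are
explicit and auditable:

    from dataclasses import dataclass, field

    @dataclass
    class Assessment:
        """One participant's explicit inputs on a single claim."""
        participant: str
        prior: float                   # P(claim) before the shared evidence
        likelihood_given_true: float   # P(evidence | claim true)
        likelihood_given_false: float  # P(evidence | claim false)

        def posterior(self) -> float:
            """Standard Bayesian update; every term is on the record."""
            num = self.prior * self.likelihood_given_true
            den = num + (1.0 - self.prior) * self.likelihood_given_false
            return num / den

    @dataclass
    class Claim:
        statement: str
        assessments: list = field(default_factory=list)

        def report(self) -> str:
            """Impartial summary: all priors and posteriors, side by side."""
            lines = [self.statement]
            for a in self.assessments:
                lines.append(f"  {a.participant}: prior={a.prior:.2f}"
                             f" -> posterior={a.posterior():.2f}")
            return "\n".join(lines)

    # Hypothetical example: same evidence, very different priors.
    claim = Claim("Policy X reduces harm")
    claim.assessments.append(
        Assessment("Alice", prior=0.8,
                   likelihood_given_true=0.7, likelihood_given_false=0.4))
    claim.assessments.append(
        Assessment("Bob", prior=0.2,
                   likelihood_given_true=0.7, likelihood_given_false=0.4))
    print(claim.report())

The same evidence moves Alice and Bob toward different posteriors, but the
source of the disagreement (their priors) sits on the record instead of
hiding inside rhetoric.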

Clearly, any formal/codified process can easily be automated, but many of the
decisions that must be made as part of such a process (such as determining
whether or not two arguments are the same, collapsing/combining argument
trees, or, even more importantly, summarizing arguments to a reasonable
length) require either human-level or greater intelligence, or some further
(formal/codified) process to collect a consensus of participating human
beings. I strongly believe that such a process is necessarily part of a
Friendly AI.
More importantly, it is a part that can be worked on now, independently of
the Friendly AI, and which will provide significant benefits even if it is
never used for a Friendly AI.
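
To make concrete which parts are mechanical and which need a further
process, here is a minimal Python sketch (the names and structure are
hypothetical, not a design commitment) of an argument tree where the
bookkeeping is automated but the judgment calls named above (whether two
arguments are the same, and how to summarize a subtree) are left as
explicit hooks to be filled by a consensus of participants or, eventually,
by a Friendly AI:

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class Argument:
        """A node in an argument tree: a claim plus its children."""
        text: str
        supports: List["Argument"] = field(default_factory=list)
        opposes: List["Argument"] = field(default_factory=list)

    def merge(a: Argument, b: Argument,
              same: Callable[[str, str], bool]) -> Optional[Argument]:
        """Combine two arguments judged to be the same.

        Pooling the children is mechanical; the `same` predicate is the
        judgment call, supplied from outside (a consensus vote of
        participants or, someday, a sufficiently capable AI).
        """
        if not same(a.text, b.text):
            return None
        merged = Argument(a.text)
        merged.supports = a.supports + b.supports
        merged.opposes = a.opposes + b.opposes
        return merged

    def summarize(node: Argument,
                  summarizer: Callable[[Argument], str]) -> str:
        """Summarizing a subtree to a reasonable length is likewise
        delegated to an external judgment, not hard-coded here."""
        return summarizer(node)

    # Hypothetical usage: a naive string comparison standing in for the
    # human (or consensus) judgment of sameness.
    x = Argument("Transparency reduces corruption")
    y = Argument("transparency reduces corruption")
    combined = merge(x, y,
                     same=lambda s, t: s.strip().lower() == t.strip().lower())

Everything outside the two callables runs unattended; everything inside
them is exactly the part that, today, requires people.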

I've thought about such structures/processes for years but have never spent
the time necessary to formalize them and attempt to automate them.
Initially, I didn't even think of such structures as tools for moral
decisions, but rather as a way to coordinate and document decision-making (or
even research) among (possibly very large) groups of people. I think that
this would be an excellent project with large social pay-offs and that this
group could contribute tremendously to it. I am more than willing to do the
automation part but, obviously, need a lot of help on the design and
formalization parts. Are there others on this list who are interested in
attempting such a thing and willing to contribute ideas and time? If so, I
will start to pull together all of my notes into some sort of coherent
documentation as to where I am, what I've thought about, and where a group
could go from here.

Any comments?


