From: Patrick Crenshaw (firstname.lastname@example.org)
Date: Tue Aug 15 2006 - 01:32:41 MDT
I've done some thinking about this.
The first conclusion I came to is that the morality of an action has
to do with how much it changes the integral of some Value function
over all space at t = infinity. Given any moral system, I can give
you a Value function like this that would describe it.
The second thing is that for morality to be objective (and I think
that it is; the idea that morality is just up in the air seems to me
like people aren't trying hard enough), the Value function must be a
physically measurable quantity.
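The idea can be sketched as a toy computation: treat "space" as a discrete collection of cells, and score an action by the difference it makes to the summed Value at the horizon. Everything concrete here (the list-of-cells world, the identity-like evolution step, the particular value function) is a stand-in assumption for illustration, not part of the proposal itself.

```python
def total_value(world, value_fn):
    """Integrate (here: sum) the Value function over a discretized space."""
    return sum(value_fn(cell) for cell in world)

def moral_worth(action, world, value_fn, evolve):
    """Moral worth of an action: the change it makes to total Value
    at the horizon (a stand-in for t = infinity), comparing the
    evolved world with and without the action having been taken."""
    world_with_action = evolve(action(world))
    world_without_action = evolve(world)
    return (total_value(world_with_action, value_fn)
            - total_value(world_without_action, value_fn))
```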
Next there is the idea that the Value of an object comes in two
parts: the intrinsic Value and the derived Value. The intrinsic Value
is just the Value of the matter in the object existing and being in
that particular configuration. The derived Value is the difference
between the total Value of the universe at t = infinity if the object
continues to exist and if it ceases to exist at the time the derived
Value is evaluated. That's just a fancy way of saying that the
derived Value is the effect an object has on the Value of everything
else.
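The derived Value is a counterfactual difference, which a toy sketch can make concrete: evolve the universe once with the object and once without it, and subtract the totals. As before, the universe representation, the value function, and the evolution step are illustrative assumptions, not a real physical model.

```python
def total_value(universe, value_fn):
    """Sum the Value function over everything in the universe."""
    return sum(value_fn(obj) for obj in universe)

def derived_value(obj, universe, value_fn, evolve):
    """Derived Value of obj: total Value at the horizon if it
    continues to exist, minus total Value if it ceases to exist now,
    i.e. its effect on the Value of everything else."""
    with_obj = evolve(list(universe))
    without_obj = evolve([x for x in universe if x is not obj])
    return (total_value(with_obj, value_fn)
            - total_value(without_obj, value_fn))
```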
And lastly (I won't explain why right now, because I am terrible at
doing so), the intrinsic Value function is something like the Gibbs
free energy. The idea is that it is the complexity of an object (the
log of it, really), but taking into account that a highly complex
object at zero temperature isn't going to affect much. This is the
part I am least sure of, but it's going to be something like that.
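One way to render that intuition as a toy formula: let complexity enter through its log (an entropy-like term), damped by a temperature factor so the contribution goes to zero as T goes to zero. The particular damping used here, T / (1 + T), is an arbitrary placeholder chosen only to have the right limits, not anything derived from the Gibbs free energy.

```python
import math

def intrinsic_value(n_configurations, temperature):
    """Toy intrinsic Value: log of complexity, scaled so that a
    highly complex object at zero temperature contributes nothing.
    The damping factor T / (1 + T) is a made-up placeholder."""
    return math.log(n_configurations) * temperature / (1.0 + temperature)
```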
On 8/15/06, Anthony Mak <email@example.com> wrote:
> Dear all,
> Does anyone know any work or paper people have done in
> the past or present about how to measure (quantify) morality?
> The motivation is: for a learning system to learn how to be
> moral, I believe it is necessary to have an objective function
> to measure how "moral" the machine already is, so that
> a machine learning algorithm can work. Are there any
> works, for example in philosophy, where people attempted
> to devise a scheme/method/framework to measure morality?
> At the moment, I can only imagine using some questionnaire-type
> query to attempt to measure a person's "moral IQ",
> be it a normal person or machine person.
> PS. I guess another approach is to try to find all the +ve
> and -ve effects from an agent's actions and try to sum them.
> Any reference to papers or books or other source will be
> extremely helpful.
> Anthony Mak
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT