From: Anthony Mak (firstname.lastname@example.org)
Date: Mon Aug 14 2006 - 22:01:11 MDT
Does anyone know any work or paper people have done in
the past or present about how to measure (quantify) morality?
The motivation is that, for a learning system to learn how to be
moral, I believe it is necessary to have an objective function
that measures how "moral" the machine already is, so that
a machine learning algorithm can work. Is there any
work, for example in philosophy, where people have attempted
to devise a scheme/method/framework to measure morality?
At the moment, I can only imagine using some questionnaire-style
queries to attempt to measure a person's "moral IQ",
be it a normal person or a machine person.
PS. I guess another approach is to try to find all the +ve
and -ve effects of an agent's actions and try to sum them up.
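To make the PS concrete, here is a minimal sketch of that "sum the
effects" idea, assuming each action is described by a list of signed
effect values (+ve for benefits, -ve for harms) and an optional weight
per effect. The function name and weighting scheme are illustrative
assumptions, not an established metric.

```python
# Hypothetical "sum of effects" objective: score an action by the
# weighted sum of its signed consequences. +ve values are benefits,
# -ve values are harms; weights are an illustrative assumption.

def moral_score(effects, weights=None):
    """Return the weighted sum of signed effect values.

    effects: list of floats, one per consequence of the action.
    weights: optional list of floats, same length as effects;
             defaults to equal weighting.
    """
    if weights is None:
        weights = [1.0] * len(effects)
    return sum(w * e for w, e in zip(weights, effects))

# Example: two benefits and one harm, equally weighted.
print(moral_score([1.0, -0.5, 2.0]))  # -> 2.5
```

Such a score could serve as the objective a learning algorithm tries to
maximize, though identifying and valuing the effects themselves is of
course the hard part.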
Any reference to papers, books, or other sources would be appreciated.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT