From: Heartland (firstname.lastname@example.org)
Date: Thu Dec 22 2005 - 00:41:37 MST
Would it make more sense to replace CEV with the concept of goodness? It
seems to me that implementing an invariant of goodness for an initial
dynamic should encompass all of the goals of Friendliness, while such an
initial dynamic should be more tractable to build than one based on CEV
(emphasis on the uncertainty of "seems" and "should").
Keeping in the spirit of Yudkowsky's design of FAI, the good AI would, at
each step, perform (preferably provable) actions that are always good
according to an ever-evolving moral judgment subsystem, while the supergoal
of this good AI would be to improve that judgment system toward a better
approximation of the concept of goodness.
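The step/improve loop described above could be sketched, very loosely, like this. All of the names, the scoring scheme, and the update rule are invented placeholders for illustration, not anything from Yudkowsky's actual design or from any real proposal:

```python
# Hypothetical sketch of the "good AI" loop: act only when an action is
# judged good by the current moral judgment subsystem, and separately
# refine that subsystem (the supergoal). Everything here is a toy stand-in.

def moral_score(action, weights):
    """Toy moral judgment subsystem: weighted sum over an action's effects."""
    return sum(weights.get(k, 0.0) * v for k, v in action["effects"].items())

def step(actions, weights, threshold=0.0):
    """Pick the best available action, but abstain unless it clears the
    'judged good' bar (a crude stand-in for 'provably good')."""
    best = max(actions, key=lambda a: moral_score(a, weights))
    if moral_score(best, weights) <= threshold:
        return None  # nothing judged good enough; do nothing
    return best

def refine_judgment(weights, feedback):
    """Supergoal: improve the judgment system toward a better approximation
    of goodness -- here, trivially, by adjusting criterion weights."""
    return {k: weights.get(k, 0.0) + feedback.get(k, 0.0)
            for k in set(weights) | set(feedback)}
```

The point of the sketch is only the separation of concerns: the action loop consults the judgment subsystem as an invariant check at every step, while improvement of the subsystem itself is a distinct, ongoing process.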
The most important question is, of course, whether a good AI would do as
"good" a job as a perfectly working Friendly AI. My initial opinion is that
it should, since Friendliness is an aspect of doing good, meaning that all
of a good AI's actions would necessarily be Friendly. I imagine that such an
AI would eventually implement an optimal version of what we mean by
Friendliness (a good thing) as a consequence of doing good.
The initial dynamic of such an AI would obviously have to be jump-started by
a human moral judgment system with a human concept of goodness, which would
ultimately transcend the human value system and include everyone.
One of the main practical benefits this version of AI might have over CEV is
that it could perhaps shift more of the burden of implementing a moral
framework from humans to the AI, thus decreasing the amount and complexity
of necessary human work. Obviously, this assumes that "goodness" is easier
(and sufficient) to implement than Friendliness, which might not be true in
practice.