From: Stefan Pernar (firstname.lastname@example.org)
Date: Sat Nov 24 2007 - 18:13:12 MST
On Nov 25, 2007 6:46 AM, Tim Freeman <email@example.com> wrote:
> From: "Wei Dai" <firstname.lastname@example.org>
> >To take the simplest example, suppose I get a group of friends together and
> >we all tell the AI, "at the end of this planning period please replace
> >yourself with an AI that serves only us." The rest of humanity does not know
> >about this, so they don't do anything that would let the AI infer that they
> >would assign this outcome a low utility.
> Good example. It points to the main flaw in the scheme -- I can't
> prove it's stable, and a solution to the Friendly AI problem has to be
> stable. Here "stability" roughly means that our Friendly AI isn't
> going to construct an unfriendly AI and then allow the new one to take
> over. However, if I look more closely, I don't know what "stable" means.
For an AI to be friendly, it would have to want to be friendly. The question
of whether friendly AI is possible is therefore equivalent to asking whether
one can rationally want to be friendly.
I wrote a paper that argues that friendliness is an emergent phenomenon
among interacting goal-driven agents under evolutionary conditions.
The paper is available as a PDF: Practical Benevolence - a Rational
Philosophy of Morality
<http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-11-17_isotemp.pdf>
--
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT