From: Stefan Pernar (firstname.lastname@example.org)
Date: Mon Apr 28 2008 - 19:13:39 MDT
On Tue, Apr 29, 2008 at 7:57 AM, Thomas McCabe <email@example.com>
> On Mon, Apr 28, 2008 at 7:37 PM, Stefan Pernar <firstname.lastname@example.org>
> > On Tue, Apr 29, 2008 at 7:10 AM, Thomas McCabe <email@example.com>
> > wrote:
> > Interesting thought experiment. However, perfect friendliness under all
> > circumstances is not really the goal being aimed for, as it is an
> > unrealistic, unobtainable ideal.
> By "perfect friendliness", I mean that the FAI should always make the
> Friendly decision, not that the outcome should always be Friendly
> (which is impossible; see the debate at
The philosophical point that friendliness is inherently limited is well
taken. For practical purposes, however, I think it is important to aim for
'as friendly as possible'.
Intelligence is defined as an agent's ability to maximize a given utility
function. Friendliness can be expressed as an agent's utility function.
An agent whose utility function is to be friendly will therefore be
friendlier the more intelligent it becomes.
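The argument can be sketched as a toy model. This is purely illustrative, not anything from the thread: the names `friendliness_utility` and `search_budget` are hypothetical, and "intelligence" is crudely proxied by how many candidate actions the agent can evaluate before acting.

```python
import random

def friendliness_utility(action):
    # Hypothetical stand-in for a friendliness measure: here, just a
    # number attached to each candidate action.
    return action["friendliness"]

def choose_action(candidates, search_budget):
    # Crude proxy for intelligence: the number of candidate actions the
    # agent can evaluate before committing to one.
    considered = candidates[:search_budget]
    return max(considered, key=friendliness_utility)

# Toy action space: 100 actions with random friendliness scores.
random.seed(0)
actions = [{"id": i, "friendliness": random.random()} for i in range(100)]

weak_agent = choose_action(actions, search_budget=3)
strong_agent = choose_action(actions, search_budget=100)

# Under the same friendliness utility function, the more "intelligent"
# agent (bigger search budget) never chooses a less friendly action.
assert strong_agent["friendliness"] >= weak_agent["friendliness"]
```

In this toy model the conclusion follows by construction: a larger search budget evaluates a superset of the weaker agent's candidates, so the maximum it finds can only be as good or better. Whether real optimizers behave this monotonically is, of course, the substantive question.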
--
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:01:37 MDT