From: Robin Lee Powell (firstname.lastname@example.org)
Date: Tue Jun 06 2006 - 12:42:59 MDT
That is indeed very different; as I said, the rant wasn't much
directed at you.
I could, of course, talk about my *own* willingness to guarantee my
own future friendliness, but that's certainly not the type of
guarantee you're talking about, so there's not much point.
On Tue, Jun 06, 2006 at 11:33:15AM -0700, Ben Goertzel wrote:
> Hi Robin,
> Perhaps I mis-stated Hugo's opinion...
> I am sure that he does not think a kind, moral superintelligent being
> is IMPOSSIBLE.
> What he thinks, rather, is that making any kind of GUARANTEE (even a
> strong probabilistic guarantee) of the kindness/morality/whatsoever of
> a massively superhumanly intelligent being is almost surely
> impossible... no matter what the being's *initial* design...
> This is a very different statement.
> -- Ben
> On 6/6/06, Eliezer S. Yudkowsky <email@example.com> wrote:
> >Robin Lee Powell wrote:
> >> It blows my mind that any intelligent and relevantly-knowledgeable
> >> person would have failed to perform this thought experiment on
> >> themselves to validate, as proof-by-existence, that an intelligent
> >> being that both wants to become more intelligent *and* wants to
> >> remain kind and moral is possible.
> >> Really bizarre and, as I said, starting to become offensive to me,
> >> because it seems to imply that my morality is fragile.
> >While I agree in general terms with your conclusion, I feel obliged to
> >point out that being personally offended by something is not evidence
> >against it.
> >Eliezer S. Yudkowsky http://intelligence.org/
> >Research Fellow, Singularity Institute for Artificial Intelligence
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/