Re: Friendly Existential Wager

From: James Higgins (jameshiggins@earthlink.net)
Date: Fri Jun 28 2002 - 14:14:26 MDT


At 03:44 PM 6/28/2002 -0400, Eliezer S. Yudkowsky wrote:
>Mark Walker wrote:
> >
> > E.Y. thinks Friendliness first, B. G. thinks AGI first. Who is right?
> > Suppose we don't know. How should we act? Well either attempting to
> > design for Friendliness before AGI will be effective in raising the
> > probability of a good singularity or it will not.
>
>Actually, my philosophy differs from Ben's in that I think that you need
>substantially more advance knowledge, in general, to bring any kind of AI
>characteristic into existence, including Friendliness. From Ben's
>perspective, this makes me arrogant; from my perspective, Ben's reliance
>on emergence

No, no, no, no, no. Statements like this make you appear immature. Your
belief does NOT make you arrogant. Your unwavering confidence in that
belief, and your inability to concede that others could be correct where
you are wrong, are what make you arrogant.

>is wishful thinking. I do think that understanding of Friendly AI follows
>from understanding of AI, rather than the other way around. You can't
>have Friendly AI without AI; you can't build moral thoughts unless you
>know what thoughts are and how they work.

Let's assume there are Sentient Aliens out there in the universe. Please
pick any species you prefer and write a paper detailing, in considerable
detail, their psychology and how we should interact with them in order to
improve the odds of a friendly outcome for us should our species ever meet
theirs.

James Higgins
