Re: Volitional Morality and Action Judgement

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Mon May 24 2004 - 11:00:58 MDT


On Mon, May 24, 2004 at 11:11:04AM -0400, Ben Goertzel wrote:
> > Ben Goertzel wrote:
> > >
> > > We've had this discussion before, but I can't help pointing out
> > > once more: We do NOT know enough about self-modifying AI systems
> > > to estimate accurately that there's a "zero chance of accidental
> > > success" in building an FAI. Do you have a new proof of this that
> > > you'd like to share? Or just the old hand-wavy attempts at
> > > arguments? ;-)
> >
> > Ben? Put yourself in my shoes for a moment and ask yourself the
> > question: "How do I prove to a medieval alchemist that there is no
> > way to concoct an immortality serum by mixing random chemicals
> > together?"
>
> Pardon my skepticism, but I don't believe that the comparison of
>
> A) your depth of knowledge about FAI, compared to mine
>
> with
>
> B) modern chemical, physical and biological science, versus the
> medieval state of knowledge about these things
>
> is a good one.

That would be true, if that were what he was comparing.

What he's actually comparing is belief in the possibility of
accidentally creating an FAI with belief in the possibility of
accidentally concocting an immortality serum. Actually, he wasn't even
comparing that: he was comparing the risk factors associated with
acting as though each of those beliefs mirrored reality.

We now know the latter is impossible (or at least we think we know it).
At this point, we have no idea whether the former is possible; that
doesn't change the fact that *trying* to achieve the former is a very,
very bad idea, which is what the 90% of his example that you snipped
was talking about.

> Next, a note on terminology. When you said "it's impossible to create
> an FAI by accident" I saw two possible interpretations:
>
> 1) it's impossible to create an FAI without a well-worked-out theory
> of AI Friendliness, just by making a decently-designed AGI and
> teaching it
>
> 2) it's impossible to create an FAI without trying at all, e.g. by
> closing one's eyes and randomly typing code into the C compiler
>
> Of course, 2 is almost true, just like a monkey typing Shakespeare is
> extremely unlikely. Since this interpretation of your statement is
> very uninteresting, I assumed you meant something like 1. My
> statement is that, so far as I know, it's reasonably likely that
> building a decently-designed AGI and teaching it to be nice will lead
> to FAI.

You've just added a condition to your base conditions and treated the
modified statement as a logical outgrowth of those conditions, without
any chain of reasoning at all. Your base conditions do *not* say
anything about teaching it to be nice.
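(For scale on the monkey/random-typing point, here's a back-of-the-
envelope sketch; the program size and alphabet are made-up but generous
assumptions, not anything Ben or Eliezer specified:

    Suppose there are ~100 typeable characters, and suppose some
    particular 10,000-character C program would compile to an FAI.
    The chance of producing that exact text by uniform random typing
    is 100^-10000, i.e. about 10^-20000. Even 10^80 monkeys typing
    10^9 characters per second for 10^17 seconds would cover only
    on the order of 10^106 candidate strings.

Which is why interpretation 2 is uninteresting; the disagreement is
entirely about interpretation 1.)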

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/  ***  I'm a *male* Robin.
"Many philosophical problems are caused by such things as the simple
inability to shut up." -- David Stove, liberally paraphrased.
http://www.lojban.org/  ***  loi pimlu na srana .i ti rokci morsi
(Lojban: "The plumage is not relevant; this one is stone dead.")

