Re: Volitional Morality and Action Judgement

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sat May 29 2004 - 16:36:09 MDT


Mark Waser,

You wrote:
>
> Once Eliezer is no longer willing to engage with
> an individual who is probably closest to him in
> terms of understanding/drive/etc., then,
> *in my way of looking at things*
> Eliezer has forfeited a huge chunk of his
> responsibility/moral authority/effectiveness/whatever.
>

Being unwilling to engage another in debate forfeits nothing, except the
engagement.

>
> I also think that there are a number of missed
> opportunities both in the theory and in the attempt
> at spreading the meme.
>

Could you detail those opportunities and why you consider them missed?

>
> 1. Why do you believe that a single FAI is the best strategy?
>

a) It is simpler to create.

b) Having one being around with the capability of destroying humanity carries
less risk than having more than one, in the same way that having one human
being with a Pocket Planetary Destruct (TM) device is less risky than having
more than one.

>
> To me, it's a single point of failure or like a
> mono-culture of crops in agriculture. One mistake
> or one disease and POOF! smiley faces everywhere.
>

This is a false analogy. With crops there are many species and varieties
that will approach the desired result more or less closely. Choosing a
variety that doesn't produce well doesn't end humanity. With a recursively
self-improving (RSI) AI the outcome is binary: some sort of continued life,
or annihilation. This is not a false dichotomy; life/death is about as
binary as you can get.

>
> Why do you think that NASA uses multiply redundant
> systems?
>

The multiply redundant systems in NASA's launch vehicles were created
because redundancy reduces the overall risk of failure. This is not true of
an RSI AI. An RSI AI is analogous to an entire launch vehicle that might
kill you: if launching the first one doesn't kill you, you might try again;
if it does, you're dead.
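
To put rough numbers on the difference (the probabilities below are made up
purely for illustration, not estimates of real risk): redundancy helps when
disaster requires *every* unit to fail at once, and hurts when disaster
requires only *one* unit to fail. A minimal Python sketch:

    # Hypothetical failure probability for any single unit -- an assumed
    # number for illustration only, not an estimate.
    p = 0.1

    # Redundant subsystems (the NASA case): disaster only if ALL n fail.
    def p_disaster_redundant(n):
        return p ** n                    # shrinks as n grows

    # Multiple independent beings, each able to end things on its own:
    # disaster if ANY ONE of the n fails.
    def p_disaster_any_one(n):
        return 1 - (1 - p) ** n          # grows as n grows

    for n in (1, 2, 3):
        print(n, p_disaster_redundant(n), p_disaster_any_one(n))
    # Approximate output:
    # n=1: 0.1    vs 0.1
    # n=2: 0.01   vs 0.19
    # n=3: 0.001  vs 0.271

That is the sense in which one Pocket Planetary Destruct (TM) device is
safer than several, and in which adding more RSI AIs does not buy you the
kind of safety margin NASA gets from redundant hardware.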

>
> 2. Why do you believe that relying on Eliezer and
> only Eliezer is a good strategy.
>

It is a lousy idea. I don't believe MW ever said it was a good idea.

>
> [snip] I do expect serious engagement with the most
> seriously engaged other participants.
>

It *is* always enjoyable when that happens, and often informative, but
perhaps you shouldn't always expect it. Sometimes people simply disagree
about ideas; we *are* all running on slightly different brainware and
different knowledge bases. Once an idea has been beaten to death with
little progress on either side, it is sometimes worthwhile to disengage.

>
> Eliezer has devolved to "everything is too dangerous"
> but "I'm much too busy to discuss it or even write it
> up for anybody" and I think that is a REALLY BAD
> THING(tm).
>

I suspect this is a REALLY BAD THING(tm) to you because you may be relying
on Eliezer. My advice is: don't. One of SIAI's stated goals is to grow its
programmer team, and with that team to improve and develop FAI. Either
donating yourself or persuading others to donate would help tremendously
toward that goal. And part of that goal is: "making it so that Eliezer is
neither considered to be nor in fact a failure point or bottleneck."

In the Ben vs. Eliezer debates, each party has a set of cognitive models they
are using to reason about the ideas. Ben's model projects outcomes along
one trajectory, Eliezer's along another. The models are complex, and would
not be easy to communicate in human language even if both parties had
perfect introspection, which they do not. The parties may never agree until
one of them builds a working AI, points at it, and says: "There. That's what
I'm talking about."

Michael Roy Ames


