Re: Volitional Morality and Action Judgement

From: Mark Waser (mwaser@cox.net)
Date: Sat May 29 2004 - 12:34:22 MDT


Michael,

    Thank you for an excellent, reasoned reply (even though I'm going to
argue with it).

> The actual principle is 'you shouldn't do anything you think might be
> an existential risk until you've worked it through and proved to
> a few relevant experts that it probably isn't'.

I fully agree with this statement; however, I'm seeing a lot of debate
between Eliezer and Ben that has devolved to the point where Eliezer is no
longer willing to fully engage with valid points. Once Eliezer is no longer
willing to engage with the individual who is probably closest to him in
terms of understanding/drive/etc., then, *in my way of looking at things*,
Eliezer has forfeited a huge chunk of his responsibility/moral
authority/effectiveness/whatever.

> You may have noticed that quite a few people on SL4 /aren't/ whining
> about not understanding the theory and how this isn't their fault. Or
> perhaps you missed that?

I've noticed a lot of people asking for clarification of where they are
going wrong and not receiving it. I, personally, am not whining about not
understanding the theory. I think that I've got a good grasp on some of it
and am working on the rest. I also think that there are a number of missed
opportunities, both in the theory and in the attempt at spreading the meme.

> Teenagers aren't qualified to judge existential risks or do serious
> science, period. It is unfortunate that the pathetic state of academic AI
> forced one to do some of the vital intermediate work. Since everyone else
> did and for the most part continues to do noticeably worse, this criticism
> while relevant does not strike me as calling for a change in roles.

Change in roles? I'm not sure what you mean. I don't want/expect Eliezer
to change roles. I would like to see him work more collaboratively.

> > Relying upon a single point of failure (meaning both a single FAI and a
> > single you) is incredibly foolish.
>
> This is the best strategy we have for now.

I suspect that we'll end up just having to agree to disagree on this one but
. . . .

1. Why do you believe that a single FAI is the best strategy? To me, it's
a single point of failure, like a monoculture of crops in agriculture. One
mistake or one disease and POOF! Smiley faces everywhere. Why do you think
that NASA uses multiply redundant systems?

2. Why do you believe that relying on Eliezer and only Eliezer is a good
strategy? He's only human and ANY human makes mistakes, has prejudices,
overlooks things, and can always benefit from collaboration.

I really, vehemently disagree that this is even NOT a BAD strategy, much
less a good one, much less the best strategy.

> This is the best strategy we have for now. However you may have noticed
> the SIAI seed AI team recruitment call in the last newsletter, so if you
> think you can improve on this by all means send someone in or get in
> touch yourself.

I signed up on the seedaiwannabes list when I first joined SL4 quite some
time ago, and I recently made sure that I was still signed up. I've been
getting myself up to speed on a number of things, keeping myself up to speed
on others, and doing some work here and there. Personally, though, I've
often wondered whether the seedaiwannabes list is just a honeypot to keep
dangerous crackpots at bay . . . . :-)

> SL4 is in theory an information-dense debating forum and news feed, not a
> chummy chat list. While in practice we're pretty informal anyway, if you
> want a lighter atmosphere and on-demand snappy explanations please try the
> #sl4 IRC channel.

And my problem is that recently it HASN'T appeared to be information-dense.
I've seen way too much refusal to engage, and that's what I'm objecting to.
I don't want/need/expect chumminess, a lighter atmosphere, or even on-demand
snappy explanations for me -- but I do expect serious engagement with the
most seriously engaged other participants.

Anyway, I don't mean to slam you, and I'm probably picking on Eliezer a bit
much as well, but it seems as if recently (and I've been around for a while)
Eliezer has devolved to "everything is too dangerous" but "I'm much too busy
to discuss it or even write it up for anybody", and I think that is a REALLY
BAD THING(tm).

        Mark


