Re: Suicide by committee (was: How hard a Singularity?)

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Jun 27 2002 - 11:13:51 MDT


At 06:11 AM 6/27/2002 -0400, Eliezer S. Yudkowsky wrote:
>It is not my entire goal. It is nowhere near my goal. My goal is to
>*actually ensure* that any seed AI built is *successfully* Friendly.

So creating an organization to promote the inclusion of Friendliness into
seed AIs is stupid how?

>This is a Singularity problem. You cannot solve Singularity problems with
>silly little human solutions like committees. All you can do is create
>the illusion of effectiveness and authority, while actually involving
>petty politics in the problem and thereby destroying all hope of a correct
>solution. I will say it again: You cannot solve Singularity problems by
>inventing committees. The inability of any human to be entrusted with AI
>morality is a Singularity problem. If the question were getting people to
>trust AI morality, instead of *how to actually do it* - or, more to the
>point, if I was dumb enough to see the issues in those terms - then yes, I
>could have "solved" this problem by creating a committee to decide on AI
>morality, which would have a greater appearance of authority and
>trustworthiness. But you cannot solve Singularity problems like that.
>Political problems, yes; Singularity problems, no.

God damn it. Do you speak English? Do you actually READ what I type? How
many times do I have to say that NO ONE wants to turn the problem over to a
committee! As Ben put it, this would be an advisory board. What, you're
telling us that neither you nor anyone working on such things could even
benefit from advice from your peers?

>My interest is *not* in convincing people that solutions will work. I
>want a solution that *does work*. I suppose, as a secondary goal, that I
>want people to know the truth, but that is not primary; solving the
>problem is primary. It is not supposed to be persuasive, it is supposed
>to ACTUALLY WORK. Lose sight of that and game over.

Yes, but have you actually considered the idea that you could be
wrong? That YOUR ideas may not work? Have you considered what would
happen if this were the case? As they say, two heads are better than one
(which, btw, I very much agree with). So having a few different people all
working on the Friendliness problem would be highly beneficial, if for no
other reason than that it would give all of them alternate ideas to look at
and think about, which might then improve their own designs.

>I know a *lot* of AI morality concepts that sound appealing but are
>utterly unworkable. For that reason, above all else, I am scared to death
>of committees. It seems very predictable what the result will be and it
>will be absolutely deadly.

Would you please get off the committee thing. You're sounding like a broken
record because you just keep repeating the same point, over and
over. Advisory board, not committee (I should NEVER have called it such -
my bad). Everyone can always use some good advice.

>>>That said: This is a fucking stupid suicidal idea.
>>Well, alrighty then. Could you please clarify your point a bit? It
>>sounds like you're reacting in a completely irrational manner, heavily
>>influenced by emotions. I don't see anything suicidal about promoting
>>Friendliness in regard to the Singularity or trying to ensure that the
>>Singularity is attempted in a reasonable and safe manner.
>
>A Friendly AI designed by a committee? Why aren't more people panicking
>over this? It sounds like the backstory of Ed Merta's "Worst-Case Scenario".

Because everyone else has READ what we've posted and gets the point that NO
ONE endorses the idea of having a committee design any AI.

>And no, I will not design a Friendly AI to please a committee either. I
>will not design a Friendly AI for any purpose other than being
>Friendly. IF such a committee exists I will attempt to convince it that
>its first duty is to disband. It is inherently difficult to convince a
>committee of this, *regardless of whether it is true*, which in itself
>shows that a committee is a bad idea. Committees don't know what they
>don't know.

It seems to be inherently difficult to convince you that you're not god and
shouldn't personally be making all the decisions that will permanently seal
the fate of the human race.

>Here's an idea: Instead of convening the Committee to Fuck Up Friendly
>AI, let's convene the Committee to Decide Whether the CFUFAI Should Exist
>in the First Place, with a clear understanding that the members of
>CDWCSEFP will probably *not* serve on CFUFAI.

Huh?

>Look at the mess we have right here on SL4! You can't agree over whether
>CFUFAI should be a purely advisory organization, a small transhumanist
>organization with real powers (enforced how?), or a government committee;
>you can't agree whether or not military AI development is inherently
>frightening...

You're not reading again. I've never said I wanted it to be a government
committee and, in fact, have strongly stated the opposite. And, I'm sorry,
could you please give me a reference to ANY of my SL4 posts where I say
that military AI development would not be frightening? I'm not stupid,
Eliezer. Either of those would be a horrible, horrible thing.

>The natural situation is probably as good as we're going to get. Random
>people fighting over who gets to give orders to AI projects will simply
>make things much, much worse. If you want to influence the Singularity,
>do the moral thing and devote your entire life to doing nothing else,
>thereby gaining some

Random people? You really don't listen.

James Higgins
