Re: Donate Today and Tomorrow

From: Slawomir Paliwoda (velvethum@hotmail.com)
Date: Sun Oct 24 2004 - 07:16:50 MDT


>> Eliezer, I think your involvement in this project has caused you to lose
>> a bit of the sense of objectivity necessary to evaluate true options
>> included in your thought experiment, and I infer that from your question:
>> "Do people care more about losing five pounds than the survival of the
>> human species?" What this question implies is the assumption that
>> donating to SIAI equates to preventing existential risks
>> from happening. Your question has an obvious answer. Of course people
>> care more about the survival of the human species than about losing five
>> pounds, but how do we know that SIAI, despite its intentions, is on a
>> straight path to implementing humanity-saving technology?
>
> Do I have to point out that people spend a heck of a lot more than ten
> dollars trying to lose five pounds, based on schemes with a heck of a lot
> less rational justification than SIAI has offered? My puzzle still
> stands.

True, SIAI has offered more rational justification, but at the end of the
day, people will either support the project or not. If your justification
sounds about 50% right to a donor, does that mean you should expect her to
part with her $5? I suspect that convincing someone halfway usually buys $0
in support. The point is that a $10 fat-reducing pill and a $10 donation to
an uncertain cause are not that different from a $10 lottery ticket. If you
replace "donation" with "lottery ticket", the choices in the puzzle might
become clearer. Besides, this is not a typical lottery ticket, because it
might win you nothing less than paradise, but it might also win you hell,
or a quick death if you're lucky. There is more at stake in supporting SIAI
than many people realize. We have everything to win, but also everything to
lose. You've mentioned a monument, but someday, somewhere, there might be a
wall of shame with our names on it if things don't go as planned.

>> What makes your
>> organization different from, say, an organization that also claims to
>> save the world, but by different means, like prayer, for instance?
>
> Rationality. Prayer doesn't work given our universe's laws of physics,
> and that makes it an invalid solution no matter what the morality.

Okay, I'm all for rationality, but even rational, good people make mistakes.
Why should I trust that you won't make a mistake that snowballs into UFAI?

>> And
>> no, I'm not trying to imply anything about cults here, but I'm trying to
>> point out the common factor between the two organizations which is that,
>> assuming it's next to impossible to truly understand CFAI and LOGI,
>> commitment to these projects requires faith in implementation and belief
>> that the means will lead to the intended end. One cannot aspire to
>> rationalism and rely on faith at the same time.
>
> Bayesians may and must look at what other Bayesians think and account it
> as evidence.

Well, I'm sorry, but that's unacceptable. The views of self-proclaimed
rationalists should not count as evidence. That seems like cheating. No,
that is cheating. The truth can only be verified against reality, not by
minds. You can't perform experiments inside a mind and say, "Well, I don't
see anything wrong with my thesis, so it must be correct." Besides, how do
we even know that a self-proclaimed rationalist is a true rationalist in
the first place?
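
To make concrete what I take "accounting another Bayesian's view as
evidence" to mean, here is a toy update with numbers I am simply making up
(nothing below comes from SIAI): the endorsement only shifts my belief to
the extent that the endorser is better calibrated than chance, which is
exactly the part I cannot verify.

    # Toy Bayesian update that treats another person's stated belief as
    # evidence for a hypothesis. All numbers are invented for illustration.

    def posterior(prior, p_report_if_true, p_report_if_false):
        """Probability of the hypothesis after hearing the endorsement."""
        joint_true = prior * p_report_if_true
        joint_false = (1 - prior) * p_report_if_false
        return joint_true / (joint_true + joint_false)

    prior = 0.5  # my prior that the plan avoids UFAI risk (made up)

    # A well-calibrated rationalist endorses the plan far more often when
    # it is actually sound, so the endorsement moves my belief a lot.
    print(posterior(prior, 0.9, 0.3))   # ~0.75

    # If I cannot verify that calibration, the two likelihoods are nearly
    # equal and the endorsement carries almost no weight.
    print(posterior(prior, 0.5, 0.45))  # ~0.53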

>> Comprehension is indeed a requisite for cooperation, and as long as you
>> are unable to find a way to overcome
>> the "comprehension" requirement, I don't think you should expect to
>> find donors who don't understand exactly what you are doing and how.
>
> Which existential risks reality throws at you is completely independent of
> your ability to understand them; you have no right to expect the two
> variables to correlate.

I never expected them to correlate. "Comprehension" here does not refer to
existential risks, which are easily understandable, but rather to the
technical means designed to avoid those risks. Currently I do not fully
comprehend those means, and I remain unconvinced that they help to avoid
the UFAI risk.

> I've tried very hard to explain what we're doing and how, but I also have
> to do the actual work, and I'm becoming increasingly nervous about time.
> No matter how much I write, it will always be possible for people to
> demand more. At some point I have to say, "I've written something but not
> everything, and no matter what else I write, it will still be 'something
> but not everything'."

I think the vast majority *gets* the "why" part of what you are doing, and,
as your casual reader, I'm confident you've written enough. (Mr. Bostrom's
paper on existential risks is a classic of the genre, of course.) It's the
"how" part that hasn't gotten much coverage. "How," as in, "How is your
project safe from UFAI risk?" I suspect that, among all the people on this
list, there is only one person besides Eliezer who might comprehend the
answer to that question, if the answer even exists. The rest can only
justify their support for SIAI by having faith in a desirable outcome.

> And if it's still hard to understand, what the hell am I supposed to do?
> Turn a little dial to decrease the intrinsic difficulty of the problem?
> Flip the switch on the back of my head from "bad explainer" to "good
> explainer"? I do the best I can. People can always generate more and
> more demands, and they do, because it feels reasonable and they don't
> realize the result of following that policy is an inevitable loss.

Your frustration is understandable. Nobody says it's your fault that your
explanations are not convincing enough for some. Part of the blame should
be assigned to the inherently complex nature of the problem, which is
extremely difficult both to comprehend and to explain.

>> Other questions: Why would the SIAI team need so much money to continue
>> building FAI if the difficulty of creating it does not lie in hardware?
>> What are the real costs?
>
> Extremely smart programmers.

This is what I don't understand. If these singularitarian programmers were
true believers in the purpose of FAI, fully aware of the stakes involved,
why would they object to working without compensation? What would a true
singularitarian choose: living in poverty while saving the world, or
waiting until conditions allow him or her to work on saving humanity
without living in poverty?

If 10 brilliant Seed AI programmers moved into a cheap house in Georgia and
worked passionately on FAI despite poor living conditions, it would go a
long way toward persuading others of the sincerity of the project leaders,
as well as of the strength of their conviction in the project. Posting the
programmers' biographies wouldn't hurt either.

>> Why has the pursuit of fame now become a just reason to support SIAI? Are
>> you suggesting that SIAI has acknowledged that the ends justify the means?
>
> I think better of someone who lusts after fame and contributes a hundred
> bucks than a pure altruist who never gets around to it. I don't think
> that counts as saying that the end justifies the means. The other way
> around: By their fruits ye shall know them.

Then how would you respond to a slogan like "FAI cures cancer"? I remember
you protesting against these kinds of tactics a few years ago, when I
suggested them as one way to promote SIAI's work to a wider audience.

>> Increased donations give you greater power to influence the world. Do you
>> see anything wrong in entrusting a small group of people with the fate of
>> entire human race?
>
> I see something wrong with giving a small group of people the ability to
> command the rest of the human race, hence the collective volition model.
> As for *entrusting* the future - not to exercise humanity's decisions, but
> to make sure humanity exercises them - I will use whichever strategy seems
> to offer the greatest probability of success, including blinding the
> programmers as to the extrapolated future, keeping the programmers
> isolated from a Last Judge who can only return one bit of information,
> etc. Or not, if I think of a better way.
>
> The alternative appears to be entrusting small groups of people who aren't
> even trying to solve the problem with the fate of the entire human race.
> That looks to me like a guaranteed loss and I'm not willing to accept that
> ending.

I've never understood why doing nothing "guarantees" a loss. Last
time I checked, grey goo was unrealistic. What is the nature of the imminent
threat SIAI tries to avoid? Is it not UFAI itself, i.e., a threat that might
emerge from the work on FAI?

>> Do we have the right to end the world as we know it without their
>> approval?
>
> There are no rights, only responsibilities. I'll turn the question over
> to a collective volition if I can, but even then the moral dilemma
> remains, it's just not me who has to decide it.

People have rights. I don't understand how you can plan to honor these
rights, but only after the Singularity.

> The question is not whether the world "as we know it" ends, for it always
> does, generation after generation, and each new generation acts surprised
> by this. The question is what comes after.

The consequences for humanity of the emergence of FAI would be infinitely
more profound than those caused by new generations. Each generation has had
only temporary power to steer humanity in different directions. In
contrast, FAI will gain absolute and eternal power over humanity. Once FAI
happens, no future generation will be capable of undoing it.

SIAI is attempting to create a God, which I do not object to. The scary
part, though, is that nobody knows whether it is going to be a benevolent
God, and as long as its benevolence can't be proven, supporting the
creation of a God could lead to an eternal loss of humanity's potential.
Choose wisely, for this is my universe and my potential too.

Finally, let me share with you all my idea for increasing donations to
SIAI. How about giving SIAI enthusiasts an opportunity to pay for essays
and papers published by the institute? Even though I'm cautious about
supporting SIAI by donating money, I would definitely see myself spending
$15 on Eliezer's next versions of CFAI (Creating Humane Artificial
Intelligence?) or LOGI. At least some of us would feel like we *bought*
something, which would alleviate the sense of being a bit gullible for
donating. Obviously, SIAI publications would give more sympathetic
enthusiasts, who clearly handle uncertainty better than I do, even more
opportunities to donate, or even to go beyond the minimum price of a
publication as a sign of stronger support for the cause.

Slawomir


