Re: SIAI & Kurzweil's Singularity

From: H C (lphege@hotmail.com)
Date: Mon Dec 19 2005 - 17:10:18 MST


Holy crap...

How can you say things that are so completely ridiculous and have nobody
properly respond?

(cont.)

>From: 1Arcturus <arcturus12453@yahoo.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: SIAI & Kurzweil's Singularity
>Date: Fri, 16 Dec 2005 08:12:40 -0800 (PST)
>
>
>Samantha Atkins <sjatkins@gmail.com> wrote:
> It is an interesting question whether an SAI can be trusted more or
>less than a radically augmented human being. To date the more intelligent
>and otherwise more capable instances of human beings are not particularly
>more trustworthy than other humans.
> If an SAI is designed by humans, it will indirectly carry on human
>directionalities, so trusting the SAI will be, in an indirect way, still
>just trusting humans.
>
> And what else *should* we trust? An alien entity, with random
>characteristics?

We should trust exactly what all the evidence indicates. Nothing more and
nothing less. In this case, we would trust an AGI whose design accounted
for sufficient evidence that it would act in a generally Friendly, and
probably friendly, manner. It's that damn simple. If our evidence is wrong,
then it's not evidence, it's an illusion. That's part of what you have to do
to trust something: you have to verify your evidence and *correctly*
calibrate your probability estimates. True understanding carries major,
extreme responsibility, because this element of your understanding is within
your control. You know what the actual causes and effects are, and thus you
implicitly have the power (free will) to choose whether those causes happen
or not.

(cont.)

>
> Humans are the most untrustworthy species on earth, except for every
>other species. I believe humans should trust themselves - not any one
>person, or any one group, but all of us together, putting our heads
>together the best we can. Humanity trusting humanity, as we take the step
>forward of augmenting ourselves and merging with technology. We'll figure
>it all out - muddling through as always.
>
> If there were truly some evil core in humans that made us all
>untrustworthy, then no one could be trusted to design an SAI, and no one
>could trust any SAI designed by a human, and humans should just throw in
>the towel now. Join the voluntary human extinction movement. But this would
>be wrong - humans have made general, overall progress over the years,
>although slowly. We just need more time, and better tools, and we will
>overcome the problems of the past.
>
> gej
>

The only logical Singularity path is the exact mission statement of the
SIAI: create a *verifiably* Friendly intelligent entity. Nothing less is
sufficient, because anything less severely endangers the survival of the
human species.

Unless you play Russian Roulette on a regular basis, you should feel the
same as they do.

Th3Hegem0n
http://smarterhippie.blogspot.com

ps. do as Bush advised: avoid a defeatist view in life. Elimination of the
risk of death is possible, but only if you demand nothing less, just as the
elimination of terrorism is possible, but only if you demand nothing less.

pps. sorry for the political reference ^_^
