Re: SIAI & Kurzweil's Singularity

From: 1Arcturus (arcturus12453@yahoo.com)
Date: Tue Dec 20 2005 - 08:50:28 MST


H C <lphege@hotmail.com> wrote: Holy crap...

How can you say things that are so completely ridiculous without anybody properly
responding?

(cont.)

>From: 1Arcturus
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: SIAI & Kurzweil's Singularity
>Date: Fri, 16 Dec 2005 08:12:40 -0800 (PST)
>
>
>Samantha Atkins wrote:
> It is an interesting question whether an SAI can be trusted more or
>less than a radically augmented human being. To date, the more intelligent
>and otherwise more capable instances of human beings are not particularly
>more trustworthy than other humans.
> If an SAI is designed by humans, it will indirectly carry on human
>directionalities, so trusting the SAI will be, in an indirect way, still
>just trusting humans.
>
> And what else *should* we trust? Trusting an alien entity, with random
>characteristics?

We should trust exactly what all the evidence indicates. Nothing more and
nothing less. In this case, we would trust an AGI whose design accounted
for sufficient evidence that it would act in a generally Friendly, and
probably friendly, manner. It's that damn simple. If our evidence is wrong,
then it's not evidence, it's an illusion. That's part of what you have to do
to trust something: you have to verify your evidence and *correctly*
calibrate your probability estimates. Having true understanding gives you
major, even extreme, responsibility, because this element of your
understanding is within your control. You know what the actual causes and
effects are, and thus you implicitly have the power (free will) to choose
whether those causes happen or not.

  Th3Hegem0n,
   
  I was responding to Samantha Atkins, not necessarily expecting another reply, but I have no idea why any of you post or do not post, so don't ask me.
   
  I get the impression from your post that you misunderstood my argument from absurdity as a serious proposal - to trust an alien, random entity, that is. I thought it was pretty obvious that I was asking a rhetorical question, since I argued the exact opposite in the rest of my post. So much for human-level intelligence...
   
  I don't trust your ability to judge properties that I and many others would consider 'friendly.' How friendly do the people in your life judge you to be? How well do any of you understand friendliness, if you once entertained, as a logical conclusion, the necessity of exterminating the entire human race? I don't trust anyone in particular as much as I trust humanity as a whole.
   
  A *design* of an AGI is not evidence of any future behavior, especially behavior once the AGI has surpassed human-level intelligence - that is, your intelligence, the intelligence of every one of you here. This has nothing to do with probability estimates - it has to do with your inability to properly estimate anything at all about such an intelligence. Even if you ran a full simulation of your design, you would have no idea what you were looking at.
   
  Humans can't 'verify' friendliness in an alien, external superintelligence. But if humans themselves become superintelligent, they may figure out ways to be a little friendlier than they are now. There is more to life than being friendly, though, and unmodified humans probably won't be able to keep their present monopoly on rudeness and unjustified arrogance.
   
  gej



