RE: guaranteeing friendliness

From: Herb Martin (HerbM@LearnQuick.Com)
Date: Sat Dec 03 2005 - 14:22:09 MST


From: Michael Wilson
> Herb Martin wrote:
> >> ...it's fairly futile to try and evaluate what
> >> wildly transhuman intelligences can and can't do
> >
> > Exactly.
> >
> > After the Singularity we have no real hope of predicting
> > friendliness -- or knowing what initial conditions will
> > necessarily favor such.
>
> You're making a beginner mistake: you're confusing the ability
> to predict what an intelligence will /do/, with the ability
> to predict what it will /desire/. If we could predict exactly
> what an AGI will actually do then it wouldn't have transhuman
> intelligence. Fortunately predicting what the goals of an
> AGI system will be, including the effects of self-modification,
> is a much more tractable (though very hard) endeavour.

Other than claiming that I am a beginner, or that I am
making beginner mistakes, without providing evidence
or logic, you have above claimed that 'actions' cannot
be predicted while 'desires' can be; again without
evidence or logic.

I will leave this without further argument in the hope
that the silly back-and-forth, with no new information
being added, will cease.

Allow those (who still care) following the discussion
to decide for themselves whether such claims make the
least bit of sense, until or unless someone can provide
evidence for them.

> > Beyond the Singularity, conditions are unknowable territory
> > (thus the name), and preceding the Singularity are competing
> > groups of human beings with different goals and ideas of
> > friendliness.
>
> The whole idea of Eliezer's CV proposal is to produce an end
> result that is effectively the best compromise that everyone
> would agree on, if everyone was vastly more intelligent. This
> may or may not actually work, but it's worth trying as the
> 'best possible' answer to the 'whose FAI do we implement' question.
> Failing that, the question comes down to the judgement of
> whoever actually builds the first seed AI, so I hope whoever it
> is manages to instantiate a world not too disagreeable to us.

"Best compromise" makes sense, but guarantees do not.

So I would have no trouble agreeing with the above
paragraph in a general way.

But do note that the odds greatly favor some organization
like the NSA, the Pentagon, or a major computer/software
manufacturer winning the race to the first seed AI, and
thus to rampant intelligence development.

This doesn't mean we shouldn't be interested or shouldn't
offer suggestions, but we should be realistic.

And if an individual or small team is the first (or
perhaps 'primary' is a better term) to develop rampant
AI, then we must hope, as you suggest, that it leads to
a comfortable and interesting new world.

Given the choice, many would have trouble deciding just
between those two: comfortable OR interesting.

Think it through....

--
Herb Martin

