Re: [extropy-chat] Two draft papers: AI and existential risk; heuristics and biases

From: Mikko Särelä (msarela@cc.hut.fi)
Date: Tue Jun 13 2006 - 11:15:20 MDT


On Tue, 13 Jun 2006, Jef Allbright wrote:
> On 6/12/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> > (4) I'm not sure whether AIs of different motives would be willing to
> > cooperate, even among the very rare Friendly AIs. If it is *possible*
> > to proceed strictly by internal self-improvement, there is a
> > *tremendous* expected utility bonus to doing so, if it avoids having
> > to share power later.
>
> Eliezer, most would agree that there are huge efficiencies to be gained
> over the evolved biological substrate, but I continue to have a problem
> with your idea that a process can recursively self-improve in isolation.
> Doesn't your recent emphasis on perception being the perception of
> difference (which I strongly agree with) highlight the contradiction and
> the enormity of the "if" in "if it is *possible* to proceed strictly by
> internal self-improvement"?

The internal workings of a system are also part of perceived reality. One
can try out another algorithm for indexing data and notice that it works
better, completely internally, while still perceiving a difference. Or one
could prove that a certain algorithm for searching data is more efficient
than another, and self-improve on that basis. The software and the hardware
are part of reality too.
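As a minimal sketch of that kind of purely internal comparison (all names
and data here are hypothetical illustrations, not anything from the papers
under discussion; Python is just a convenient notation), a system could
time two of its own search routines on self-generated data and adopt the
faster one, perceiving a difference without any external input:

    import timeit

    def linear_search(items, target):
        # O(n) scan over a sorted list.
        for i, item in enumerate(items):
            if item == target:
                return i
        return -1

    def binary_search(items, target):
        # O(log n) search over a sorted list.
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def choose_search():
        # Pick the faster routine by timing both on internal data.
        # The data, the workload, and the measurement all live inside
        # the system, yet a difference is perceived and acted on.
        data = list(range(100_000))            # internally generated
        targets = list(range(0, 100_000, 997)) # internal workload
        def bench(fn):
            return timeit.timeit(
                lambda: [fn(data, t) for t in targets], number=5)
        return min((linear_search, binary_search), key=bench)

    if __name__ == "__main__":
        best = choose_search()
        print("Adopting", best.__name__, "as the new search routine.")

Nothing in this loop requires observing the outside world; the perceived
difference is between two of the system's own candidate components.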

-- 
Mikko Särelä	http://thoughtsfromid.blogspot.com/
    "Happiness is not a destination, but a way of travelling." Aristotle 

