Re: SIAI & Kurzweil's Singularity

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Fri Dec 16 2005 - 14:35:22 MST


For all practical purposes, given a competitive environment, recursive
self-improvement is not an acceleration of natural selection, but something
much closer to an immediate jump to natural selection's equilibrium
end-state. That end-state is not likely to be conscious in any sense that we
care about.

>From: 1Arcturus <arcturus12453@yahoo.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: SIAI & Kurzweil's Singularity
>Date: Fri, 16 Dec 2005 08:19:19 -0800 (PST)
>
>Michael Vassar <michaelvassar@hotmail.com> wrote: The chance of survival
>in a scenario of economic or other forms of competition between agents
>capable of recursive self-improvement seems close to zero though. The only
>way I can imagine it working out is if the agents are spread over a large
>region of space and protected by light speed lags.
>
> If recursive self-improvement is just an acceleration of
>natural-selection style evolution, I wouldn't expect it to be any more
>likely to lead to existential destruction than current evolution. Animals
>got more dangerous, but they also got better able to defend themselves. A
>few people get richer, then poorer people figure out how to get just as
>rich. It's a continual ratcheting up.
>
> gej
>
>
>Michael Vassar <michaelvassar@hotmail.com> wrote:
> I would assert that, to date, more intelligent and otherwise more capable
>instances of human beings *are* particularly more trustworthy than other
>humans, at least in proportion to how much power they hold, but the
>relationship is only moderately strong and may be specific to Western
>culture.
>
>Unfortunately, this doesn't tell us much about radically augmented humans
>in any event. The differences among humans are too small to extrapolate to
>the difference between humans and radically augmented humans. Also, power
>selects for ambition and recklessness almost as much as for intelligence,
>both today and in a regime of human recursive self-improvement.
>
>My guess is that a human who understood existential risks well prior to
>recursive self-improvement and who had a substantial head-start on other
>humans could manage a slow take-off safely, but I would not want to risk it. The
>chance of survival in a scenario of economic or other forms of competition
>between agents capable of recursive self-improvement seems close to zero
>though. The only way I can imagine it working out is if the agents are
>spread over a large region of space and protected by light speed lags.
>


