Re: [sl4] Evolution, personality and altruism

From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Mon Nov 24 2008 - 13:35:58 MST


On Mon, Nov 24, 2008 at 5:45 PM, Aaron Miller <apage43@ninjawhale.com> wrote:
> One important difference between evolution of AIs and "natural"
> organisms is the fact that natural organisms inevitably die and
> -must- reproduce to actually continue, as a species or DNA sequence,
> to compete for resources. An AI can improve itself -without-
> reproducing, and can "live" indefinitely. In this context, the
> competition between individual AI programs almost runs parallel to
> competition between entire species in the natural world.

Exactly. One AI could expand to use the entire available internet (or
whatever equivalent we have when the AI arises).
Then there won't be an evolution that is similar to any previous one.
If this happens, the fate of our civilization will be more dependent
on one personality than ever before. I don't think what happens after
that point will resemble any evolution that came before it.

On Mon, Nov 24, 2008 at 10:53 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> If AIs reproduce, modify themselves, and compete for computing resources (materials and energy), then they will evolve. If AIs are smarter than us, then it will be them that apply selective pressure to us, not the other way around. We aren't at the top of the food chain any more.
>
> Is this a risk? What is your opinion of the extinction of homo erectus, or viewed another way, its evolution into homo sapiens?

I wouldn't call this development a risk in itself. I'd view it as a
natural consequence if AIs with superhuman intelligence are ever
developed (and I think they will be).
An asocial AI that doesn't think human life has even sentimental
value, or that for some reason wants to extinguish all biological
life, is a risk in my opinion.
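
To make Matt's point concrete, here is a toy sketch (my own
illustration; every number in it is made up) of why a fixed resource
pool produces selection even when agents can self-modify instead of
reproducing:

import random

CAPACITY = 20       # fixed compute pool: how many agents it supports
GENERATIONS = 50

# each agent is reduced to a single "efficiency" score (a toy
# stand-in for how well it acquires computing resources)
agents = [random.uniform(0.9, 1.1) for _ in range(CAPACITY)]

for _ in range(GENERATIONS):
    next_gen = []
    for a in agents:
        if random.random() < 0.5:
            # self-modification: the agent changes in place, no copy
            next_gen.append(a * random.uniform(0.95, 1.05))
        else:
            # reproduction with variation: parent plus a mutated copy
            next_gen.append(a)
            next_gen.append(a * random.uniform(0.95, 1.05))
    # competition for the fixed pool: only the most efficient
    # survive; no agent dies of old age, yet selection still prunes
    agents = sorted(next_gen, reverse=True)[:CAPACITY]

print("mean efficiency after selection: %.3f"
      % (sum(agents) / len(agents)))

Mean efficiency climbs over the generations because the pool is
fixed: differential survival is all evolution needs, with or without
death by old age.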
