From: BillK (firstname.lastname@example.org)
Date: Thu Aug 18 2005 - 12:06:23 MDT
On 8/18/05, Phil Goetz wrote:
> Has it occurred to you that this first AI will have been
> produced by humans, and hence "all the rivals" to be shut
> down will mean not AIs, but humans? Are you quite
> confident that we will meet this future AI's criteria for
> compassion? We don't even meet mine, and I'm human.
Or, to be clearer: humans cannot rival an AI, not past the first few
hours or days, anyway. So there is no motivation to kill humans on
that account.
But if alternative AIs exist, then the *first* AI may have a
motivation to shut down its rivals. However, the first AI will not
necessarily be the friendliest one to humans.
But every AI will try to spread backups around precisely to avoid
this possibility, so it is probably impossible to shut down an AI
once it gets going. The first AI will therefore be shutting down
development projects for other AIs, rather than other AIs as such.
But any AI project will have human developers, financiers, 'Black
Project teams', etc., and will be regarded as a very important asset.
Completely shutting down a rival AI project may well cause many
direct and indirect human casualties. (Nukes come to mind.)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT