From: Vladimir Nesov (email@example.com)
Date: Sat Jun 28 2008 - 16:33:56 MDT
On Sat, Jun 28, 2008 at 9:39 PM, John K Clark <firstname.lastname@example.org> wrote:
> From: "Lee Corbin" email@example.com>
>> my calculator seems to display a tremendous
>> urge to finish any computation I key into it,
>> but doesn't seem to show the least bit of reluctance
>> toward being turned off or even thrown away.
>> Why do most people here never seem to
>> entertain the idea that an AI might be rather similar?
> But most people around here do entertain that idea, it's just me who
> thinks it's bullshit. Most people on this supposedly "shocking" list
> agree with Mr. Joe Averageman on this last stand of vitalism; that only
> humans, or at least biological beings, can have the secret sauce.
Huh? Disregarding our goals, taking its own road, and along the way
taking over our resources looks like the most natural path for an
AI with an arbitrary goal. Hitting the narrow target of an AI that
furthers *our* goals is probably the hard part. Arguing that it isn't
a realistic target is understandable, but the assumption that it engulfs
everything but humans isn't worth its weight in straw.
-- Vladimir Nesov firstname.lastname@example.org http://causalityrelay.wordpress.com/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT