From: Ben Goertzel (firstname.lastname@example.org)
Date: Tue Aug 16 2005 - 22:39:47 MDT
> > Bottom Line:
> > It is all about there being a threshold level of understanding of
> > motivation systems, coupled with the ability to flip switches in one's
> > own system, above which the mind will behave very, very differently
> > than your standard model human.
Well, let's suppose this is correct.
I.e., let's suppose for the sake of argument that once an AI gets smart enough
(whatever that means, exactly), it becomes sufficiently "one with the
universe" or whatever that it becomes a good guy -- an "AI Buddha," as I once
put it.
Still, then, there is one minor problem: How do you know the AI won't
accidentally or maliciously annihilate us measly little humans at some
point in its terrible twos or teenage years, along the path to
enlightenment?
I see it as plausible (though by no means demonstrated) that a sufficiently
intelligent system may necessarily become creative rather than
destructive -- a pro-pattern force in the cosmos. But I don't see how this
provides any real security for us humans, given that we are patterns both
highly fragile and highly particular...
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT