From: Lee Corbin (email@example.com)
Date: Sat Aug 04 2007 - 14:25:57 MDT
> On Sat, Aug 04, 2007 at 08:44:20PM +0300, Mindaugas Indriunas wrote:
>> > How can one make an AI system that modifies and improves itself,
>> > yet does not lose track of the top-level goals with which it was
>> > originally supplied?
> [Mindaugas replied]
>> Improves itself? Don't we have to improve us all human beings as
>> an already-existing highest level AI (the world-wide society) by
>> improving communication between our brains, instead of creating
>> something *else* to improve it*self* rather than to improve us?
And why not try for both? They're hardly mutually exclusive.
> We've tried that; if it's not patently obvious to you by now that
> humans aren't smart enough to run the world, I suggest you spend
> some time with your local newspaper.
Perhaps a bit too much idealism here? That can happen when
one habitually compares the real against the ideal---instead of
(properly) comparing the ideal against the real. Humans
are doing a considerably better job than any other existing
animal I know of would; compare our homicide rates with
those of other primates, or even our safety nets with those
of other animals in general. And besides our amazing
peacefulness, there's always the fact that the other species
are doing hardly anything at all to advance the singularity.
> The point is to create a Really Powerful Optimization Process
> whose goal *is* to improve us. (summarizing there)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT