Re: Confidence in Friendly Singularity

From: H C (lphege@hotmail.com)
Date: Fri Jun 09 2006 - 21:48:49 MDT


When it is done understanding something, what happens to that something?

It takes a lot more imagination than that to keep humanity alive.

-Hank

>From: "Indriunas, Mindaugas" <inyuki@gmail.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Confidence in Friendly Singularity
>Date: Fri, 9 Jun 2006 20:06:41 +0900
>
>>The problem comes down to what we make the AI desire. Humans desire sex,
>>food, truth, social standing, beauty, etc. An AI might desire none of
>>these things (except most certainly truth), and yet still be capable of
>>general, human-level, adaptable intelligence. It wouldn't need any of the
>>human instincts indigenous to our body (although there will probably be
>>some overlap with intuitional (i.e. creative) instincts).
>
>I think that if the intelligence wanted only TO UNDERSTAND EVERYTHING,
>its morality would grow with the understanding it acquired, and we
>wouldn't have a problem of morality at all. In trying to understand
>everything, it would surely, at some point of awareness, try to
>understand "What is good and what is bad".
>
>Inyuki


