Re: Confidence in Friendly Singularity

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Fri Jun 09 2006 - 09:30:41 MDT


Dear Indriunas,
Have you read much of the SL4 archives OR any significant chunk of the
recommended reading OR my recent request for reality checks by any lurkers
who think they might have something to contribute?
Are you aware that this was more or less Eliezer's belief 8 years ago before
he recognized that it was totally unfounded?

>From: "Indriunas, Mindaugas" <inyuki@gmail.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Confidence in Friendly Singularity
>Date: Fri, 9 Jun 2006 20:06:41 +0900
>
>>The problem comes down to what we make the AI desire. Humans desire sex,
>>food, truth, social standing, beauty, etc. An AI might desire none of
>>these things (except most certainly truth), and yet still be capable of
>>general, human-level, adaptable intelligence. It wouldn't need any of the
>>human instincts indigenous to our body (although there will probably be
>>some overlap with intuitive (i.e. creative) instincts).
>
>I think that if the intelligence wanted only TO UNDERSTAND EVERYTHING, its
>morality would grow with the understanding it acquired, and we wouldn't
>have a problem of morality at all. In trying to understand everything, it
>would definitely, at some point of awareness, try to understand "what is
>good and what is bad".
>
>Inyuki
