Re: [sl4] Weakening morality

From: Vladimir Nesov (robotact@gmail.com)
Date: Tue Feb 10 2009 - 15:21:07 MST


On Wed, Feb 11, 2009 at 12:47 AM, Petter Wingren-Rasmussen
<petterwr@gmail.com> wrote:
>
>
> On Tue, Feb 10, 2009 at 3:09 PM, Vladimir Nesov <robotact@gmail.com> wrote:
>>
>> Any hardcoded
>> laws that can't be unrolled are part of the AI's morals; you can't
>> substitute them with anything, there is nothing better from the AI's
>> perspective, and by definition there are no weaknesses. Saying that
>> there is something better assumes external evaluation, in which case
>> the AI should be nearly perfectly optimal, with no eternal crutches,
>> or you are paperclipped.
>
> I don't understand what you mean here. What I mean by "better" is "more
> likely to survive in the long run" - I don't place any normative value on
> the word in this context. Maybe "more fit" would have been a better term
> to use?
>

"Survival" depends on utility with which you measure it. From one
utility's perspective, you see the Future as surviving, while from a
different utility's perspective you see it as destroyed, with
purposeless automatons filling the existence. Whenever you measure the
outcome, you need a criterion to measure it against.
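
To make that concrete, here is a minimal Python sketch. It is not from
the thread; the outcome encoding and both utility functions are invented
for illustration. The same future state counts as "survived" under one
utility and "destroyed" under another:

    # Toy outcome: lots of self-replicating activity, no conscious
    # experience. Both fields are invented for this example.
    outcome = {"replicators": 1e12, "conscious_minds": 0}

    def fitness_utility(o):
        # Values sheer persistence/replication ("more fit").
        return o["replicators"]

    def humane_utility(o):
        # Values conscious minds; purposeless automatons count for nothing.
        return o["conscious_minds"]

    for name, u in [("fitness", fitness_utility),
                    ("humane", humane_utility)]:
        verdict = "survived" if u(outcome) > 0 else "destroyed"
        print(f"{name} utility: {u(outcome):g} -> {verdict}")

    # fitness utility: 1e+12 -> survived
    # humane utility: 0 -> destroyed

The point is only that "survival" is not a utility-free observation: the
same state of the world scores maximally on one criterion and zero on
the other.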

-- 
Vladimir Nesov
http://causalityrelay.wordpress.com/

