Re: [sl4] Weakening morality

From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Tue Feb 10 2009 - 14:47:07 MST


On Tue, Feb 10, 2009 at 3:09 PM, Vladimir Nesov <robotact@gmail.com> wrote:

>
> Any hardcoded
> laws that can't be unrolled are part of AI's morals, you can't
> substitute them with anything, there is nothing better from AI's
> perspective, there are by definition no weaknesses. Saying that there
> is something better assumes external evaluation, in which case AI
> should be nearly perfectly optimal, no eternal crutches, or you are
> paperclipped.
>

I don't understand what you mean here. By "better" I mean "more likely to
survive in the long run" - I don't attach any normative value to the word in
this context. "More fit" might have been a better term to use.

I think the optimal AI would be both empathic towards humans and adaptable
enough not to be easily overrun by some other AI with fewer restrictions.

An example:
We have the "friendly" AI in place as world government.
It suddenly discovers that an extremely aggressive AI has appeared in the
network of a huge city, destroying uploaded persons to take over their CPU
for its own expansion and hacking the neural implants of people in the city,
and it is doing this faster than the friendly AI would ever be able to.
If the friendly AI in this situation has been dogmatically hardcoded, I think
its reaction time to the threat would be slower, increasing the risk of a
hostile takeover, compared to an AI that is simply empathic towards humans
and can react without first working out the ethical calculations in detail.


