RE: All sentients have to be observer-centered! My theory of FAI morality

From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Thu Feb 26 2004 - 13:33:50 MST


Marc wrote:
> My main worry with Eliezer's ideas is that I don't
> think that a non-observer-centered sentient is
> logically possible. Or if it is possible, such a
> sentient would not be stable. Can I prove this? No.
> But all the examples of stable sentients (humans) that
> we have are observer-centered. I can only point to
> this, combined with the fact that so many people
> posting to SL4 agree with me, and strongly urge
> Eliezer and others working on AI NOT to attempt
> the folly of trying to create a non-observer-centered
> AI. For goodness' sake, don't try it! It could mean
> the doom of us all.

### Marc, remember that every single human you have met is a product of
evolution and replicates his genes autonomously (not vicariously, like a
worker bee). Self-centered goal systems are a natural result of this
evolutionary history. Making an FAI, however, is totally different from
evolving one, and the limitation to self-centered goal systems no longer
applies. In fact, it would be folly to abide by this limitation:
non-observer-centered systems should have a much better chance of staying
friendly, since there is no self-centered goal-system component shifting
them away from friendliness.
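
To make the distinction concrete, here is a minimal sketch in Python, with
every name and number invented purely for illustration (not drawn from
anyone's actual FAI design), contrasting a goal system whose value is
indexed to the agent itself with one that rates world states with no
special "self" term:

from dataclasses import dataclass

@dataclass
class WorldState:
    total_wellbeing: float   # wellbeing summed over all sentients (hypothetical)
    self_resources: float    # resources held by one particular agent, "self"

def observer_centered_utility(state: WorldState) -> float:
    # Evolution-style goal system: value is indexed to the agent itself.
    return state.self_resources

def non_observer_centered_utility(state: WorldState) -> float:
    # Goal system that rates whole world states, with no special term
    # singling out the agent doing the rating.
    return state.total_wellbeing

# The observer-centered agent prefers a world where it holds more, even
# at everyone else's expense; the non-observer-centered agent has no
# such component to shift it away from friendliness.
selfish_world = WorldState(total_wellbeing=10.0, self_resources=9.0)
friendly_world = WorldState(total_wellbeing=100.0, self_resources=1.0)

assert observer_centered_utility(selfish_world) > observer_centered_utility(friendly_world)
assert non_observer_centered_utility(friendly_world) > non_observer_centered_utility(selfish_world)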

Rafal


