From: Marc Geddes (firstname.lastname@example.org)
Date: Fri Feb 27 2004 - 00:12:56 MST
--- Rafal Smigrodzki <email@example.com> wrote:
> > My main worry with Eliezer's ideas is that I don't
> > think that a non observer-centered sentient is
> > logically possible. Or if it's possible, such a
> > sentient would not be stable. Can I prove this?
> > No. But all the examples of stable sentients (humans)
> > we have are observer centered. I can only point
> > to this, combined with the fact that so many people
> > posting to sl4 agree with me. I can only strongly
> > urge Eliezer and others working on AI NOT to
> > attempt the folly of trying to create a non
> > observer-centered AI. For goodness sake don't try
> > it! It could be the doom of us all.
> ### Marc, remember that every single human you have met is a product of
> evolution, and replicates his genes autonomously (not vicariously like a
> worker bee). Self-centered goal systems are a natural result of this
> evolutionary history. Making an FAI is, however, totally different from
> evolving it - and the limitation to self-centered goal systems no longer
> applies. In fact, it would be a folly to abide by this limitation, and
> non-observer-centered systems should have a much better chance of staying
> friendly (since there is no self-centered goal-system component shifting
> them away from friendliness).
I don't regard the evolutionary arguments as very
convincing. They're based on observation, not
experiment. Besides, it's only very recently in
evolutionary history that the first sentients (humans)
appeared. It's the class of sentients that is
relevant to FAI work. Evolutionary observations about
non-sentients are not likely to say much of relevance.
In any event, I don't regard non observer-based
sentients as even desirable (see my other replies).
If you strip out all observer-centered goals, you're
left with normative altruism. All sentients would
converge on this, and all individual uniqueness would
be stripped away. You'd be left with bland
uniformity. An empty husk. Universal Morality is
probably just a very general set of constraints, and
FAIs following this alone would be quite unable to
distinguish between the myriad of interesting personal
goals that are consistent with it. Everything that
didn't hurt others (assuming that Universal Morality
is volition based) would be equally 'Good' to such an
FAI. There would be no possibility of anything
uniquely human or personal. For instance, the two
outcomes 'Rafal kills himself' and 'Rafal doesn't kill
himself' would be designated as morally equivalent
under Volitional Morality.
In short, totally non-observer-centered FAIs just
wouldn't make interesting drinking buddies.
Now, let's get back to bashing Dr J's socialism ;)
Please visit my web-site at: http://www.prometheuscrack.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT