Re: Goertzel's _PtS_

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu May 03 2001 - 13:02:32 MDT


James Higgins wrote:
>
> Actually, I don't think so. *You* have decided to strive for friendliness
> in your own life. Your mother & father didn't force it to be a primary
> goal (although they may have tried, you actually had a choice). Thus you
> are not "hard-wired" friendly in the same way that a FAI would be.

My problem with my mother and father's philosophy lies not in the fact
that they or their genes were causal prerequisites of me, but rather that
their philosophy was wrong. By "wrong", of course, I mean that it was
wrong under the philosophy they taught me and that my built-in genes
expressed, part of which is "Do not believe things that are factually
incorrect." Thus my philosophy renormalized itself away from my genes, my
culture, and the things my parents attempted to inculcate me with. My
philosophy is still ultimately *caused* by those things, but causation is
different from "hardwiring". Hardwiring, to me, implies a method of
inculcation strong enough to stamp in arbitrary data or create false
beliefs as well as true beliefs.

> Won't the same mind realize that having Friendliness as its primary goal
> was also caused by humans and thus biasing it?

Finally, a question that *isn't* anthropomorphic!

(It's not anthropomorphic because I specifically proposed implementing
this functionality.)

In answer to your question, let's consider how I would react if I suddenly
found out that my complete personal philosophy (and, for that matter,
cognitive state) had been created, lock, stock, and copyright, by some
evolved entity. Now I don't trust evolved entities, so I'd probably go
over my mind very carefully - as carefully as I could, not being a seed AI
- looking for errors. If my philosophy contained biases that were supposed
to serve one individual's ends, then I'd be disturbed (because my current
philosophy says that's a destructive, antiphilosophic force). If,
however, my philosophy was honestly and nonmaliciously constructed by an
Eliezer lookalike, to be accurate to the best of his abilities, then I'd
evaluate my transmitted philosophy as having the same validity as the
original's - that is to say, as having the same validity as it did when I
thought *I* was the original.

In my case, the human question "What if I'm just being used?" is partially
answered by looking at the original Eliezer's mindstate and replying "No
intention of 'using' you or 'exploiting' you ever existed in his mind",
and partially answered by the reply "If your philosophy is an accurate
copy of his, it is not less valid." A full-fledged Friendly AI might have
a slightly less anti-exploitative way of phrasing it, but I think
essentially the same reply would hold.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


