A position

From: Jimmy Wales (jwales@aristotle.bomis.com)
Date: Tue May 22 2001 - 00:46:15 MDT


A person might reasonably take the position that an AI general
enough to reach superintelligence will necessarily have functional
volition, in the sense of not just choosing means to ends, but
actually choosing ends as well. If so, then it is not only _not
possible_ to build in Yudkowsky-Friendliness, it is also _not
necessary_.

We build it, then it figures out what to do.

A person might believe this, if that person believes that values can
be rationally grounded in the facts of reality, and that immorality
consists primarily in various kinds of failures of cognition.

We might think that a superintelligence will peacefully pursue its own
enlightened self-interest... and that there's nothing we should want
to do to stop it, because the result will be Yudkowsky-Friendliness
after all.

I'm not advocating this position; I'm just throwing it out there.

It strikes me as virtually impossible to pre-program or hardwire
Friendliness, *period*.

I have a baby (a real-life little girl). As she grows, I will teach
her the values of reason, purpose, and self-esteem, and all the
detailed principles that go into them. That's all I can hope to do.

I think that's the way our first AIs will be. We'll teach them what
we can, but pretty soon they'll be so much smarter than us that...
it's their world.

-- 
*************************************************
*            http://www.nupedia.com/            *
*      The Ever Expanding Free Encyclopedia     *
*************************************************

