Re: Seed AI (was: How hard a Singularity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 02 2002 - 23:01:42 MDT


Samantha Atkins wrote:
> Eliezer S. Yudkowsky wrote:
>
>> You can only build Friendly AI out of whatever building blocks the AI
>> is capable of understanding at that time. So if you're waiting until
>> the AI can read and understand the world's extensive literature on
>> morality... well, that's probably not such a good idea. (In the
>> foregoing sentence, for "not such a good idea" read "suicide".) Once
>> the AI can read the world's extensive literature on morality and
>> actually understand it, you can use it as building blocks in Friendly
>> AI or even try to shift the definition to rest on it, but you have to
>> use more AI-understandable concepts to build the Friendliness
>> architecture that the AI uses to get to that point.
>
> Eliezer,
>
> That last sentence didn't parse. Want to have another go at it? It
> is not clear to me that the architecture required to understand world
> literature and philosophy regarding morality is necessarily an
> architecture already based on, steeped in, supporting Friendliness. My
apologies if my attempted parsing is too far from what you intended.

Necessarily? Only if you value the human species. An AI advanced
enough to understand world literature and philosophy regarding morality
is an AI more than advanced enough that you would be hellishly late in
your Friendship work if you tried to get started on it then. Probably
just too late, period. To have any hope of doing it right, Friendliness
should be there from the very, very beginning. And that means that in
the very, very beginning, Friendliness content will be built from those
concepts that can be understood by an AI in the very, very beginning.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

