Re: Unfriendly AI in "Positive Transcension"

From: Eliezer S. Yudkowsky
Date: Sat Feb 14 2004 - 20:13:16 MST

Eliezer S. Yudkowsky wrote:
> Ben wrote:
>>
>> [...] once it has enough intelligence and power, will simply seize all
>> processing power in the universe for itself. I think this
>> “Megalomaniac AI” scenario is mainly a result of rampant
>> anthropomorphism. In this context it’s interesting to return to the
>> notion of attractors. It may be that the Megalomaniac AI is an
>> attractor, in that once such a beast starts rolling, it’s tough to
>> stop. But the question is, how likely is it that a superhuman AI will
>> start out in the basin of attraction of this particular attractor? My
>> intuition is that the basin of attraction of this attractor is not
>> particularly large. Rather, I think that in order to make a
>> Megalomaniac AI, one would probably need to explicitly program an AI
>> with a lust for power. Then, quite likely, this lust for power would
>> manage to persist through repeated self-modifications – “lust for
>> power” being a robustly simple-yet-abstract principle. On the other
>> hand, if one programs one’s initial AI with an initial state aimed at a
>> different attractor meta-ethic, there probably isn’t much chance of
>> convergence into the megalomaniacal condition.
> 1) Are you aware of how important this question is?
> 2) Are you aware of the consequences if you're wrong?


3) How well do you think you understand this problem?

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence
