From: Ben Goertzel (email@example.com)
Date: Sun Feb 15 2004 - 10:26:54 MST
> Eliezer S. Yudkowsky wrote:
> > Ben wrote:
> > >
> >> [...] once it has enough intelligence and power, will simply seize all
> >> processing power in the universe for itself. I think this
> >> “Megalomaniac AI” scenario is mainly a result of rampant
> >> anthropomorphism. In this context it’s interesting to return to the
> >> notion of attractors. It may be that the Megalomaniac AI is an
> >> attractor, in that once such a beast starts rolling, it’s tough to
> >> stop. But the question is, how likely is it that a superhuman AI will
> >> start out in the basin of attraction of this particular attractor? My
> >> intuition is that the basin of attraction of this attractor is not
> >> particularly large. Rather, I think that in order to make a
> >> Megalomaniac AI, one would probably need to explicitly program an AI
> >> with a lust for power. Then, quite likely, this lust for power would
> >> manage to persist through repeated self-modifications – “lust for
> >> power” being a robustly simple-yet-abstract principle. On the other
> >> hand, if one programs one’s initial AI with an initial state aimed at a
> >> different attractor meta-ethic, there probably isn’t much chance of
> >> convergence into the megalomaniacal condition.
> 1) Are you aware of how important this question is?
> 2) Are you aware of the consequences if you're wrong?
> 3) How well do you think you understand this problem?
Only moderately well.
And I have seen no evidence that you or anyone else understands it.
Please note that one theme I continually repeat is the need for research on
these things. My view is that there is a mathematical theory of such
phenomena out there, and my hope is that we can find it by experimenting
with simple AGIs and by using scientist/mathematician-assistant AGIs...
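To make the "size of a basin of attraction" idea a bit more concrete, here is
a toy sketch. The update rule and the two attractor states are purely
illustrative assumptions, not a model of any real AGI goal system; the point
is only that one can estimate how much of the initial-condition space flows
into each attractor by sampling starting points.

# Toy sketch: estimate the relative sizes of two basins of attraction.
# Assumptions (purely illustrative): a one-dimensional map with stable
# fixed points at -1 and +1 and an unstable fixed point at 0.

import random

def step(x):
    # One update of the toy dynamics; trajectories drift toward -1 or +1.
    return x + 0.1 * (x - x ** 3)

def attractor_of(x0, iters=200):
    # Iterate the map and report which attractor the trajectory settles on.
    x = x0
    for _ in range(iters):
        x = step(x)
    return +1 if x > 0 else -1

def basin_fraction(samples=10000, lo=-3.0, hi=3.0):
    # Monte Carlo estimate of the fraction of initial states whose
    # trajectories end up at the +1 attractor.
    hits = sum(attractor_of(random.uniform(lo, hi)) == +1
               for _ in range(samples))
    return hits / samples

if __name__ == "__main__":
    print("fraction of initial states converging to +1:", basin_fraction())

In the same spirit, the question about the Megalomaniac AI is what fraction of
plausible initial goal systems flow toward that attractor rather than toward
some other meta-ethic.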
-- Ben G