From: Ben Goertzel (firstname.lastname@example.org)
Date: Sun Feb 25 2001 - 06:22:59 MST
> There are no military applications of superintelligence.
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
Eliezer, is this cryptic utterance intended as
a) a statement summarizing your knowledge about current research efforts in
[in which case it applies to the present but not necessarily the future]
b) a philosophical statement, roughly to the effect that Unfriendly AI can't
[in which case you'd be wrong; I doubt this is what you mean]
c) a statement that once a system becomes REALLY SUPERINTELLIGENT it will no
longer have any motivation to serve any particular country above any other one...
[My guess as to your intended meaning]
Let's assume your meaning is c). Then I have a question for you: why couldn't
the CIA create a self-modifying AI whose supergoal was "Serve the USA," i.e.,
"Be Friendly to the USA"?
You posit that the supergoal "Be Friendly to Humans" can remain a fixed point
throughout the successive reason-driven self-modification events that will
constitute the path from initial AI to superintelligent AI.
But I'm not seeing why the supergoal "Serve the USA" couldn't serve as an
equally adequate supergoal, from the perspective of the development of
intelligence. Things like "learn" and "gather information" and "survive" are
subgoals of "Serve the USA" just as they are of "Be Friendly to Humans."
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT