From: Brian Phillips (firstname.lastname@example.org)
Date: Mon Jul 08 2002 - 08:24:03 MDT
On Mon, 8 Jul 2002 07:41:29 -0500 Mike & Donna Deering <email@example.com> wrote:
<<Would it be possible to change the design slightly to
avoid volition, ego, self consciousness, while still maintaining the
capabilities of complex problem solving, self improvement, super
intelligence? Basically a tool level SAI, super intelligence under the
control of a human. I can't think of anyone, or any government, I would
trust with that kind of power. Super intelligence without super power
ethics is a real problem.
I'm not an AI guy, so I can't reply to the
question as such. I will argue that (right or
wrong) this is the form of AI that humanity's
leaders *think* it needs most.
The danger in TrAI is precisely that volition,
"ego", and self-consciousness.
From the viewpoint of a layman outsider, I would
argue that volition, self-consciousness, and the
capacity for ethics are by far the HARDEST
part of AI, and I question whether an AI's
ego-analogue would even be distinguishable to a
human observer (... these are Alien Beings, yes?)
without some seriously high-powered technical
interventions. Then again, this may not be a
problem if you feel that Singularity Survival
requires a (nearly) omnipotent philosopher-king.
A non-sentient AI (for lack of a better word)
seems like a natural progression on the way
to a sentient AI. If non-sentient AI is
dangerous in your view, then hurry up with the
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT