From: Gordon Worley (firstname.lastname@example.org)
Date: Sun Jun 17 2001 - 21:04:45 MDT
At 1:42 PM -0400 6/17/01, email@example.com wrote:
>Point of clarification: I am not (now) interested in how to contain an
>*unfriendly* AI, just in how to contain a *potentially* unfriendly one.
>While containing unfriendly AI may be useful in defense against AI
>terrorists, I'm much more interested in containing developmental prototypes.
This is the whole reason for designing Friendliness in from the
ground up. If it's always Friendly, then the alpha seeds will pose a
real but *hopefully* containable threat. BTW, if you get this far,
please don't run any alpha seeds; that could be very dangerous. Wait
until you're pretty sure it's stable before booting it up (and even
then there will probably be bugs, i.e. FoF).
Some kind of effective containment is necessary while a Friendly AI
is still young, but eventually it should become unnecessary.
-- Gordon Worley http://www.rbisland.cx/ mailto:firstname.lastname@example.org PGP Fingerprint: C462 FA84 B811 3501 9010 20D2 6EF3 77F7 BBD3 B003
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT