From: Ben Goertzel (firstname.lastname@example.org)
Date: Thu Jan 08 2004 - 19:48:23 MST
> It still doesn't make sense from a Friendliness point of view. You're
> dooming the inhabitants of 4 out of 5 ship-worlds to live with an
> unFriendly AI.
I agree it's not an optimal solution! However, I don't think that living
with an unFriendly AI is nearly as likely an outcome as being quickly
annihilated by one. I suspect that callous and selfish AIs will be much
more likely, statistically, than sadistic ones...
> Also, it seems to me that either you'll know how to architect the Seed
> AIs so that they all have a 99.99% chance of being Friendly, or you
> won't, and they'll have a 0.00000....03% chance of being Friendly. I
> admit that I have no evidence for this either way (who does?), but
> doesn't it seem reasonable to you that Friendliness doesn't ever come
> with 20% odds?
No, my intuition differs from yours here....
The trajectory of a complex self-modifying system will be subtle and not
easy to predict... I can't see why the trajectories of self-modifying AIs
would have the bimodal property you describe...
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT