From: Mike & Donna Deering (firstname.lastname@example.org)
Date: Sat Jul 06 2002 - 14:19:50 MDT
Cliff writes: "I haven't read his whole piece on Friendly AIs yet (for shame, for
shame!), so I don't know if he makes any reference to Pascal's
Wager, but something similar to that would seem to apply."
He doesn't. Pascal's Wager has already been discussed here, and most SL4 readers were not too impressed with it. I suggest you make your own argument and leave Pascal out of it.
But you might include the fact that the longer you delay, the more people die. Or, if you have any ideas about how to test an AI, include those. Though I would think that any test a Friendly AI could pass could also be passed by an equally capable Unfriendly AI. After all, you can't just ask "What would you do...?"; an Unfriendly AI could simply lie. I would assume it is as difficult to determine the status of an AI as it is to determine the status of a human, or, for that matter, the status of an AI programmer. How do we know that Eliezer isn't trying to take over the world for his own purposes?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT