From: Bryan Bishop (email@example.com)
Date: Wed Feb 20 2008 - 21:54:49 MST
On Wednesday 20 February 2008, Daniel Burfoot wrote:
> The scenario I'm most afraid of is not a hard take-off leading to
> unfriendly AGI, but a pseudo-AI falling into the hands of evil men.
Your plans should not be based on hoping that certain people do not
do certain things ... but instead on engineering the required
redundancy and certainty into the system itself. So hoping
that 'evil' men never get hold of it is not a good idea. Maybe you
should start a project so that whatever you dislike about such a
scenario has much less of an impact than you currently assume?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT