From: Nick Tarleton (firstname.lastname@example.org)
Date: Mon Dec 01 2008 - 14:48:12 MST
On Mon, Dec 1, 2008 at 4:10 PM, Charles Hixson wrote:
> Stuart Armstrong wrote:
>>> Certainly it would be possible to design AIs with such goals. I think it
>>> would be rather silly to do so, however.
>> Killing humans is not the big risk - lethal indifference to humans is
>> the big risk.
> I think you've missed my point.
Even absent maximization, power + indifference is horribly dangerous.