From: Mike Dougherty (email@example.com)
Date: Thu Oct 23 2008 - 20:22:48 MDT
On Thu, Oct 23, 2008 at 7:52 PM, Matt Mahoney <firstname.lastname@example.org> wrote:
> --- On Thu, 10/23/08, Toby Weston <email@example.com> wrote:
>> Just in case we do, deep down, want to kill all humans.
>> Perhaps we should add a hardcoded caveat to the friendliness
>> function, that puts all baseline, pre-posthuman, homo sapiens
>> off limits to the AGI god's meddling. Let the Amish live
>> whatever happens.
> Wouldn't it be easier (or at least, have a higher probability of getting the expected result) if we just ban AI?
To clarify - is the "expected result" to kill all humans or not? I
thought we wanted AI to be smart enough to protect us from other
eventual AI as well as the myriad non-AI ways humanity could wipe
itself out.
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:37 MDT