From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Wed Jul 13 2005 - 09:48:28 MDT
Mitchell Howe wrote:
> I am going to sound biased here, and perhaps I am, but the question of
> whether potentially unfriendly AI can be safely contained has been
> rather thoroughly shown to be "hell no."
You can't actually *close* an issue like that in advance of actual experience;
sometimes Nature surprises us despite our most clever arguments. (Skepticism
does go awry when it prevents us from guessing the guessable; we do have a
sufficient balance of advance expectation on the UFAI issue to accuse someone
of gross negligence if they fail to guess the guessable because "Nature
sometimes surprises us". Well, sometimes it doesn't. Thinking is not futile.)
The prerequisite for a Killthread is not whether the issue has been "settled",
but whether nothing *new* is being said on the subject.
> If you don't believe me, please try google searching any of the
> following phrases:
> "sl4 UFAI", "sl4 AI box", "sl4 AI Jail"
> This activity may not convince you, but you will at least see why this
> topic has already been declared a Dead Horse on the SL4 wiki page of the
> same name.
> So BLAM! already. This thread is dead.
The conversation was not an exact repetition of what had gone before.
Killthread override, at least temporarily; something new and interesting might
get said. It's a small probability, but in the end, it's the reason the list exists.
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:01:09 MDT