From: Peter Voss (firstname.lastname@example.org)
Date: Wed Jul 13 2005 - 21:13:16 MDT
Something I've been meaning to comment on for a long time: citing paperclips
as a key danger facing us from AGI avoids the really difficult issue: what are
the realistic dangers - threats we can relate to and debate?
It also demotes the debate to the juvenile; not helpful if one wants to be
taken seriously.
I'd love to hear well-reasoned thoughts on what and whose motivation would
end up being a bigger or more likely danger to us.
For example, which poses the bigger risk: an AI with a mind of its own, or
one without?
What are specific risks that a run-of-the-mill AGI poses?
From: email@example.com [mailto:firstname.lastname@example.org] On Behalf Of Eliezer
Sent: Wednesday, July 13, 2005 6:29 PM
Subject: Re: Fighting UFAI
Tennessee Leeuwenburg wrote:
> What do people suppose the goals of a UFAI might be? Other than our
> destruction, of course. I'm assuming that UFAI isn't going to want our
> destruction just for its own sake, but consequentially, for other
> reasons.
I usually assume paperclips, for the sake of argument. More realistically, a
UFAI might want to tile the universe with tiny smiley faces (if, as Bill
Hibbard suggested, we were to use reinforcement learning on smiling humans),
or, most likely of all, circuitry that holds an ever-increasing representation
of the pleasure counter. It doesn't seem to make much of a difference.
--
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT