RE: Fighting UFAI

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jul 13 2005 - 21:27:58 MDT


> Something I've been meaning to comment on for a long time: Citing
> paperclips as a key danger facing us from AGI avoids the really difficult
> issue: what are realistic dangers - threats we can relate to and debate?
>
> It also demotes the debate to the juvenile; not helpful if one wants to be
> taken seriously.
>
> I'd love to hear well-reasoned thoughts on what and whose motivation would
> end up being a bigger or more likely danger to us.
>
> For example, what poses the bigger risk: an AI with a mind of its own, or
> one that doesn't?
>
> What are specific risks that a run-of-the-mill AGI poses?
>
> Peter

Hmmm...

As a more realistic alternative to paperclips, consider the possibility of a
superhuman AI that holds "advancing science, mathematics and technology" as
its ultimate goal. Such an AI might well want to pulverize humans so as to
use their mass-energy as computing material that will lead it to greater
discoveries and inventions.

Or, consider an AI that wants to "make humans happy" (probably among other
goals), but proceeds to work toward this goal by transforming humans into
fundamentally nonhuman minds that are, however, happier. (This is basically
the idea of Jack Williamson's classic novel "The Humanoids", which everyone
on this list presumably knows.)

Or an AI that wants to "keep humanity as it is, unless it wants to change by
its free choice" and thus institutes a kind of fascism, preventing humans
from becoming transhuman, because it interprets "free choice" a bit
differently from (future) humans ...

We don't need to posit evil AIs or AIs with absurd goals like tiling the
universe with smiley-faces to see that superhuman AIs pose a lot of
potential dangers to the future of humanity.

These sorts of dangers don't arise if one has a "run-of-the-mill AGI"; they
arise if one has an AGI that has drastically superhuman powers. The reason
these dangers seem worth discussing is that many of us on this list believe
that

a) a human-level AGI is plausible during the next few decades (maybe much
sooner)

b) once human-level AGI is reached, superhuman AGI is not far off due to
recursive self-modification

-- Ben


