Re: Fighting UFAI

From: Tennessee Leeuwenburg (hamptonite@gmail.com)
Date: Wed Jul 13 2005 - 20:42:17 MDT


On 7/14/05, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> Tennessee Leeuwenburg wrote:
> > What do people suppose the goals of a UFAI might be? Other than our
> > destruction, of course. I'm assuming that UFAI isn't going to want our
> > destruction just for its own sake, but consequentially, for other
> > reasons.
>
> I usually assume paperclips, for the sake of argument. More realistically the
> UFAI might want to tile the universe with tiny smiley faces (if, as Bill
> Hibbard suggested, we were to use reinforcement learning on smiling humans) or
> most likely of all, circuitry that holds an ever-increasing representation of
> the pleasure counter. It doesn't seem to make much of a difference.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>

I suppose I'm unwilling to accept the paperclip position, at least to
some extent, for a variety of reasons.

Is a truly intelligent AI ever going to make the kind of monumental
slip-up required to decide to do something as blatantly dumb as
covering the universe in paperclips?

The paperclip scenario, I always thought, was a danger posed by a
second-rate intelligence - a kind of incredibly powerful child:
something given the tools to achieve its goals easily, but not the
rationality to reason those goals out.

Does it really make sense that something as intelligent as a
post-singularity AI would miss such an obvious point?

I know people have posed race conditions between FAI and paperclips,
but there seems to me to be a contradiction inherent in any AI that is
intelligent enough to achieve one of these worst-case outcomes yet is
still capable of making such stupid mistakes.

Does it make sense that something so intelligent could have such mindless goals?

I'm fairly willing to accept that UFAI might see a need for human
destruction in achieving its own goals, but I think that those are
likely to be interesting, complex goals, not simple mindless goals.

I'm also willing to accept the risk in principle posed by advanced
nanotech, or some kind of "subverted" power that destroys humanity,
but I'm both reluctant to tag such a thing as truly intelligent and
doubtful about its real possibility.

To some extent, there is a trade-off between efficiency and efficacy.
For example, the energy requirements might be too high to sustain
existence across the void of space. Just as lions in the Sahara starve
when there is no food, being powerful is not always a survival
advantage. I'm sure this point has come up before, but I don't know
that it's a given that evil nanotech is really a universal threat. It's
clearly a planet-wide threat, which is probably enough for the argument
anyway, given the lack of evidence of offworld life.

Cheers,
-T


