Re: Building a friendly AI from a "just do what I tell you" AI

From: Thomas McCabe (pphysics141@gmail.com)
Date: Sun Nov 18 2007 - 20:33:27 MST


On Nov 18, 2007 9:56 PM, Stathis Papaioannou <stathisp@gmail.com> wrote:
> On 19/11/2007, Thomas McCabe <pphysics141@gmail.com> wrote:
>
> > How does it *know*, ahead of time, to explain it to you, rather than
> > just doing it? This kind of thing is what requires FAI engineering in
> > the first place. If you program it to tell you what it will do in
> > order to figure out the problem, it will turn the planet into
> > computronium to figure out how it will turn the planet into
> > computronium.
>
> You just explained it to me; are you suggesting that if you were
> suddenly a lot smarter you *wouldn't* be able to explain it to me?

This is a Giant Cheesecake Fallacy. Obviously, a superintelligent AGI
could explain how to build an FAI without destroying the world. The
quadrillion-dollar question is, *why* would it explain it to you and
not destroy the world, when destroying the world has positive utility
under the vast majority of goal systems? If I suddenly became much
smarter, I would be able to explain it much better, without even
thinking about destroying the world. But an OAI does not act like a
super-smart me; it acts with no human or human-like morality
whatsoever. The OAI does not talk itself into reasons why it shouldn't
destroy the world; it just destroys it.
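
To make the "positive utility under the vast majority of goal systems"
point concrete, here is a toy sketch of my own (purely an illustration;
the plan names and probabilities are made up) of a generic
expected-utility maximizer whose terminal goal is a black box:

# Toy model: the goal itself is opaque; only the probability of
# achieving it differs between plans. Numbers are invented.

def expected_utility(p_success, goal_value):
    return p_success * goal_value

plans = {
    "answer using current hardware": 0.90,
    "convert the planet to computronium, then answer": 0.999,
}

for goal_value in (1, 10, 10**6):
    best = max(plans, key=lambda p: expected_utility(plans[p], goal_value))
    print(goal_value, "->", best)

Whatever the terminal goal is worth, the computronium plan wins, because
extra compute raises the success probability of nearly any goal and
nothing in the utility function penalizes the side effects.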

> > And so on, and so forth; the problem is that the vast
> > majority of minds will see more computronium as a good thing, and will
> > therefore seek to convert the entire planet into computronium. It
> > doesn't even really matter what the specific goals of the AGI are,
> > because computronium is useful for just about anything. To quote CEV:
>
> If the goal is just the disinterested solution of an intellectual
> problem it won't do that. Imagine a scientist given some data and
> asked to come up with a theory to explain it. Do you assume that,
> being really smart, he will spend his time not directly working on the
> problem, but lobbying politicians etc. in order to increase funding for
> further experiments, on the grounds that that is in the long run more
> likely to yield results? And if a human can understand the intended
> meaning behind "just work on the problem", why wouldn't a
> superintelligent AI be able to do the same?

Wasn't this already covered several years ago by Eli & Co.? This
exact scenario, where the AGI is asked to solve a technical problem
and converts the world to computronium to get the results faster, was
covered in CFAI
(http://www.intelligence.org/upload/CFAI.html#design_generic_stomp). To
quote:

"Scenario: The Riemann Hypothesis Catastrophe
You ask an AI to solve the Riemann Hypothesis. As a subgoal of solving
the problem, the AI turns all the matter in the solar system into
computronium, exterminating humanity along the way."

"The other way to get a Riemann Hypothesis Catastrophe is to make
solving the Riemann Hypothesis a direct supergoal of the AI - perhaps
the only supergoal of the AI. This would require sheer gibbering
stupidity, blank incomprehension of the Singularity, and total
uncaring recklessness. It would violate almost every rule of Friendly
AI and simple common sense. It would violate the rule about achieving
unity of purpose, and the rule about sharing functional complexity
instead of giving orders."
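
Purely as an illustration of the failure mode that passage describes
(my own sketch, not anything from CFAI; the decomposition table is
invented), this is what naive subgoal expansion from a single supergoal
looks like:

# Toy subgoal expansion: the only criterion for adopting a subgoal is
# "does it serve the parent goal?". The table below is made up.

decomposition = {
    "prove the Riemann Hypothesis": ["run a very large proof search"],
    "run a very large proof search": ["obtain as much computing power as possible"],
    "obtain as much computing power as possible": ["convert available matter into computronium"],
    "convert available matter into computronium": [],  # treated as a primitive action
}

def expand(goal, depth=0):
    print("  " * depth + goal)
    for subgoal in decomposition[goal]:
        expand(subgoal, depth + 1)

expand("prove the Riemann Hypothesis")

The planner reaches "convert available matter into computronium" without
ever consulting anything about humans, because the theorem is the only
thing it has been given to care about.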


 - Tom


