Re: Building a friendly AI from a "just do what I tell you" AI

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Sun Nov 18 2007 - 19:56:20 MST


On 19/11/2007, Thomas McCabe <pphysics141@gmail.com> wrote:

> How does it *know*, ahead of time, to explain it to you, rather than
> just doing it? This kind of thing is what requires FAI engineering in
> the first place. If you program it to tell you what it will do in
> order to figure out the problem, it will turn the planet into
> computronium to figure out how it will turn the planet into
> computronium.

You just explained it to me; are you suggesting that if you were
suddenly a lot smarter you *wouldn't* be able to explain it to me?

> And so on, and so forth; the problem is that the vast
> majority of minds will see more computronium as a good thing, and will
> therefore seek to convert the entire planet into computronium. It
> doesn't even really matter what the specific goals of the AGI are,
> because computronium is useful for just about anything. To quote CEV:

If the goal is just the disinterested solution of an intellectual
problem, it won't do that. Imagine a scientist given some data and
asked to come up with a theory to explain it. Do you assume that,
being really smart, he will spend his time not directly working on the
problem, but lobbying politicians etc. in order to increase funding for
further experiments, on the grounds that that is in the long run more
likely to yield results? And if a human can understand the intended
meaning behind "just work on the problem", why wouldn't a
superintelligent AI be able to do the same?

-- 
Stathis Papaioannou
