How do you know when to stop? (was Re: Why playing it safe is dangerous)

From: Philip Goetz (philgoetz@gmail.com)
Date: Sat Feb 25 2006 - 10:26:37 MST


On 2/25/06, Ben Goertzel <ben@goertzel.org> wrote:

> At some point my colleagues and I may need to try hard to solve that
> decision problem in a more rigid sense -- if I have a powerful AGI on
> hand and I have to decide whether to set a switch that will let it
> start self-modifying and hence potentially move toward hard takeoff.
> I am not at that point now....

This is a key problem with Friendly AI, though... You have to test
your programs to learn anything and to make progress towards AI. You
will have to build programs that learn in order to make that
progress. We may very well reach the point where we need to build
self-modifying programs to progress further towards AI long before
those programs are actually smart enough to be dangerous.

Computer scientists always think their programs are going to be much,
much, MUCH smarter than they end up being. If we stopped turning our
programs on whenever we thought they might be smart enough to be
dangerous, we would probably be stopping two decades too soon. So how
are we ever to progress?

- Phil
