RE: Controlled ascent (was: Military Friendly AI)

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 29 2002 - 07:45:51 MDT


hi,

> As you know, I think the threshold for hard takeoff is higher than Ben
> does, and that the approach embodied in the Novamente manuscript I read
> will never get there. So what? I could be wrong on both counts. Any
> system with Turing-complete patterns optimizing themselves (that
> includes Eurisko) should have a controlled ascent mechanism.

By that logic, Eliezer, *every human brain* should have a controlled ascent
mechanism ;)

> >> 2) a mechanism for detecting a rapid rate of intelligence increase
>
> You can't have Turing-complete patterns optimizing themselves without a
> definition of optimization.

Well, yes, you can.

For most of human history, we had "Turing-complete patterns optimizing
themselves" in our brains, and we had no definition of optimization at all...

> Whatever criterion is being used to separate
> good patterns from bad patterns, use that criterion as the metric of
> intelligence.

Novamente doesn't work quite that way... but this would get into a hard
technical discussion...
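
For what it's worth, here is a rough sketch (in Python, with hypothetical
names -- this is not Novamente code) of the kind of "rapid rate of
intelligence increase" detector being described, assuming you do have some
scalar pattern-quality score to use as the metric:

    import time

    class ControlledAscentMonitor:
        """Pauses self-modification if a proxy 'intelligence' metric rises
        faster than a configured rate.  The metric is just whatever scalar
        criterion the system already uses to score patterns."""

        def __init__(self, max_fractional_rate_per_hour=0.05):
            self.max_rate = max_fractional_rate_per_hour
            self.last_score = None
            self.last_time = None

        def check(self, current_score):
            """Return True if self-modification may continue, False to pause."""
            now = time.time()
            if self.last_score is None:
                self.last_score, self.last_time = current_score, now
                return True
            hours = max((now - self.last_time) / 3600.0, 1e-9)
            rate = ((current_score - self.last_score)
                    / (abs(self.last_score) + 1e-9) / hours)
            self.last_score, self.last_time = current_score, now
            return rate <= self.max_rate

    # Usage: the optimization loop asks the monitor before each round of
    # self-modification, feeding it its own pattern-quality score.
    monitor = ControlledAscentMonitor(max_fractional_rate_per_hour=0.05)
    if not monitor.check(current_score=1.23):
        print("Ascent rate over threshold; pausing self-modification.")

Of course, this just pushes the whole problem into the choice of metric,
which is exactly the part where Novamente doesn't work quite that way.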

-- ben g


