Re: Definition of strong recursive self-improvement

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sun Jan 02 2005 - 12:41:18 MST


On Sun, 02 Jan 2005 11:38:54 -0600, Eliezer S. Yudkowsky
<sentience@pobox.com> wrote:
> Thank you for your helpful explanation; go forth and implement it in an
> AI code-writing system and put all the programmers out of business.

On my to-do list ^.^

> I do not intend to achieve (it is theoretically impossible to achieve, I
> think) a 1.00000... expected probability of success. However, such
> outside causes as you name are not *independent* causes of failure among
> all the elements of a complex system.

Nor am I claiming otherwise.

> But if I knew how to build an FAI that worked so long as no one tossed
> its CPU into a bowl of ice cream, I would count myself as having made
> major progress.

Yes, I think it's safe to say that would qualify as progress, all right.

Do you still believe in the "hard takeoff in a basement" scenario, though?

> Meanwhile, saying that humans use "semi-formal reasoning" to write code
> is not, I'm afraid, a Technical Explanation.

No, really? I'm shocked :) (Good article that, btw.)

If either of us were at the point of being able to provide a Technical
Explanation for this stuff, this conversation would be taking a very
different form. (For one thing, the side that had it could probably
let their AI do a lot of the debating for them!) But my semi-technical
explanation does answer the question you asked, which is how _in
principle_ it can be possible for human programmers to ever write
working code; and it therefore suffices to answer your objection that
if I were right about the problems, there could be no such thing even
in principle.

> Imagine someone who knew
> naught of Bayes, pointing to probabilistic reasoning and saying it was
> all "guessing" and therefore would inevitably fail at one point or
> another. In that vague and verbal model you could not express the
> notion of a more reliable, better-discriminating probabilistic guesser,
> powered by Bayesian principles and a better implementation, that could
> achieve a calibrated probability of 0.0001% for the failure of an entire
> system over, say, ten millennia.

This is all very well, but how do you go about applying Bayes to a
situation where the number of possible hypotheses greatly exceeds the
number of atoms in the visible universe?

How do you get a calibrated probability of failure, or even calculate
P(E|H) for a few H's, in a situation where calculating P(E|H) for one
H would take computer time measured in teraexaflop-eons, and plenty of
them?

(These are not rhetorical questions. I'm asking them because answers
would be of great practical value.)
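(To make the scale problem concrete, here's a toy sketch of what an
exact Bayesian update demands: enumerate every hypothesis, evaluate
P(E|H) for each, and normalise. The hypotheses and numbers below are
invented purely for illustration; the point is that the loop over
hypotheses and the per-hypothesis likelihood call are exactly the two
things that blow up in the situations I'm describing.)

# Toy sketch only: exact Bayesian updating over an explicitly
# enumerated hypothesis space, with made-up hypotheses and numbers.

def bayes_update(priors, likelihood, evidence):
    """priors: dict of hypothesis -> P(H)
    likelihood: function (evidence, hypothesis) -> P(E|H)
    returns: dict of hypothesis -> P(H|E)"""
    # One likelihood evaluation per hypothesis -- the step that costs
    # teraexaflop-eons when each H describes a complex system.
    joint = {h: p * likelihood(evidence, h) for h, p in priors.items()}
    total = sum(joint.values())  # P(E): a sum over *every* hypothesis
    return {h: j / total for h, j in joint.items()}

# Three hypotheses about a coin's bias: trivially enumerable, unlike a
# hypothesis space larger than the number of atoms in the universe.
priors = {"fair": 0.5, "heads-biased": 0.25, "tails-biased": 0.25}
bias = {"fair": 0.5, "heads-biased": 0.9, "tails-biased": 0.1}

def likelihood(evidence, h):
    p = 1.0
    for flip in evidence:  # evidence is a string of 'H'/'T' flips
        p *= bias[h] if flip == "H" else 1.0 - bias[h]
    return p

print(bayes_update(priors, likelihood, "HHTHH"))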

> (For I do now regard FAI as an interim
> measure, to be replaced by some other System when humans have grown up a
> little.)

So you want to take humans out of the loop for a while, then put them
back in after a few millennia? (Whereas I'm inclined to think humans
will need to stay in the loop all the way along.)

- Russell


