RE: Goertzel's _PtS_

From: Ben Goertzel (ben@webmind.com)
Date: Mon May 21 2001 - 20:23:58 MDT


> > I think that
> >
> > Java supercompiler description
> > + computer science background (e.g. useful CS theorems encoded in Mizar)
> > + WM reasoning system
> >
> > is a framework that will allow WM to gradually learn to rewrite code to be
> > more intelligent, starting with simple code and eventually working up to
> > itself
>
> If that's really true, then Webmind is within a couple of years of doing a
> hard takeoff.

Well, first of all, if we don't get some funding in the next few months,
progress on Webmind will experience an even greater slowdown than it already
has, due to all of us needing to get other jobs.

Second, my intuition differs from yours. I think it is going to take
several years to get from

Point A) Webmind starts to analyze Java code

to

Point B) Webmind can meaningfully and usefully improve its own Java code

And note, please, that I'm a well-known overoptimist ;>

> You should have had an FAI-1 system right from the
> beginning, will need an FAI-2 system for 1.0 of the AI Engine, and will
> need a complete, structurally conforming system *before* you get Webmind
> to read a single piece of Java code, since in your description there's at
> least the theoretical possibility that Webmind can do a hard takeoff
> directly from there.

Yes, there's the theoretical possibility, I agree with that.

However, we still don't agree on the architecture of a Friendly goal
system...

> It so happens that I think the bar is higher and that considerably more
> general intelligence is needed first. I'm just saying that, if I believed
> what you believe about Webmind, I would be, literally, living in fear.
> Genuine, adrenaline-edged fear. I'm nervous now because you know more
> about Webmind than I do.

Heck, I know more about it than I did four months ago ;>

> Yes, we need to finish our conversation there. If you're interested in
> trying to carry it on SL4:
>
> A --> B (direct cause)
> and A --> C (direct cause)
> implies observe B ==> observe C (prediction)
> but not do B ~~> get C (manipulation)
>
> As far as I can tell, the current Webmind doesn't distinguish between
> indirect linkages that are useful for prediction and direct linkages that
> are useful for manipulation. If so, it seems to me that Webmind would not
> use different representations for "B --> C, B ~~> C" and "A --> B, A -->
> C, B ==> C".

You don't understand how we represent these things in Webmind, and I don't
have time to explain tonight. Hopefully I will find time to do so later
this week. Begging for money in various ways becomes awfully
time-consuming!!
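
That said, I do follow the distinction you're drawing. Here is a toy sketch in
Python -- purely illustrative, hypothetical variables, nothing to do with how
Webmind actually represents links -- of why observing B is informative about C
while forcing B does nothing to C, when both hang off a common cause A:

import random

random.seed(0)
N = 100000

def sample(do_b=None):
    # A is the common cause; it directly drives both B and C (A --> B, A --> C).
    a = random.random() < 0.5
    b = (random.random() < (0.9 if a else 0.1)) if do_b is None else do_b
    c = random.random() < (0.9 if a else 0.1)
    return a, b, c

# Prediction: among observed samples, seeing B=1 makes C=1 more likely (B ==> C).
obs = [sample() for _ in range(N)]
p_c_given_b = sum(c for _, b, c in obs if b) / sum(b for _, b, _ in obs)

# Manipulation: forcing B=1 leaves P(C=1) where it was (do B ~~> get C fails).
intervened = [sample(do_b=True) for _ in range(N)]
p_c_given_do_b = sum(c for _, _, c in intervened) / N

print(p_c_given_b)     # comes out near 0.82 -- observing B tells you about A, hence C
print(p_c_given_do_b)  # comes out near 0.50 -- setting B by hand tells you nothing about C

How Webmind's link types carve this up is the part I'll have to explain later
in the week.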

> Uh-huh. So, Ben, did you make your airline flight from the Foresight
> Gathering?

Indeed

>
> Forget about whether you desperately need it yet. Do it because you can.
> Say to yourself: "Hey, I can build in this Friendliness feature!"

A surplus of programmer time is not my problem at the moment

> > Friendliness-topped goal system is a delusion ;>
>
> Are you going to tell me that a belief "I'd give myself a
> Friendliness-topped goal system if I had the chance" is also delusion?
>

You'd regret it afterwards ;p

ben


