From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Mon May 21 2001 - 19:53:28 MDT
Ben Goertzel wrote:
> > Depends on how good WM is. If WM is already very intelligent in
> > Eurisko-like heuristic discovery and composition, and if it has enough
> > computing power to handle the clustering and schema creation, feeding in
> > the low-level description might be enough for WM to create an effective
> > perceptual understanding of the higher-level features by examining typical
> > human-written code. If WM has a strong understanding of purpose and a
> > strong pre-existing understanding of vis modules' functionality (WM gets a
> > "ve", by this point), then you could, conceivably, just feed in the Java
> > supercompiler description and watch the thing blaze straight through a
> > hard takeoff. Low-probability outcome, but very real.
> I think that
> Java supercompiler description
> + computer science background (e.g. useful CS theorems encoded in Mizar)
> + WM reasoning system
> is a framework that will allow WM to gradually learn to rewrite code to be
> more intelligent, starting with simple code and eventually working up to
> [...]
If that's really true, then Webmind is within a couple of years of doing a
hard takeoff. You should have had an FAI-1 system right from the
beginning, you will need an FAI-2 system for 1.0 of the AI Engine, and you
will need a complete, structurally conforming system *before* you get
Webmind to read a single piece of Java code, since by your description
there's at least a theoretical possibility that Webmind could do a hard
takeoff directly from there.
It so happens that I think the bar is higher and that considerably more
general intelligence is needed first. I'm just saying that, if I believed
what you believe about Webmind, I would be, literally, living in fear.
Genuine, adrenaline-edged fear. I'm nervous now because you know more
about Webmind than I do.
> > The Friendliness-topped goal system, the causal goal system, the
> > probabilistic supergoals, and the controlled ascent feature are the main
> > things I'd want Webmind to add before the 1.0 version of the AI Engine.
> Causal goal system is basically there, we never finished our detailed
> conversation on that.
Yes, we need to finish our conversation there. If you're interested in
trying to carry it on SL4:
A --> B (direct cause)
and A --> C (direct cause)
implies observe B ==> observe C (prediction)
but not do B ~~> get C (manipulation)
As far as I can tell, the current Webmind doesn't distinguish between
indirect linkages that are useful only for prediction and direct linkages
that are also useful for manipulation. If so, it seems to me that Webmind
would use the same representation for "B --> C, B ~~> C" as for "A --> B,
A --> C, B ==> C".
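To make the distinction concrete, here's a toy simulation (mine, nothing to do with Webmind's actual representations) of the common-cause structure above: A directly causes both B and C. Conditioning on an *observed* B predicts C almost perfectly, while *forcing* B by intervention tells you nothing about C:

```python
import random

random.seed(0)

def sample(do_b=None):
    """One draw from the toy causal model A --> B, A --> C.
    If do_b is set, B is forced by intervention, severing the A --> B link."""
    a = random.random() < 0.5          # hidden common cause
    b = a if do_b is None else do_b    # B is caused by A unless we intervene
    c = a                              # C is caused by A only
    return b, c

N = 100_000

# Prediction: P(C | observe B) -- condition on B in observational draws
obs = [sample() for _ in range(N)]
p_c_given_b = (sum(1 for b, c in obs if b and c)
               / sum(1 for b, c in obs if b))

# Manipulation: P(C | do(B)) -- force B, which reveals nothing about A
interv = [sample(do_b=True) for _ in range(N)]
p_c_do_b = sum(1 for b, c in interv if c) / N

print(f"P(C | B observed) ~ {p_c_given_b:.2f}")   # near 1: B reveals A
print(f"P(C | do(B))      ~ {p_c_do_b:.2f}")      # near 0.5: forcing B is useless
```

A system that stores only one kind of B-C linkage can't represent the gap between those two numbers.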
> Probabilistic supergoals are definitely there.
Well, it has to be the *right* kind of probabilism - there are four
subpatterns to the design pattern of external reference semantics. I
mean, what kind of information is used to adjust the probabilities?
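By way of illustration only (this is my sketch of what "the right kind of probabilism" could look like, not a description of Webmind's goal system): under external reference semantics, the probability attached to a supergoal is a hypothesis that the current goal content matches what it's supposed to refer to, and it gets adjusted by external evidence, such as programmer affirmations or corrections:

```python
# Toy sketch: a supergoal's strength is P(this goal content is correct),
# updated by Bayes on external feedback rather than hardwired.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update of P(goal content correct) on a piece of
    external evidence, e.g. a programmer affirming the behavior."""
    num = prior * likelihood_if_true
    return num / (num + (1 - prior) * likelihood_if_false)

p = 0.7                    # initial confidence in the goal content
p = update(p, 0.9, 0.2)    # programmers affirm the behavior
p = update(p, 0.9, 0.2)    # affirmed again; confidence rises toward 1
print(f"P(content correct) = {p:.3f}")
```

The point of the question is which inputs play the role of that feedback, and whether the system treats them as evidence about an external referent or just as reward.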
> Controlled ascent is absent so far; I have no objection to it, but it's just
> not time to worry about it yet.
Uh-huh. So, Ben, did you make your airline flight from the Foresight [...]
Forget about whether you desperately need it yet. Do it because you can.
Say to yourself: "Hey, I can build in this Friendliness feature!"
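For what it's worth, the feature I have in mind is cheap to sketch. Assuming (my reading, not Ben's) that controlled ascent means "don't commit a large self-improvement jump without human sign-off," the core check is a few lines:

```python
# Toy sketch of a controlled-ascent check (my illustration, under the
# assumption that the feature means: pause self-improvement when capability
# jumps too fast, pending human approval).

ASCENT_THRESHOLD = 1.5   # max allowed capability ratio per improvement cycle

def controlled_ascent_step(capability, improve, approved_by_human):
    """Run one self-improvement cycle, but refuse to commit a jump larger
    than ASCENT_THRESHOLD unless a human has approved it."""
    candidate = improve(capability)
    if candidate / capability > ASCENT_THRESHOLD and not approved_by_human():
        return capability            # hold at current level; ascent paused
    return candidate

# A step that would triple capability gets held back without approval:
cap = controlled_ascent_step(10.0, lambda c: c * 3, lambda: False)
print(cap)   # 10.0 -- the jump was blocked pending sign-off
```

The hard part is the real capability metric, not the brake; which is exactly why the brake can go in now.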
> Friendliness-topped goal system is a delusion ;>
Are you going to tell me that a belief "I'd give myself a
Friendliness-topped goal system if I had the chance" is also delusion?
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence