RE: Goertzel's _PtS_

From: Ben Goertzel
Date: Wed May 02 2001 - 20:54:58 MDT

> > What I mean by "hard-wiring Friendliness" is placing Friendliness at
> > the top of the initial goal system and making the system express all
> > other goals as subgoals of this. Is this not what you propose? I
> > thought that's what you described to me in New York...
> Yes, that's what I described, but by that description *I'm* hard-wired
> Friendly, since this is one of the properties I strive for in my own
> declarative philosophical content.

No, you're just **deluded** ;>

You don't *really* have Friendliness as your ultimate supergoal ... you just
have a false self-model in which Friendliness is your ultimate supergoal!
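
Just to make the structure concrete, here is a toy Java sketch of what a
Friendliness-topped goal hierarchy could look like; the class and field names
are my own illustration, not Webmind's actual goal representation or your
design:

import java.util.ArrayList;
import java.util.List;

// Toy sketch of a Friendliness-topped goal hierarchy: one supergoal at the
// root, and every other goal is legitimate only as a descendant of it.
class Goal {
    final String name;
    final Goal parent;                         // null only for the supergoal
    final List<Goal> subgoals = new ArrayList<>();   // children justified by this goal

    Goal(String name, Goal parent) {
        this.name = name;
        this.parent = parent;
        if (parent != null) parent.subgoals.add(this);
    }

    // A goal counts as justified only if its parent chain reaches the supergoal.
    boolean servesSupergoal(Goal supergoal) {
        for (Goal g = this; g != null; g = g.parent) {
            if (g == supergoal) return true;
        }
        return false;
    }
}

public class FriendlinessToppedGoals {
    public static void main(String[] args) {
        Goal friendliness = new Goal("Friendliness", null);
        Goal learn = new Goal("Acquire knowledge", friendliness);
        Goal rewrite = new Goal("Rewrite own code", learn);
        System.out.println(rewrite.servesSupergoal(friendliness));   // prints true
    }
}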

> Depends on how good WM is. If WM is already very intelligent in
> Eurisko-like heuristic discovery and composition, and if it has enough
> computing power to handle the clustering and schema creation, feeding in
> the low-level description might be enough for WM to create an effective
> perceptual understanding of the higher-level features by examining typical
> human-written code. If WM has a strong understanding of purpose and a
> strong pre-existing understanding of vis modules' functionality (WM gets a
> "ve", by this point), then you could, conceivably, just feed in the Java
> supercompiler description and watch the thing blaze straight through a
> hard takeoff. Low-probability outcome, but very real.

I think that

        Java supercompiler description
                + computer science background (e.g. useful CS theorems encoded in Mizar)
                        + WM reasoning system

is a framework that will allow WM to gradually learn to rewrite code to be
more intelligent, starting with simple code and eventually working up to its
own source code.
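
Concretely, the loop I have in mind looks something like this toy Java
sketch, where the rewrite, equivalence-check, and scoring functions are
placeholders standing in for the supercompiler, the encoded CS theorems, and
WM's own judgment:

import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Toy sketch of the learning loop: propose rewrites (the supercompiler's
// role), check them against background knowledge (the Mizar theorems' role),
// and keep the ones the reasoning system judges to be improvements.
public class RewriteLoop {
    static String improve(String program,
                          List<Function<String, String>> rewrites,
                          Predicate<String> provablyEquivalent,
                          Function<String, Double> score) {
        String best = program;
        boolean improved = true;
        while (improved) {                     // greedy hill-climbing
            improved = false;
            for (Function<String, String> rewrite : rewrites) {
                String candidate = rewrite.apply(best);
                if (provablyEquivalent.test(candidate)
                        && score.apply(candidate) > score.apply(best)) {
                    best = candidate;
                    improved = true;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Stand-ins: "shorter is better", and one rewrite that squeezes whitespace.
        String result = improve("x  =  x  +  1 ;",
                List.of(s -> s.replaceAll("\\s+", " ")),
                s -> true,
                s -> -(double) s.length());
        System.out.println(result);            // prints "x = x + 1 ;"
    }
}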

> The Friendliness-topped goal system, the causal goal system, the
> probabilistic supergoals, and the controlled ascent feature are the main
> things I'd want Webmind to add before the 1.0 version of the AI Engine.

Causal goal system is basically there; we never finished our detailed
conversation on that.

Probabilistic supergoals are definitely there.

Controlled ascent is absent so far; I have no objection to it, but it's just
not time to worry about it yet.
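
When it is time, a controlled-ascent mechanism could be as simple as this
toy Java sketch: a budget of autonomous self-modifications per review window,
with anything over budget escalated to a human. The class name and thresholds
are purely illustrative.

// Toy sketch of controlled ascent: a budget on autonomous self-modifications
// per review window; anything past the budget waits for human sign-off.
public class ControlledAscent {
    private final int modsPerWindow;
    private int modsThisWindow = 0;

    ControlledAscent(int modsPerWindow) {
        this.modsPerWindow = modsPerWindow;
    }

    // True means the modification may proceed without a human in the loop.
    synchronized boolean requestSelfModification() {
        if (modsThisWindow < modsPerWindow) {
            modsThisWindow++;
            return true;
        }
        return false;                          // over budget: escalate to human review
    }

    synchronized void startNewReviewWindow() {
        modsThisWindow = 0;
    }

    public static void main(String[] args) {
        ControlledAscent throttle = new ControlledAscent(2);
        System.out.println(throttle.requestSelfModification());   // true
        System.out.println(throttle.requestSelfModification());   // true
        System.out.println(throttle.requestSelfModification());   // false
    }
}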

Friendliness-topped goal system is a delusion ;>

