From: Ben Goertzel (firstname.lastname@example.org)
Date: Tue Sep 20 2005 - 06:59:49 MDT
Thanks for the very clear overview of your position, which generally
seems reasonable to me.
In fact your approach does not seem all that different from my own,
although our AI approaches are different.
My main difference with Eli in terms of the philosophy of AGI
development seems to be that I have little faith in the power of
pure theory to tell us how to make a safe AI; and I tend to believe that
experimentation with "AI children" in simulated environments is going
to teach us a lot that will be useful in figuring out how to make a
safe AI.
However, since you (like me) are actually working on an AI system now
in parallel with your theory efforts, it seems your attitude in practice
is a little bit closer to my own.
It's not just that I "want to get to the fun part of making an AGI"
like you suggest, it's that I'm skeptical that the pure-theory approach
will work for resolving these issues, and I think there is more to be
learned via a combination of theory and experiment.
> 5. Any system compatible with the known approaches to strong verification
> of Friendliness will need to be consistently rational, which is to say
> Bayesian from the ground up and have the structural property of being
> 'causally clean', although not necessarily driven by expected utility.
> When I first accepted these constraints, they seemed onerous to the point
> of making a tractable architecture impossible; all the 'powerful'
> techniques I knew of (improved GAs, stochastic codelets, dynamic-topology
> NNs, agent systems etc) were thoroughly probabilistic* and hence difficult
> to use or completely unusable. But after a period of research I now
> believe that there are acceptable and even superior replacements for all
> of these that are compatible with strong verification of Friendliness.
> I'm not going to defend that as anything more than a personal opinion at
> this time.
Well, your penultimate sentence is a big claim indeed.
I will add this though: I know how to make a Novamente system "causally
clean" at the cost of dramatically increasing its memory and speed
requirements. Basically, you just need to make it keep a complete
record of every inference it does internally on a cognitive level, so it
can be run as (to oversimplify) a reversible inference engine coupled with
a nonreversible evolutionary hypothesis generator. Then it can apply its
own reasoning capability to reason about the causes and consequences
of all its actions, and can maintain a causally clean goal system.
However, running a Novamente like this in the near term would make
testing almost impossible as it would vastly increase the amount of
RAM and the number of computers required.
In this regard, the question from a Novamente perspective is how close
you could come to causal cleanliness via retaining and studying only a
limited portion of inferential history. But that is not what we are
focusing on now, because we are focusing on getting the system to be at
all intelligent in the first place.
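To make the bookkeeping above concrete: a minimal sketch (not Novamente code; all names are invented for illustration) of an inference engine that keeps a complete record of every inference it performs, so that the full causal ancestry of any conclusion can be reconstructed and reasoned about later:

```python
# Illustrative sketch only -- a toy engine that logs the provenance of
# every derived fact, i.e. the "complete record of every inference"
# discussed above. All class and method names are hypothetical.

class InferenceRecord:
    """One logged inference step: which rule fired on which premises."""
    def __init__(self, rule, premises, conclusion):
        self.rule = rule
        self.premises = premises      # facts this step depended on
        self.conclusion = conclusion

class ProvenanceEngine:
    def __init__(self, axioms):
        self.facts = set(axioms)
        self.history = {}             # conclusion -> InferenceRecord

    def infer(self, rule_name, premises, conclusion):
        """Apply an externally supplied rule and log the step."""
        if not all(p in self.facts for p in premises):
            raise ValueError("premise not established: %r" % (premises,))
        self.facts.add(conclusion)
        self.history[conclusion] = InferenceRecord(rule_name, premises,
                                                   conclusion)
        return conclusion

    def trace(self, fact):
        """Reconstruct the full inferential ancestry of a fact."""
        rec = self.history.get(fact)
        if rec is None:
            return [fact]             # an axiom: no recorded causes
        lineage = []
        for p in rec.premises:
            lineage.extend(self.trace(p))
        lineage.append(fact)
        return lineage

engine = ProvenanceEngine(axioms={"A", "A->B", "B->C"})
engine.infer("modus_ponens", ("A", "A->B"), "B")
engine.infer("modus_ponens", ("B", "B->C"), "C")
print(engine.trace("C"))   # -> ['A', 'A->B', 'B', 'B->C', 'C']
```

Note that `history` grows with every inference the system ever performs, which is exactly the memory blow-up mentioned above; retaining only a bounded window of that history is the cheaper approximation being contemplated here.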
> 12. Finally, my objection to claims about the value of Complexity theory
> were summed up by one critic's comment that "Wolfram's 'A New Kind of
> Science' would have been fine if it had been called 'Fun With Graph
> Paper'". The field has produced a vast amount of hype, a small amount
> of interesting maths and very few useful predictive theories in other
> domains. Its proponents are quick to claim that their ideas apply to
> virtually everything, when in practice they seem to have been actually
> useful in rather few cases. This opinion is based on coverage in the
> science press and would be easy to change via evidence, but to date
> no-one has responded to Eliezer's challenge with real examples of
> complexity theory doing something useful. That said, general opinions
> such as this are a side issue; the specifics of AGI are the important
> thing.
In fact I did respond to that email of his, weeks ago, but I don't feel
like digging up my old response ;)
I am a big fan of complexity science and not a huge fan of Wolfram's book; I
agree with that criticism of the latter.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT