Re: Progress, and One road or many to AI?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Sep 11 2003 - 13:01:55 MDT


James Rogers wrote:
>
> (For those that don't know what "convergence" translates into, it
> essentially is the measure of the ability of a system to discover and
> efficiently encode complex patterns in an arbitrary system. The
> resource roll-off is the result of efficient high-order models
> automatically being generated as it is exposed to data, classic
> Kolmogorov compression. In a sense, it measures the ability of a given
> system to grok the essence of another system it is modeling in some
> finite amount of space, in a very pure mathematical fashion. In the
> case of my software, actual performance is now very close to the
> theoretical limit in this regard.)
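
If I had to guess at the math myself, I'd cash "classic Kolmogorov
compression" out as a two-part code - total bits = bits(model) +
bits(data given model) - with the source entropy as the floor the
per-symbol cost rolls off toward as more data comes in. The toy Python
below is only my reading of it, not your formalism:

    import math
    from collections import Counter

    def two_part_code_length(data, alphabet_size=2):
        # Part 1: cost of stating the model (symbol frequencies),
        # roughly 0.5 * (k - 1) * log2(n) bits, MDL-style.
        n = len(data)
        model_bits = 0.5 * (alphabet_size - 1) * math.log2(max(n, 2))
        # Part 2: cost of the data under that model (empirical entropy).
        counts = Counter(data)
        data_bits = -sum(c * math.log2(c / n) for c in counts.values())
        return model_bits + data_bits

    # The fixed model cost gets amortized, so bits-per-symbol rolls off
    # toward the ~0.47 bit/symbol entropy of this 90/10 source.
    for n in (10, 100, 1000, 10000):
        sample = ("0" * 9 + "1") * (n // 10)
        print(n, two_part_code_length(sample) / n)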

James, can you briefly put down the actual math of the theoretical limit
you're talking about?

One of the things we AGI researchers need to worry about is that the
conceptual distance between ourselves and others is much greater than we
expect, even when we are trying to be aware of this problem and take it
into account. When I got a closer, more technical look at Novamente, it
was almost nothing like I'd imagined it from Ben Goertzel's other
descriptions - if I'd written a summary of what I thought was going on in
Novamente, I doubt that Ben Goertzel would have been able to recognize any
element of it as his own. Similarly, when Ben Goertzel is talking about
"Levels of Organization in General Intelligence" or Friendliness, I have
yet to see anything that I would recognize as similar to my own ideas if
it were not being attributed to "Eliezer Yudkowsky". It is just very hard
for us AGIfolk to constrain other AGIfolk to construct mental imagery that
is anything like what is in our own heads. For example, the term "mental
imagery" has a meaning to me that is probably totally unlike what it has
to Ben Goertzel.

I don't have any idea of what's going on in your AI, except that it has
something to do with Solomonoff induction. (My internal model
distribution of possible AIs is inadequately constrained by your
environmental inputs so far.) I visualize you generating short programs
and investigating the match of their predicted probability distributions
to the environmental input. But which theoretical limit on convergence
are you talking about?
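
For concreteness, the kind of thing I'm visualizing looks like the toy
sketch below: candidate programs weighted by 2^(-length), kept only if
they reproduce the observed bits, the survivors mixed to predict the
next bit. Real Solomonoff induction runs over all programs and is
uncomputable, and I have no idea how close this toy hypothesis class
comes to whatever your system actually does:

    def predict_next_bit(observed, max_period=8):
        # Toy "program" class: repeat a fixed bit pattern forever.
        # Prior weight of a pattern is 2^(-length): shorter = likelier.
        posterior = {}
        for period in range(1, max_period + 1):
            for code in range(2 ** period):
                pattern = format(code, f"0{period}b")
                output = (pattern * (len(observed) // period + 1))[:len(observed)]
                if output == observed:          # program reproduces the data
                    posterior[pattern] = 2.0 ** (-period)
        total = sum(posterior.values())
        # Mix the surviving programs to get P(next bit is '1').
        return sum(w for pat, w in posterior.items()
                   if pat[len(observed) % len(pat)] == "1") / total

    print(predict_next_bit("010101010101"))   # prints 0.0: '0' predicted next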

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

