From: Ben Goertzel (firstname.lastname@example.org)
Date: Wed Feb 22 2006 - 13:55:05 MST
> > What I suggested is that it is impossible for a program/computer
> > combination to recursively self-improve its hardware and software in
> > such a way that it can both
> > a) increase dramatically the algorithmic information of its software
> I see no reason why this is desirable. It is desirable to improve the
> veracity of one's knowledge base in ways that accurately mirror external
> reality, which may increase algorithmic information. Recursive
> self-improvement to make your internal algorithms smarter and more
> efficient at manipulating knowledge about the external world, does not
> require increasing algorithmic information. I think you are confusing
> technical algorithmic complexity with the intuitive sense of complexity.
I'm not confusing these senses of complexity.
I conjecture that achieving powerful general intelligence within
plausible computational resources involves integrating a variety of
components at differing levels of specialization. (This is
different from AIXItl or Gödel machine type architectures, which are
very simple but do not operate well within plausible computational
resources.) If this is true then making a vastly more intelligent AI
may involve integrating a large number of different components, at
various levels of specialization. In this case the "knowledge about
the external world" is present in the AI system not only explicitly as
data but implicitly in the detailed design of the specialized
components. The more specialized components there are, the greater the
algorithmic information of the overall system.
> A cellular automaton could give rise to a whole universe full of
> sentient creatures and superintelligent AIs, while still having almost
> trivial algorithmic complexity.
Yes, but it cannot do so within a brief period of time. A CA-universe
is more comparable to AIXItl than to a practical AGI architecture
(though of course both comparisons are loose ones): it gives rise to
interesting things via a kind of crude enumerative exploration of a
vast number of possibilities, until it hits on something good. (Yes,
there's more to it than that, but that is a significant aspect.)
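To make this concrete, here is a toy sketch (illustrative only; Rule 110 is chosen arbitrarily as a well-known example): an elementary cellular automaton whose entire "physics" fits in one byte, yet which generates intricate structure from a one-cell initial condition. Low algorithmic information, rich surface behavior.

```python
# Sketch: an elementary cellular automaton (Rule 110), a rule specifiable
# in 8 bits, evolving from a trivial initial condition. The program's
# algorithmic information is tiny; the surface complexity of its output
# is not.

RULE = 110  # the entire update rule fits in one byte

def step(cells):
    """Apply the rule to every cell, with wrap-around boundaries."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=32):
    """Evolve from a low-information initial condition: a single 1."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```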
To create a system giving rise to a universe full of sentient
creatures and superintelligent AIs within a brief period of time, I
conjecture that one would need to build a system with a pretty high
algorithmic information, not at all like a simple CA rule with a
low-algorithmic-information initial condition. This is related of
course to my conjecture that to achieve general intelligence within
feasible space and time constraints one needs relatively
high-algorithmic-information systems integrating various components at
multiple levels of specialization.
>An AI can do extraordinarily complex
> things in the service of its goals, which to humans would look like
> immense complexity on the surface, without increasing its algorithmic
> complexity beyond that of the search which looked for good methods to
> accomplish its goals. Anything you can do which predictably makes a
> program better at serving the utility function has no greater
> *algorithmic* complexity than the criterion you used to decide that the
> program would be better, even though it may look immensely more complex,
> and be immensely more efficient. Like a cellular automaton producing a
> universe, or natural selection coughing up a human, such a process can
> produce an AI of vast surface complexity and fine-tuned efficiency
> without increasing algorithmic complexity in the technical sense.
Yes, but what you are alluding to is an intelligence process that is
like AIXItl or evolutionary learning in that it is a simple algorithm
carrying out a sort of semi-exhaustive, heuristically-guided program search.
I think that AGIs need to have this aspect, but they also need a
whole bunch of more specialized and space-intensive code, in order
that their intelligent behavior may have reasonable time-complexity.
None of your comments are addressing the issue of tradeoffs between
space and time complexity, which I believe are conceptually central
to this discussion.
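A toy illustration of the kind of tradeoff I mean (the Fibonacci example is of course only illustrative): the two functions below compute the same values, but the second stores all intermediate results, spending space (analogous to specialized, pre-built structure) to avoid exponential recomputation time.

```python
from functools import lru_cache

# Sketch: trading space for time. The compact, minimal-memory recursion
# takes exponential time; caching intermediate results (more stored
# structure, i.e. more space) makes the same computation linear-time.

def fib_slow(n):
    """Minimal space, exponential time."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Stores every intermediate result: linear time, linear space."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)
```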
> Furthermore, if we imagine - I don't think this way, but it's the sort
> of thing you keep suggesting -
Actually, this does not sound to me like the sort of thing I keep suggesting...
>that an outside source presents a
> Friendly design plus a proof that the design is Friendly; then the AI
> can verify the proof and the outside Friendly design can have greater
> algorithmic complexity than the original AI. The original AI doesn't
> even need to keep the whole design or the whole proof in RAM, so long as
> it can keep all the intermediate results necessary to verify that each
> proof step is valid.
I will need to think about this point more; it's an interesting one.
On the face of it, it seems to me that there might be some proofs that
would lend themselves to this kind of incremental understanding, and
others that would not (the difference being the algorithmic
information of the set of intermediate results needed to be stored at
each step).
However, as you note, this is not the most likely-sounding option.
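A minimal sketch of what such incremental verification might look like (a toy model of my own, with a made-up rule format rather than any real logic): the checker streams through proof steps one at a time, holding in memory only the set of statements established so far, never the whole proof.

```python
# Toy sketch of incremental proof checking (illustrative; not a real
# logic). A "proof" is a stream of steps, each deriving a statement from
# previously established ones via a named rule. The checker holds only
# the set of established statements, not the entire proof.

def check_proof(axioms, steps, rules):
    """Verify a streamed proof; return the set of established statements.

    axioms: set of statements assumed true
    steps:  iterable of (conclusion, premises, rule_name) tuples
    rules:  dict mapping rule_name -> predicate(conclusion, premises)
    """
    established = set(axioms)
    for conclusion, premises, rule_name in steps:
        if not all(p in established for p in premises):
            raise ValueError(f"unproved premise in step deriving {conclusion!r}")
        if not rules[rule_name](conclusion, premises):
            raise ValueError(f"rule {rule_name!r} rejects {conclusion!r}")
        established.add(conclusion)
    return established

# A trivially simple rule: "and_intro" glues premises together with ' & '.
RULES = {"and_intro": lambda c, ps: c == " & ".join(ps)}
```

For example, `check_proof({"A", "B"}, [("A & B", ("A", "B"), "and_intro")], RULES)` succeeds, while a step citing an unestablished premise raises an error; the memory needed is bounded by the intermediate results, not the proof length.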
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT