Re: ESSAY: Program length, Omega and Friendliness

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 22 2006 - 12:45:03 MST


Ben Goertzel wrote:
>
> What I suggested is that it is impossible for a program/computer
> combination to recursively self-improve its hardware and software in
> such a way that it can both
>
> a) increase dramatically the algorithmic information of its software

I see no reason why this is desirable. It is desirable to improve the
veracity of one's knowledge base in ways that accurately mirror external
reality, which may increase algorithmic information. Recursive
self-improvement to make your internal algorithms smarter and more
efficient at manipulating knowledge about the external world does not
require increasing algorithmic information. I think you are confusing
technical algorithmic complexity with the intuitive sense of complexity.

A cellular automaton could give rise to a whole universe full of
sentient creatures and superintelligent AIs, while still having almost
trivial algorithmic complexity. An AI can do extraordinarily complex
things in the service of its goals, which to humans would look like
immense complexity on the surface, without increasing its algorithmic
complexity beyond that of the search which looked for good methods to
accomplish its goals. Anything you can do that predictably makes a
program better at serving the utility function has no greater
*algorithmic* complexity than the search plus the criterion you used to
decide that the program would be better, even though it may look
immensely more complex and be immensely more efficient. Like a cellular
automaton producing a universe, or natural selection coughing up a
human, such a process can produce an AI of vast surface complexity and
fine-tuned efficiency without increasing algorithmic complexity in the
technical sense.
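Here is a minimal sketch of that bound in Python. The toy expression
language and the particular scoring criterion are illustrative
assumptions, not anything essential; the point is only that rerunning a
short deterministic searcher reproduces its output exactly, so roughly
K(output) <= len(searcher) + c, no matter how intricate the output looks.

    # Sketch: a short deterministic search whose winning expression may
    # look complicated, but whose algorithmic complexity is bounded by
    # the length of this very script, since rerunning the script
    # reproduces the output exactly.

    from itertools import product

    def expressions(depth):
        """Enumerate arithmetic expressions in x built from +, *, and 1."""
        yield "x"
        yield "1"
        if depth == 0:
            return
        subs = list(expressions(depth - 1))
        for left, right in product(subs, repeat=2):
            yield "(" + left + "+" + right + ")"
            yield "(" + left + "*" + right + ")"

    def score(expr, target, points):
        """Fixed criterion: total error of expr against target on points."""
        f = eval("lambda x: " + expr)
        return sum(abs(f(x) - target(x)) for x in points)

    # The criterion is short and fixed; the search is exhaustive and
    # deterministic. Whatever expression wins, this script is already a
    # complete description of it.
    target = lambda x: x * x + 3 * x + 1
    best = min(expressions(3), key=lambda e: score(e, target, range(5)))
    print(best)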

> and
>
> b) prove, based on its initial hardware configuration (or any minor
> variation with a comparable amount of algorithmic information), that
> its far-future improved versions (with substantially greater
> algorithmic information) will be Friendly.

I don't see why future vastly improved versions need have greater
algorithmic information, aside from their improved knowledge base.

Furthermore, if we imagine - I don't think this way, but it's the sort
of thing you keep suggesting - that an outside source presents a
Friendly design plus a proof that the design is Friendly, then the AI
can verify the proof, and the outside Friendly design can have greater
algorithmic complexity than the original AI. The original AI doesn't
even need to keep the whole design or the whole proof in RAM, so long as
it can keep all the intermediate results necessary to verify that each
proof step is valid.
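As a minimal sketch of that verification scheme, assuming propositional
formulas with modus ponens as the only inference rule (the names and
encoding here are illustrative, nothing more):

    # Sketch: proof steps arrive from a stream, one at a time. The
    # checker keeps only the set of results established so far (the
    # "intermediate results"), never the whole proof at once.

    def verify(axioms, steps):
        """Accept a streamed proof in which every step is an axiom, an
        earlier result, or follows by modus ponens: from A and
        ('->', A, B), conclude B. Formulas are strings or such tuples."""
        known = set(axioms)           # intermediate results kept in memory
        for formula in steps:         # the full proof is never materialized
            if formula in known:
                continue              # restates an axiom or earlier result
            if any(("->", a, formula) in known for a in known):
                known.add(formula)    # derived by modus ponens
            else:
                return False          # step does not follow; reject
        return True

    axioms = {"P", ("->", "P", "Q"), ("->", "Q", "R")}
    print(verify(axioms, iter(["Q", "R"])))  # True: Q, then R, by modus ponens

Memory grows only with the set of verified results, not with the length
of the proof being checked.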

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

